Add SRM experiment #82
Open
denisfouchard wants to merge 3 commits into main from srm-experiment
# -*- coding: utf-8 -*-
"""
Hyperalignment-based prediction using Hugo Richard's FastSRM.
See article: https://doi.org/10.1162/imag_a_00032

==========================

In this tutorial, we show how to better predict new contrasts for a target
subject using corresponding contrasts from multiple source subjects. For this
purpose, we create a template to which we align the target subject, using
shared information. We then predict new images for the target and compare
them to a baseline.

We mostly rely on common Python packages and on nilearn to handle functional
data in a clean fashion.

To run this example, you must launch IPython via ``ipython --matplotlib`` in
a terminal, or use ``jupyter-notebook``.

.. contents:: **Contents**
    :local:
    :depth: 1

"""
###############################################################################
# Retrieve the data
# -----------------
# In this example we use the IBC dataset, which includes a large number of
# different contrast maps for 12 subjects.
# We download the images for subjects sub-01, sub-02, sub-04, sub-05, sub-06
# and sub-07 (or retrieve them if they were already downloaded).
# imgs is the list of paths to available statistical images for each subject,
# df is a dataframe with metadata about each of them, and mask is a binary
# image used to extract grey matter regions.
#

from fmralign.fetch_example_data import fetch_ibc_subjects_contrasts

imgs, df, mask_img = fetch_ibc_subjects_contrasts(
    ["sub-01", "sub-02", "sub-04", "sub-05", "sub-06", "sub-07"]
)
###############################################################################
# Define a masker
# ---------------
# We define a nilearn masker that will be used to handle relevant data.
# For more information, visit:
# https://nilearn.github.io/stable/manipulating_images/masker_objects.html
#

from nilearn.maskers import NiftiMasker

masker = NiftiMasker(mask_img=mask_img).fit()
###############################################################################
# Prepare the data
# ----------------
# For each subject, we will use two series of contrasts acquired during
# two independent sessions with a different phase encoding:
# antero-posterior (AP) or postero-anterior (PA).
#

# To infer a template for subjects sub-01 to sub-06 for both AP and PA data,
# we make a list of 4D niimgs from our list of lists of files containing
# 3D images.

from nilearn.image import concat_imgs

template_train = []
for i in range(6):
    template_train.append(concat_imgs(imgs[i]))
target_train = df[df.subject == "sub-07"][df.acquisition == "ap"].path.values

# For subject sub-07, we split the data into two folds:
# - target train: sub-07 AP contrasts, used to learn alignment to the template
# - target test: sub-07 PA contrasts, used as ground truth to score predictions
# We make a single 4D Niimg from our list of 3D filenames.

target_train = concat_imgs(target_train)
target_train_data = masker.transform(target_train)
target_test = df[df.subject == "sub-07"][df.acquisition == "pa"].path.values
###############################################################################
# Compute a baseline (average of subjects)
# ----------------------------------------
# We create an image with as many contrasts as any subject, where each
# contrast is the average of the corresponding maps over all train subjects.
#

import numpy as np

masked_imgs = [masker.transform(img) for img in template_train]
# Exclude the last subject (sub-07, the target) from the average.
average_img = np.mean(masked_imgs[:-1], axis=0)
average_subject = masker.inverse_transform(average_img)
###############################################################################
# Create a template from the training subjects
# --------------------------------------------
# We define an estimator using the class IdentifiableFastSRM:
# * Each subject's data is modeled as a subject-specific spatial basis
#   applied to a response that is shared across subjects.
# * Fitting the model jointly estimates one basis per subject and the
#   shared response, which plays the role of a functional template.
#

from nilearn.image import index_img
from fastsrm.identifiable_srm import IdentifiableFastSRM

# The first 53 contrasts (the AP session) are used for training.
train_index = range(53)

train_imgs = np.array(masked_imgs)[:, train_index, :]
train_imgs = [x.T for x in train_imgs]

srm = IdentifiableFastSRM(
    n_components=5,
    tol=1e-10,
    aggregate="mean",
    verbose=True,
    n_iter=10,
)

srm.fit(train_imgs)
W = np.array(srm.basis_list)
shared_train = np.array(srm.transform(train_imgs))
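# As a rough numpy sketch of the model fitted above (illustrative only, not
# the fastsrm implementation): each subject's data follows X_i = W_i @ S,
# with per-subject bases W_i of shape (n_voxels, n_components) and a shared
# response S. The shapes and the aggregation below are assumptions for the
# toy example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_components, n_samples = 20, 5, 53

# Shared response and per-subject orthonormal bases (illustrative
# stand-ins for the shared space and srm.basis_list).
S = rng.standard_normal((n_components, n_samples))
bases = [
    np.linalg.qr(rng.standard_normal((n_voxels, n_components)))[0]
    for _ in range(3)
]

# Under the SRM model, each subject's data is its basis applied to
# the shared response: X_i = W_i @ S.
subject_data = [W @ S for W in bases]

# Back-projecting each subject and averaging recovers the shared
# response (what transform(..., aggregate="mean") does conceptually).
shared_est = np.mean(
    [W.T @ X for W, X in zip(bases, subject_data)], axis=0
)
```

# In this noiseless toy case the recovery is exact because the bases have
# orthonormal columns; with real data the fit is only approximate.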
# %%
###############################################################################
# Predict new data for the left-out subject
# -----------------------------------------
# We project the test contrasts of the train subjects into the shared space
# (averaged over subjects), then map these shared responses into sub-07's
# voxel space using the basis learnt for sub-07 during the fit.
# For each train subject and for the template, the AP contrasts are indexed
# from 0 to 52, and the PA contrasts from 53 to 105.
#
test_index = range(53, 106)

test_imgs = np.array(masked_imgs)[:, test_index, :][:-1]
test_imgs = [x.T for x in test_imgs]

# We use the pseudo-inverse of the target subject's basis to project shared
# responses back into the target's voxel space.
backward = np.linalg.pinv(W[-1])

shared_test = np.array(srm.transform(test_imgs)).T
pred = shared_test.dot(backward)
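# The back-projection above can be sketched in plain numpy (a toy
# illustration, not the fastsrm API; W_target and shared are hypothetical
# stand-ins for W[-1] and the transformed test data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_components, n_contrasts = 30, 5, 53

# Toy orthonormal basis standing in for the target subject's basis W[-1].
W_target = np.linalg.qr(rng.standard_normal((n_voxels, n_components)))[0]
# Toy shared responses standing in for srm.transform(test_imgs).
shared = rng.standard_normal((n_components, n_contrasts))

# Map shared responses back into the target's voxel space, as above.
backward = np.linalg.pinv(W_target)   # (n_components, n_voxels)
pred = shared.T.dot(backward)         # (n_contrasts, n_voxels)

# Sanity check: projecting the prediction into shared space again
# recovers the shared responses (exact here because the columns of
# W_target are orthonormal, so pinv(W_target) equals W_target.T).
recovered = backward @ pred.T
```

# Each row of pred is one predicted contrast map, which is why
# masker.inverse_transform(pred) below yields a 4D image.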
# %%
prediction_from_template = masker.inverse_transform(pred)

# As a baseline prediction, let's just take the average of activations
# across the train subjects.
prediction_from_average = index_img(average_subject, test_index)

# %%

###############################################################################
# Score the baseline and the prediction
# -------------------------------------
# We use a utility scoring function to measure the voxelwise correlation
# between the prediction and the ground truth. That is, for each voxel, we
# measure the correlation between its profile of activation with and without
# alignment, to see whether alignment predicts a signal closer to the
# ground truth.
#

from fmralign.metrics import score_voxelwise

# Now we use this scoring function to compare the correlation with the real
# PA contrasts of sub-07 for predictions made from the group average and
# from the template.

average_score = masker.inverse_transform(
    score_voxelwise(target_test, prediction_from_average, masker, loss="corr")
)

template_score = masker.inverse_transform(
    score_voxelwise(target_test, prediction_from_template, masker, loss="corr")
)
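# Conceptually, the voxelwise correlation score computes, for each voxel,
# the Pearson correlation between its predicted and true activation
# profiles. A minimal numpy sketch of that idea (not the actual
# score_voxelwise implementation, whose details may differ):

```python
import numpy as np


def voxelwise_corr(truth, pred):
    """Pearson correlation per voxel between (n_contrasts, n_voxels) arrays."""
    t = truth - truth.mean(axis=0)
    p = pred - pred.mean(axis=0)
    num = (t * p).sum(axis=0)
    denom = np.sqrt((t ** 2).sum(axis=0) * (p ** 2).sum(axis=0))
    return num / denom


rng = np.random.default_rng(2)
truth = rng.standard_normal((53, 10))
noisy = truth + 0.1 * rng.standard_normal((53, 10))

perfect = voxelwise_corr(truth, truth)   # identical profiles: all ones
degraded = voxelwise_corr(truth, noisy)  # high, but below one
```

# A score near 1 at a voxel means the predicted activation profile tracks
# the ground truth closely across contrasts.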
###############################################################################
# Plotting the measures
# ---------------------
# Finally, we plot both scores.
#

from nilearn import plotting

baseline_display = plotting.plot_stat_map(
    average_score, display_mode="z", vmax=1, cut_coords=[-15, -5]
)
baseline_display.title("Group average correlation with ground truth")
display = plotting.plot_stat_map(
    template_score, display_mode="z", cut_coords=[-15, -5], vmax=1
)
display.title("SRM-based prediction correlation with ground truth")

###############################################################################
# We observe that creating a template and aligning a new subject to it yields
# a prediction that is better correlated with the ground truth than simply
# using the average activations of the train subjects.
#

plotting.show()

# %%
Is there a reason why we want to have `sub-07` in `template_train` at all, rather than only in `target_train` and `target_test`? I know we exclude the last entry later so that it's not actually included in, e.g., `average_img`, but that's one extra bit of overhead for the reader to keep in mind.

I don't understand? We need the data from sub-07 in the training to get its basis, so we need to concat the imgs at some point. Maybe you mean changing the name of this list to simply `imgs`? But that would be inconsistent with the INT experiment?

What would be your suggestion @emdupre?

Oh, I see, it would be to use `add_subjects`. @denisfouchard can you try and do that?