Iter on furo support
emdupre committed Nov 1, 2024
1 parent da2feaa commit 8ee611d
Showing 14 changed files with 128 additions and 35 deletions.
52 changes: 43 additions & 9 deletions AUTHORS.rst
Original file line number Diff line number Diff line change
@@ -15,17 +15,51 @@ This work is made available by a community of people, including:


Funding
--------
.......

This project has received funding from the European Unions Horizon
This project has received funding from the European Union's Horizon
2020 Research and Innovation Programme under Grant Agreement No. 785907
(HBP SGA2) and `Digiteo <https://digiteo.fr>`_
(HBP SGA2) and `Digiteo <https://digiteo.fr>`_.

.. _citing:

Citing Nilearn
--------------------
This package can be lightweight and efficient because it relies on great toolboxes (Nilearn, Scikit-Learn).
A huge amount of work goes into Nilearn.
In addition, the Parietal team needs the citations to the paper in order to justify paying a software engineer on the project.
To guarantee the future of the toolkit, if you use it, please cite `this paper <https://hal.inria.fr/hal-01093971>`_ .
Citing Nilearn and scikit-learn
-------------------------------

This package can be lightweight and efficient because it relies on great toolboxes,
notably :nilearn:`nilearn <>` and :sklearn:`scikit-learn <>`.

A huge amount of work goes into both of these packages.
Researchers who invest their time in developing and maintaining these packages
deserve recognition with citations.
In addition, the :inria:`INRIA MIND Project Team <mind>` needs citations
to the paper in order to justify paying a software engineer on the project.
To guarantee the future of the toolkit, if you use it, please cite it.

Citing nilearn
..............

If you want to cite Nilearn, we suggest you do it using our Zenodo DOI:

.. code-block:: bibtex

   @software{Nilearn,
     author  = {Nilearn contributors},
     license = {BSD-4-Clause},
     title   = {{nilearn}},
     url     = {https://github.com/nilearn/nilearn},
     doi     = {https://doi.org/10.5281/zenodo.8397156}
   }

Nilearn's Research Resource Identifier (RRID) is:
`RRID:SCR_001362 <https://scicrunch.org/resources/data/record/nlx_144509-1/SCR_001362/resolver?q=nilearn&l=nilearn&i=rrid:scr_001362>`_

There is no paper published about nilearn.
However, the patterns underlying the package have been described in:
`Machine learning for neuroimaging with scikit-learn <https://doi.org/10.3389/fninf.2014.00014>`_.


Citing scikit-learn
...................

To cite :sklearn:`scikit-learn <>`, please see the scikit-learn documentation on
:sklearn:`how to cite <about.html#citing-scikit-learn>`.
2 changes: 1 addition & 1 deletion doc/conf.py
@@ -262,7 +262,7 @@
extlinks = {
"sklearn": ("https://scikit-learn.org/stable/%s", None),
"inria": ("https://team.inria.fr/%s", None),
"nilearn-gh": ("https://github.com/nilearn/nilearn/%s", None),
"nilearn": ("https://nilearn.github.io/stable/%s", None),
"nipy": ("https://nipy.org/%s", None),
}
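For context, ``extlinks`` is the mapping consumed by Sphinx's ``sphinx.ext.extlinks`` extension: each role name maps to a pair of a base URL containing ``%s`` and an optional caption, and the role's target is substituted into the template. A minimal, self-contained sketch of that expansion (illustrative only, not Sphinx's actual implementation):

```python
# Sketch of how sphinx.ext.extlinks expands a role target into a URL.
# This mirrors the mapping added in doc/conf.py; the helper function
# below is illustrative and not part of Sphinx itself.
extlinks = {
    "sklearn": ("https://scikit-learn.org/stable/%s", None),
    "inria": ("https://team.inria.fr/%s", None),
    "nilearn": ("https://nilearn.github.io/stable/%s", None),
    "nipy": ("https://nipy.org/%s", None),
}


def expand_extlink(role, target):
    """Substitute the role's target into its base URL template."""
    base_url, _caption = extlinks[role]
    return base_url % target


# :sklearn:`how to cite <about.html#citing-scikit-learn>` resolves to:
print(expand_extlink("sklearn", "about.html#citing-scikit-learn"))
# :inria:`INRIA MIND Project Team <mind>` resolves to:
print(expand_extlink("inria", "mind"))
```

With the ``nilearn`` role now pointing at the stable documentation site and the GitHub repository moved to a separate ``nilearn-gh`` role, references such as ``:nilearn:`nilearn <>``` resolve under https://nilearn.github.io/stable/.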

1 change: 1 addition & 0 deletions doc/index.rst
@@ -143,6 +143,7 @@ Featured examples
:caption: Development

authors.rst
whats_new.rst
GitHub Repository <https://github.com/parietal-INRIA/fmralign>


57 changes: 57 additions & 0 deletions doc/references.bib
@@ -0,0 +1,57 @@
@inproceedings{Bazeille2019,
title = "Local Optimal Transport for Functional Brain Template Estimation",
author = "Bazeille, T and Richard, H and Janati, H and Thirion, B",
booktitle = "Information Processing in Medical Imaging",
publisher = "Springer International Publishing",
pages = "237--248",
year = 2019
}

@article{Bazeille2021,
title = "An empirical evaluation of functional alignment using
inter-subject decoding",
author = "Bazeille, Thomas and DuPre, Elizabeth and Richard, Hugo and
Poline, Jean-Baptiste and Thirion, Bertrand",
journal = "NeuroImage",
pages = 118683,
month = oct,
year = 2021,
}

@article{Richard2019,
title = "Fast shared response model for {fMRI} data",
author = "Richard, Hugo and Martin, Lucas and Pinho, Ana Luisa and
Pillow, Jonathan and Thirion, Bertrand",
journal = "arXiv [cs.CV]",
month = sep,
year = 2019,
archivePrefix = "arXiv",
primaryClass = "cs.CV"
}

@inproceedings{Thual2022,
title = "Aligning individual brains with Fused Unbalanced
Gromov-Wasserstein",
author = "Thual, Alexis and Tran, Huy and Zemskova, Tatiana and Courty,
Nicolas and Flamary, Rémi and Dehaene, Stanislas and Thirion,
Bertrand",
booktitle = "Advances in Neural Information Processing Systems 35 (NeurIPS
2022)",
month = jun,
year = 2022
}

@article{Ma2023,
author = {Feilong, Ma and Nastase, Samuel A. and Jiahui, Guo and Halchenko, Yaroslav O. and Gobbini, M. Ida and Haxby, James V.},
title = "{The individualized neural tuning model: Precise and generalizable cartography of functional architecture in individual brains}",
journal = {Imaging Neuroscience},
volume = {1},
pages = {1-34},
year = {2023},
month = {11},
abstract = "{Quantifying how brain functional architecture differs from person to person is a key challenge in human neuroscience. Current individualized models of brain functional organization are based on brain regions and networks, limiting their use in studying fine-grained vertex-level differences. In this work, we present the individualized neural tuning (INT) model, a fine-grained individualized model of brain functional organization. The INT model is designed to have vertex-level granularity, to capture both representational and topographic differences, and to model stimulus-general neural tuning. Through a series of analyses, we demonstrate that (a) our INT model provides a reliable individualized measure of fine-grained brain functional organization, (b) it accurately predicts individualized brain response patterns to new stimuli, and (c) for many benchmarks, it requires only 10–20 minutes of data for good performance. The high reliability, specificity, precision, and generalizability of our INT model affords new opportunities for building brain-based biomarkers based on naturalistic neuroimaging paradigms.}",
issn = {2837-6056},
doi = {10.1162/imag_a_00032},
url = {https://doi.org/10.1162/imag\_a\_00032},
eprint = {https://direct.mit.edu/imag/article-pdf/doi/10.1162/imag\_a\_00032/2183252/imag\_a\_00032.pdf},
}
5 changes: 5 additions & 0 deletions doc/soft_references.bib
@@ -0,0 +1,5 @@
@misc{CreativeCommons,
title = {Licence: Creative Commons Attribution Non-commercial Share Alike},
howpublished = {\url{http://creativecommons.org/licenses/by-nc-sa/2.5/}},
note = {Accessed: 2021-05-11}
}
5 changes: 5 additions & 0 deletions doc/whats_new.rst
@@ -1,4 +1,9 @@

.. _whats_new:

==========
What's new
==========

0.1
===
5 changes: 3 additions & 2 deletions examples/plot_alignment_methods_benchmark.py
@@ -80,6 +80,7 @@
# The training set, used to learn alignment from source subject toward target:
# * source train: AP contrasts for subject sub-01
# * target train: AP contrasts for subject sub-02
#

source_train = df[df.subject == "sub-01"][df.acquisition == "ap"].path.values
target_train = df[df.subject == "sub-02"][df.acquisition == "ap"].path.values
@@ -89,6 +90,7 @@
# the corresponding contrasts of subject sub-01
# * target test: PA contrasts for subject sub-02, used as a ground truth
# to score our predictions
#

source_test = df[df.subject == "sub-01"][df.acquisition == "pa"].path.values
target_test = df[df.subject == "sub-02"][df.acquisition == "pa"].path.values
@@ -155,5 +157,4 @@
# (no alignment). Ridge is the best performing method, followed by Optimal
# Transport. If you use Ridge though, be careful about the smooth predictions
# it yields.

# sphinx_gallery_thumbnail_number = 5
#
3 changes: 1 addition & 2 deletions examples/plot_alignment_simulated_2D_data.py
@@ -29,6 +29,7 @@
# * tick_params which are a set of plotting parameters
# * _plot2D_samples_mat to plot 2D alignment matrix as matching between distributions.
# * _plot_distributions_and_alignment to plot both the distributions and the matchings
#
# Now you can skip this part.

import math
@@ -231,5 +232,3 @@ def _plot_distributions_and_alignment(
X, Y, R=ot_alignment.R, title="Optimal Transport", thr=0.1
)
_plot_mixing_matrix(R=ot_alignment.R, title="Optimal Transport coupling")

# sphinx_gallery_thumbnail_number = 7
17 changes: 6 additions & 11 deletions examples/plot_int_alignment.py
@@ -1,10 +1,8 @@
# -*- coding: utf-8 -*-

"""
Co-smoothing Prediction using the IndividualNeuralTuning Model.
See article : https://doi.org/10.1162/imag_a_00032
==========================
Co-smoothing prediction using the Individual Neural Tuning Model
================================================================
In this tutorial, we show how to better predict new contrasts for a target
subject using the corresponding contrasts of many source subjects. For this purpose,
@@ -18,7 +16,7 @@
To run this example, you must launch IPython via ``ipython
--matplotlib`` in a terminal, or use ``jupyter-notebook``.
"""
# %%

import warnings

warnings.filterwarnings("ignore")
@@ -41,7 +39,7 @@

###############################################################################
# Define a masker
# -----------------
# ---------------
# We define a nilearn masker that will be used to handle relevant data.
# For more information, visit:
# https://nilearn.github.io/stable/manipulating_images/masker_objects.html
@@ -143,7 +141,7 @@
target_tuning = model._tuning_estimator(
shared_response=stimulus_[train_index], target=target_denoised_data
)
# %%

# We input the mapping image target_train in a list; we could have input more
# than one subject for which we'd want to predict: [train_1, train_2, ...]

@@ -166,7 +164,7 @@
# measure the correlation between its profile of activation without and with
# alignment, to see if alignment was able to predict a signal more similar to the ground truth.
#
# %%

from fmralign.metrics import score_voxelwise

# Now we use this scoring function to compare the correlation of predictions
@@ -188,7 +186,6 @@
# Finally we plot both scores
#

# %%
from nilearn import plotting

baseline_display = plotting.plot_stat_map(
@@ -207,5 +204,3 @@
#

plotting.show()

# %%
1 change: 1 addition & 0 deletions examples/plot_pairwise_alignment.py
@@ -133,3 +133,4 @@
###############################################################################
# We can see on the plot that after alignment the predictions made for one
# subject's data, informed by another subject, are greatly improved.
#
1 change: 0 additions & 1 deletion examples/plot_template_alignment.py
@@ -175,4 +175,3 @@
# We observe that creating a template and aligning a new subject to it yields
# a prediction that is better correlated with the ground truth than just using
# the average activations of subjects.
#
12 changes: 3 additions & 9 deletions examples/plot_toy_int_experiment.py
@@ -1,10 +1,8 @@
# -*- coding: utf-8 -*-

"""
Co-smoothing Prediction using the IndividualNeuralTuning Model.
See article : https://doi.org/10.1162/imag_a_00032
==========================
Individual Neural Tuning Model on simulated data
================================================
This is a toy experiment to test the Individual Tuning Model (INT) on two parts of the
data (or different runs) to assess the validity of tuning computation. This code has
@@ -15,7 +13,7 @@
To run this example, you must launch IPython via ``ipython
--matplotlib`` in a terminal, or use ``jupyter-notebook``.
"""
# %%

import numpy as np
import matplotlib.pyplot as plt
from fmralign.alignment_methods import IndividualizedNeuralTuning as INT
@@ -27,7 +25,6 @@
matrix_MDS,
)

# %%
###############################################################################
# Generate the data
# -----------------
@@ -69,7 +66,6 @@

parcels = [range(n_voxels)]

# %%
#############################################################################
# Create two independent instances of the model
# ---------------------------------------------
@@ -101,7 +97,6 @@

data_pred = int1.transform(data_run_2)

# %%
###############################################################################
# Plotting validation metrics
# ---------------------------
@@ -206,5 +201,4 @@
)

plt.tight_layout()
# %%
plt.show()
1 change: 1 addition & 0 deletions fmralign/pairwise_alignment.py
@@ -279,6 +279,7 @@ def __init__(
'all CPUs', -2 'all CPUs but one', and so on.
verbose: integer, optional (default = 0)
Indicate the level of verbosity. By default, nothing is printed.
"""
self.n_pieces = n_pieces
self.alignment_method = alignment_method
1 change: 1 addition & 0 deletions fmralign/template_alignment.py
@@ -331,6 +331,7 @@ def __init__(
'all CPUs', -2 'all CPUs but one', and so on.
verbose: integer, optional (default = 0)
Indicate the level of verbosity. By default, nothing is printed.
"""
self.template = None
self.template_history = None
