Releases: ME-ICA/tedana
24.0.2
Release Notes
Enhancements
- Generate metrics from external regressors using F stats by @handwerkerd in #1064.
  We have added a long-requested piece of functionality. It is now possible to provide a TSV file with time series the same length as the fMRI time series, fit those time series to each ICA component, and use that information in the process of deciding which components to accept or reject. For example, it is possible to use head motion regressors, cardiac and respiratory regressors, and region-of-interest-based regressors in models. With this functionality, it is now possible to combine the echo-based methods of `tedana` with other ICA-based denoising methods that depend on fitting to time series. Best practices for applying this new functionality are still a work in progress, but by adding it, any user can start testing and contributing to this effort without needing to edit code (a sketch of the expected regressor file follows this list). More information is available at https://tedana.readthedocs.build/en/stable/building_decision_trees.html#external-regressor-configuration
- Adding robustica option to ICA decomposition to achieve consistent results by @BahmanTahayori in #1013.
  `tedana` previously used a single iteration of `FastICA`. This works, but it means that the results are sensitive to initial seed selection. We have added an option to use RobustICA, which runs `FastICA` multiple times and outputs more stable components. As part of this process, if the PCA step defines X components, `robustica` often finds fewer stable components and will output fewer than X ICA components. A benefit of this is that our PCA-based component estimation methods sometimes fail. By giving `robustica` a plausible number of PCA components, it will find a stable number of ICA components, leading to a more stable and less arbitrary result (the underlying idea is illustrated in the sketch after this list). We are still working on improving the stability of the step that initially defines the number of PCA components.
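As a rough illustration of the external-regressor input described in the first item above, the sketch below builds a TSV with one column per regressor and one row per fMRI volume using pandas. The column names and values are hypothetical examples, not a required naming scheme; the only constraint described above is that each time series matches the length of the fMRI run.

```python
# Minimal sketch of building an external-regressors TSV for tedana.
# Column names below are hypothetical; any regressor with one value per fMRI
# volume (head motion, cardiac, respiratory, ROI time series, ...) can be used.
import numpy as np
import pandas as pd

n_vols = 200  # must match the number of volumes in the fMRI time series

rng = np.random.default_rng(0)
regressors = pd.DataFrame(
    {
        "trans_x": rng.normal(size=n_vols),   # e.g., head motion parameters
        "trans_y": rng.normal(size=n_vols),
        "rot_z": rng.normal(size=n_vols),
        "cardiac": rng.normal(size=n_vols),   # e.g., physiological recordings
        "csf_mean": rng.normal(size=n_vols),  # e.g., a region-of-interest time series
    }
)

# A tab-separated file with one row per volume.
regressors.to_csv("external_regressors.tsv", sep="\t", index=False)
```

Each column can then be fit to the ICA component time series and the resulting statistics referenced during component selection, as described in the linked documentation.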
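The second item above is motivated by FastICA's sensitivity to its random seed. The sketch below illustrates that idea only; it is not tedana's or robustica's implementation. It runs scikit-learn's `FastICA` with different seeds on synthetic data and compares components across runs by correlation, which is the kind of reproducibility check that RobustICA-style approaches build on.

```python
# Conceptual sketch (not tedana's or robustica's code): FastICA results depend
# on the random seed, motivating approaches that run FastICA many times and
# keep only components that reproduce across runs.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Synthetic (time x voxels) data mixed from 10 known sources.
time = np.linspace(0, 1, 200)
sources_true = np.column_stack([np.sin(2 * np.pi * k * time) for k in range(1, 11)])
data = sources_true @ rng.normal(size=(10, 50)) + 0.1 * rng.normal(size=(200, 50))

runs = []
for seed in range(5):
    ica = FastICA(n_components=10, random_state=seed, max_iter=1000)
    runs.append(ica.fit_transform(data))  # (time x components) for this seed

# Components from two runs "match" if each has a highly correlated partner in
# the other run; components that do not reproduce across seeds are unstable.
corr = np.abs(np.corrcoef(runs[0].T, runs[1].T))[:10, 10:]
print("Best matches between run 0 and run 1:", corr.max(axis=1).round(2))
```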
🐛 Bug Fixes
- Use nearest-neighbors interpolation in `plot_component` by @tsalo in #1098
- Filter out non-diagonal affine warning by @tsalo in #1103
- Refactor `gscontrol` module by @tsalo in #1086
Documentation Changes
- Cleaning up installation instructions by @handwerkerd in #1113
- Update list of multi-echo datasets by @tsalo in #1115
- Link to the open-multi-echo-data website by @tsalo in #1117
- documentation and resource updates by @handwerkerd in #1114
- docs: add BahmanTahayori as a contributor for code, design, and ideas by @allcontributors in #1123
- docs: add Lestropie as a contributor for code, design, and 2 more by @allcontributors in #1124
- Update figure-generating notebook by @tsalo in #1074
Other Changes
New Contributors
- @BahmanTahayori made their first contribution in #1013
- @Lestropie made their first contribution in #1013
Full Changelog: 24.0.1...24.0.2
24.0.1
Release Notes
Summary
- We use bokeh to generate interactive figures in our html report. bokeh v3.4.0 included some under-documented changes that caused tedana to crash and created other problems with interactivity. This is fixed in #1068.
- We create an adaptive mask that lists the number of "good echoes" in each voxel. This is used so that a voxel with a single good echo is retained, but steps that involve fitting values across echoes are limited to voxels with more good echoes. We noticed a few places where our description of how the adaptive mask was created didn't match what was actually happening. These were bugs that needed to be fixed, but our underlying thresholds are arbitrary. If these fixes adversely affect the retained voxels, please let us know and we can examine tweaking thresholds or adding other options.
- The adaptive mask uses percentiles across all voxel values as part of a threshold calculation. Only voxels within a general or user-supplied mask were supposed to be used, but we were applying that mask after calculating percentiles. This is now fixed (#1060). In practice this will slightly raise thresholds and reduce the number of voxels that survive the adaptive mask (both adaptive-mask fixes are illustrated in a simplified sketch after this list). An example of how this could alter the mask is here.
- The adaptive mask was intended to store the last good echo. That is, a voxel where echoes 1 and 3 are above threshold should be `3` in the adaptive mask even if echo 2 was below threshold. The code was just counting the number of good echoes and storing `2`. This is fixed in #1061.
- As part of the adaptive masking bug fixes, we also added another option for calculating an adaptive mask (#1057) that removes voxels where later echoes have larger values than earlier echoes. This might be useful for typical multi-echo data, but could cause problems if echoes are temporally close and there is a downward trend, but not every value decreases. As part of this addition, there is now a `--masktype` option where one can input `dropout` (our current and default method), `decay` (this new method), or both.
- We have also added several long-requested visualizations to our html report. Descriptions of the new visualizations are in our documentation. The additions include:
  - A visualization of the adaptive mask to show which voxels are retained in the optimally combined image and which are used for T2* and S0 fits as part of the ICA denoising process. #1073
  - Mean T2* and S0 fits are calculated and used as weights for the optimal combination of echoes. We were not calculating nor saving the fit quality. The root mean square error (RMSE) for these fits is now saved and presented in our report (a minimal RMSE sketch follows this list). #1044
- As part of previously added visualizations and these new visualizations, we belatedly realized we were using a parameter that was only added to `nilearn` in v0.10.3, so we've raised the minimum accepted version number for `nilearn`. #1094
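For readers who want to see what the two adaptive-mask fixes above amount to, here is a simplified numpy illustration. It is not tedana's code; the percentile value and signal values are made-up examples. It only shows the difference between masking before vs. after computing a percentile, and between counting good echoes vs. recording the last good echo.

```python
# Simplified illustration of the two adaptive-mask fixes (not tedana's code;
# the percentile and signal values are arbitrary examples).
import numpy as np

rng = np.random.default_rng(0)
brain_mask = rng.random(1000) > 0.3
first_echo = np.where(
    brain_mask,
    rng.gamma(shape=5.0, scale=100.0, size=1000),  # in-brain: higher signal
    rng.gamma(shape=1.0, scale=20.0, size=1000),   # background: low signal
)

# Fix in #1060: compute the percentile threshold only within the mask.
# Background voxels are mostly low-valued, so including them pulls the
# percentile down; restricting to the mask raises the threshold slightly.
thresh_all_voxels = np.percentile(first_echo, 33)
thresh_within_mask = np.percentile(first_echo[brain_mask], 33)

# Fix in #1061: store the last good echo rather than the number of good echoes.
good_echoes = np.array([True, False, True])            # echoes 1 and 3 pass, echo 2 does not
last_good_echo = np.nonzero(good_echoes)[0].max() + 1  # -> 3 (intended value)
n_good_echoes = int(good_echoes.sum())                 # -> 2 (what the old code stored)

print(thresh_all_voxels, thresh_within_mask, last_good_echo, n_good_echoes)
```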
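The RMSE mentioned in the fit-quality item above is simply the root mean square difference between the measured echo signals and the monoexponential decay prediction S(TE) = S0 · exp(−TE / T2*). A minimal numpy sketch with made-up echo times and fitted values (not tedana's code):

```python
# Minimal sketch of an RMSE for the monoexponential decay fit (not tedana's code).
import numpy as np

tes = np.array([14.0, 38.0, 62.0])        # echo times in ms (example values)
signal = np.array([900.0, 500.0, 310.0])  # measured signal in one voxel

s0, t2star = 1200.0, 40.0                 # hypothetical fitted parameters for that voxel
predicted = s0 * np.exp(-tes / t2star)

rmse = np.sqrt(np.mean((signal - predicted) ** 2))
print(round(rmse, 1))
```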
🐛 Bug Fixes
- Limit current adaptive mask method to brain mask by @tsalo in #1060
- Identify the last good echo in adaptive mask instead of sum of good echoes by @tsalo in #1061
- FIX load user-defined mask as expected by plot_adaptive_mask by @mvdoc in #1079
- Fix dynamic reports with bokeh 3.4.0 problems by @handwerkerd in #1068
- minimum nilearn to 0.10.3 by @handwerkerd in #1094
Changes
- Add `--masktype` option to control adaptive mask method by @tsalo in #1057
- Add adaptive mask plot to report by @tsalo in #1073
- Output RMSE map and time series for decay model fit by @tsalo in #1044
- [DOC] desc-optcomDenoised -> desc-denoised by @mvdoc in #1080
- docs: add mvdoc as a contributor for code, bug, and doc by @allcontributors in #1082
New Contributors
Full Changelog: 24.0.0...24.0.1
24.0.0
Release Notes
Summary
We have continued to make under-the-hood changes and improvements to documentation.
Several key changes may be noticeable to users.
- By default, tedana has been saving 4D volumes of the high-kappa components (`Accepted_bold.nii.gz`) and the low-kappa components (`Rejected_bold.nii.gz`), even though very few people use them and they take up a lot of space. These will now only be saved if the program is run with `--verbose`. Additionally, our final denoised time series was called `desc-optcomDenoised_bold.nii.gz`, and this created confusion. It is now called `desc-denoised_bold.nii.gz`. This will break pipelines that looked for a file with the previous name. #1033
- We noticed a small difference between the decision tree implemented in MEICA v2.5 and the tree we were calling `kundu`. We have renamed our existing tree `tedana_orig`, and there is now a `meica` tree that should match the MEICA method. In practice, the results will either be identical or `meica` will accept additional components. The additionally accepted components can have substantial variance and, upon visual inspection, usually looked like they should have been rejected. Therefore, we've kept the same default, but give both options to users. #952
- Different metrics, like kappa and rho, are calculated for each ICA component. While the code allowed for a range of different metrics, the list that was calculated when tedana was run was impossible to change without editing the code. The metrics that were already specified in the decision tree json files will now be the ones calculated. The actual metric calculations still need to be defined within the code, but this change makes it practical to add a range of additional metrics that can vary by decision tree. #969
- The `tedana_report.html` file now includes the mean T2* and S0 maps used in calculations (#1040), consistent orientations for all images of brain slices (#1045), version numbers for key python packages used during execution (#1014), and a properly rendered reference list (#1001).
Changes
- Remove accepted and rejected time series from the standard file outputs and change denoised time series name (updated) by @martaarbizu and @handwerkerd in #1033
- Generate metrics based on decision tree by @tsalo and @handwerkerd in #969
- Align with old meica by @handwerkerd and @tsalo in #952
- Add T2* and S0 figures by @tsalo in #1040
- Improve component plots by @tsalo in #1045
- Adding python library version numbers by @handwerkerd in #1014
- Fix Bibtex rendering in reports by @eurunuela @martaarbizu and @martinezeguiluz in #1001
- Fixed Inappropriate Logical Expression by @fazledyn-or in #1012
- Document decision tree steps in report and remove log_extra_report by @handwerkerd and @tsalo in #1043
- Fixing ica_reclassify CLI documentation typo by @handwerkerd in #1010
- [DOC] Replaced newsletter with Google Groups by @goodalse2019 in #1017
- Reformat with new version of black by @tsalo in #1020
- Add Dependabot config and switch Release Drafter to Changelog config by @tsalo in #1024
- Fix error on empty input file & pre-commit updated to black version 24.2.0 by @martinezeguiluz and @handwerkerd in #1031
- docs: add effigies as a contributor for code by @allcontributors in #1004
- docs: add bpinsard as a contributor for code by @allcontributors in #1003
- docs: add goodalse2019 as a contributor for doc, ideas, and question by @allcontributors in #1018
- docs: add martinezeguiluz as a contributor for code by @allcontributors in #1032
- docs: add martaarbizu as a contributor for code by @allcontributors in #1039
New Contributors
- @bpinsard made their first contribution in #1003
- @fazledyn-or made their first contribution in #1012
- @goodalse2019 made their first contribution in #1017
- @dependabot made their first contribution in #1025
- @martinezeguiluz made their first contribution in #1032
- @martaarbizu made their first contribution in #1039
Full Changelog: 23.0.2...24.0.0
23.0.2
Summary
These changes include a lot of documentation updates, logging of python and software versions in `tedana_report` (#747), fixing a bug where one could not specify PCA variance explained from the command line interface (#950), stricter code style rules along with pre-commit hooks, cleaning up code in several places, including places where we were unnecessarily using old versions of python modules (#998), and updating to allow `tedana` to run with python version 3.12 (#999).
What's Changed
- Add recommendations for distortion correction by @tsalo in #896
- tedpca CLI fix by @handwerkerd in #950
- logging MAPCA subsampling by @handwerkerd in #951
- docs: add handwerkerd as a contributor for code by @allcontributors in #954
- [FIX] Styling issues and make RTD build work by @eurunuela in #968
- Remove RefLGR from code and refs field from decision trees by @tsalo in #965
- Add pre-commit to automatically fix style issue before pushing commits by @eurunuela in #973
- Add section to reports that show system info, tedana call and version by @eurunuela in #747
- Multiple documentation updates by @handwerkerd in #948
- Add .pre-commit-config.yaml by @eurunuela in #983
- Add prefix to all output files by @tsalo in #963
- Make it clearer in CONTRIBUTING.md how devs can make pre-commit work by @eurunuela in #985
- Lint codebase with additional style restrictions by @tsalo in #970
- remove unnecessary copy of large data by @bpinsard in #995
- MNT: Uncap dependencies by @effigies in #998
- MNT: Test on 3.11 and 3.12 by @effigies in #999
New Contributors
Full Changelog: 23.0.1...23.0.2
23.0.1
Release Notes
Most of these changes were made for v23.0.0, but the package did not build for pip, so the descriptive release notes are stored with this version.
This release changes many internal aspects of the code, will make future improvements easier, and will hopefully make it easier for more people to understand their results and contribute. The denoising results should be identical. Right before releasing this new version, we released version 0.0.13, which is the last version of the older code.
User-facing changes
- Breaking change: `tedana` can no longer be used to manually change component classifications. A separate program, `ica_reclassify`, can be used for this. This makes it easier for programs like Rica to output a list of component numbers to change and to then change them with `ica_reclassify`. Internally, a massive portion of the `tedana` workflow code was a mess of conditional statements that were designed just so that this functionality could be retained within tedana. By separating out `ica_reclassify`, the `tedana` code is more comprehensible and adaptable.
- Breaking change: No components are classified as `ignored`. `Ignored` has long confused users. It was intended to identify components with such low variation that it was not worth deciding whether to lose a statistical degree of freedom by rejecting them. They were treated identically to `accepted` components. Now they are classified as `accepted` and tagged as `Low variance` or `Borderline Accept`. This `classification_tag` now appears in the html report of the results and in the component table file.
- Breaking change: In the component table file, `classification_tag` has replaced `rationale`. Since the tags use words and one can assign more than one tag to each component, they are both more informative and more flexible than the older `rationale` numerical codes.
- It is now possible to select different decision trees for component selection using the `--tree` option. The default tree is `kundu` and that should replicate the current outputs. We also include `minimal`, which is a simpler tree that is intended to provide more consistent results across a study, but it needs more testing and validation and may still change. Flow charts for these two options are here.
- Anyone can create their own decision tree. If one is using metrics that are already calculated, like `kappa` and `rho`, and doing greater/less-than comparisons, one can make a decision tree with a user-provided json file and the `--tree` option. More complex calculations might require editing the tedana python code. This change also means any metric that has one value per component can be used in a selection process. This makes it possible to combine the multi-echo metrics used in tedana with other selection metrics, such as correlations to head motion. The documentation includes instructions on building and understanding this component selection process.
- Additional files are saved which store key internal calculations and which steps changed the accept vs. reject classifications for each component. The documentation includes descriptions of the newly outputted files and their contents. These include:
  - A registry of all files outputted by tedana. This allows for multiple file naming methods and means that internal and external programs that want to interact with the tedana outputs just need to load this file (a generic loading sketch follows this list).
  - A file of all the metrics calculated across components, such as the `kappa` and `rho` elbow thresholds.
  - A decision tree file which records the exact decision tree that was run on the data and includes the metrics calculated and the component classifications changed in each step of the process.
  - A component status table that summarizes each component's classification at each step of the decision tree.
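Since the registry mentioned above is a plain JSON file, a generic way to use it might look like the sketch below. This is only an illustration: the registry's key names depend on the chosen file-naming convention, so nothing here assumes a specific entry name, and the output directory name is hypothetical.

```python
# Generic sketch of reading the tedana output registry. Key names vary with
# configuration, so this code only inspects whatever entries are present.
import json
from pathlib import Path

out_dir = Path("tedana_output")  # hypothetical tedana output directory
with open(out_dir / "desc-tedana_registry.json") as f:
    registry = json.load(f)

# List the registered outputs, then look up anything related to the mixing matrix.
for name, filename in registry.items():
    print(f"{name}: {filename}")

mixing_entries = {k: v for k, v in registry.items() if "mixing" in k.lower()}
print(mixing_entries)
```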
Under-the-hood changes
- The component classification process that designates components as “accepted” or “rejected” was completely rewritten so that every step in the process is modular and the inputs and outputs of every step are logged.
- Moved towards using the terminology of “Component Selection” rather than “Decision Tree” to refer to the code that's part of the selection process. “Decision Tree” is used more specifically to refer to the steps used to classify components.
- A `ComponentSelector` object was created to include common elements from the selection process, including the component table and information about what happens along every step of the decision tree. Additional information that will be stored in `ComponentSelector` and saved in files (as described above) includes `component_table`, `cross_component_metrics`, `component_status_table`, and `tree`.
- The new class is defined in `./selection/component_selector.py`, the functions that define each node of a decision tree are in `./selection/selection_nodes.py`, and some key common functions used by selection_nodes are in `./selection/selection_utils.py`.
- By convention, functions in selection_nodes.py that can change component classifications begin with `dec_` (for decision) and functions that calculate cross_component_metrics begin with `calc_`.
- A key function in selection_nodes.py is `dec_left_op_right`, which can be used to change classifications based on the intersection of 1-3 boolean statements (a simplified sketch of the idea follows this list). This means most of the decision tree consists of modular functions that calculate cross_component_metrics followed by tests of boolean conditional statements.
- When defining a decision tree, a list of `necessary_metrics` is required and, when a tree is executed, the `used_metrics` are saved. This information is both a good internal check and can potentially be used to calculate metrics as defined in a `tree` rather than separately specifying the metrics to calculate and the tree to use.
- `io.py` is now used to output a registry (default is `desc-tedana_registry.json`), which can be used by other programs to read in files generated by `tedana` (i.e., load the optimally combined time series and ICA mixing matrix from the output of tedana rather than needing to input the names of each file separately).
- Some terminology changes, such as using `component_table` instead of `comptable` in code.
- Integration tests now store testing data in `.testing_data_cache` and only download data if the data on OSF was updated more recently than the local data.
- Nearly 100% of the new code and 98% of all tedana code is covered by integration testing.
- Tedana python package management now uses pyproject.toml.
- Possible breaking change: The minimum python version is now 3.8 and the minimum pandas version is now 2.0 (this might cause problems if the same python environment is used for packages that require older versions of pandas).
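To make the `dec_left_op_right` idea described above concrete, here is a deliberately simplified, hypothetical node that changes classifications based on a single left-operator-right comparison over the component table. It is not tedana's actual function or signature; the parameter and classification names are illustrative only.

```python
# Hypothetical, simplified decision-tree node in the spirit of dec_left_op_right
# (this is not tedana's actual function or signature).
import operator
import pandas as pd

OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt, "<=": operator.le}

def dec_left_op_right_sketch(component_table, if_true, if_false, left, op, right):
    """Set classifications based on one boolean comparison of two metrics."""
    decide = OPS[op](component_table[left], component_table[right])
    component_table = component_table.copy()
    component_table.loc[decide, "classification"] = if_true
    component_table.loc[~decide, "classification"] = if_false
    return component_table

# Example: provisionally accept components where kappa exceeds rho.
comptable = pd.DataFrame(
    {"kappa": [80.0, 20.0, 55.0], "rho": [30.0, 60.0, 54.0],
     "classification": ["unclassified"] * 3}
)
print(dec_left_op_right_sketch(comptable, "provisionalaccept", "unclassified",
                               "kappa", ">", "rho"))
```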
Changes
- [REF] Decision Tree Modularization @jbteves @handwerkerd @n-reddy @marco7877 @tsalo in #756
- Update python-publish.yml by @tsalo in #945
Full Changelog: 0.0.13...23.0.1
23.0.1rc0
Release Notes
Version 23.0.0 was released, but did not build correctly for pip. This release fixed that issue.
What's Changed
Full Changelog: 23.0.0...23.0.1rc0
23.0.0
Release Notes
This release changes many internal aspects of the code, but there was a small bug that prevented it from building for pip. Since this version cannot be installed through pip, the descriptive release notes are included with version 23.0.1.
Changes
- [REF] Decision Tree Modularization (#756) @jbteves @handwerkerd @n-reddy @marco7877 @tsalo
Full Changelog: 0.0.13...23.0.0
0.0.13
Release Notes
This is the last release before refactoring of large portions of the code.
Breaking Changes
- Corrected a bug where the component classification process should have calculated a threshold on a sorted list of component variances, but it was calculated on an unsorted list. (#938)
- In v0.0.12 we changed the default method for selecting the number of components from MDL to AIC, but later realized this was only implemented when run through the python API, but not from the command line. Now AIC is the default for both. (#877)
Additional changes to highlight
- Optimization curves and additional info for the PCA dimensionality reduction step are saved to help users and developers identify problems with the step that identifies the number of components to use (#839)
- Added Python 3.10 compatibility (#818)
- Using BibTeX instead of duecredit for listing references so warnings from users not having duecredit installed will finally be gone (#875)
- Tedana python package management now uses setup.cfg (#874)
All changes since last stable release
- Generalize installation instructions to work with Windows in CONTRIBUTING by @aryangupta701 in #846
- [MAINT] Switch to setup.cfg-based configuration by @tsalo in #874
- [FIX] Add function to prep data for JSON serialization by @jbteves in #859
- [ENH, FIX] PCA variance enhancements and consistency improvements by @handwerkerd in #877
- Print optimal number of maPCA components and plot optimization curves by @eurunuela in #839
- [DOC] Add information about using tedana with fMRIPrep v21.0.0 by @tsalo in #847
- [REF] Replace duecredit with BibTeX by @tsalo in #875
- Update CONTRIBUTING.md by @jbteves in #885
- [REF] Suppresses divide by 0 warning by @jbteves in #786
- Add links to several multi-echo datasets by @tsalo in #895
- Add F-T2 and F-S0 maps to verbose outputs by @tsalo in #893
- [MAINT] Add 3.10 unit test, compatibilities in setup.cfg by @jbteves in #818
- [FIX] Use capital names in desc-ICAOrth_mixing.tsv columns by @pablosmig in #906
- [DOC] Add documentation page on denoising approaches by @tsalo in #823
- docs: add giadaan as a contributor for doc by @allcontributors in #916
- Add Nashiro dataset to documentation by @Kasambx in #912
- Fix example code in denoising documentation by @tsalo in #917
- Sorting varex for decision tree criterion I011 by @handwerkerd in #924
- [DOC] Remove Josh as maintainer by @jbteves in #928
- add pandas version check >= 1.5.2 and mod behavior by @pmolfese in #938
New Contributors
- @aryangupta701 made their first contribution in #846
- @pablosmig made their first contribution in #906
- @Kasambx made their first contribution in #912
- @pmolfese made their first contribution in #938
Full Changelog: 0.0.12...0.0.13
0.0.12
Summary
This would ordinarily not have been released, but an issue with one of our dependencies means that people cannot install tedana right now. The most notable change (which will potentially change your results!) is that PCA is now defaulting to the "aic" criterion rather than the "mdl" criterion.
What's Changed
- [DOC] Add JOSS badges by @tsalo in #815
- [FIX] Fixes broken component figures in report when there are more than 99 components by @manfredg in #824
- [DOC] Add manfredg as a contributor for code by @allcontributors in #825
- DOC: Use RST link for ME-ICA by @effigies in #832
- [DOC] Fixing a bunch of warnings & rendering issues in the documentation by @handwerkerd in #840
- [DOC] Replace mentions of Gitter with Mattermost by @tsalo in #842
- [FIX] The rationale column of comptable gets updated when no manacc is given by @eurunuela in #855
- Made AIC the default maPCA option by @eurunuela in #849
- [DOC] Improve logging of component table-based manual classification by @tsalo in #852
- [FIX] Add jinja2 version pin as workaround by @jbteves in #870
New Contributors
Full Changelog: 0.0.11...0.0.12
0.0.11
Release Notes
Tedana's 0.0.11 release includes a number of bug fixes and enhancements, and it's associated with publication of our Journal of Open Source Software (JOSS) paper! Beyond the JOSS paper, two major changes in this release are (1) outputs from the `tedana` and `t2smap` workflows are now BIDS compatible, and (2) we have overhauled how masking is performed in the `tedana` workflow, so that improved brain coverage is retained in the denoised data, while the necessary requirements for component classification are met.
🔧 Breaking changes
- The `tedana` and `t2smap` workflows now generate BIDS-compatible outputs, both in terms of file formats and file names.
- Within the `tedana` workflow, T2* estimation, optimal combination, and denoising are performed on a more liberal brain mask, while TE-dependence and component classification are performed on a reduced version of the mask, in order to retain the increased coverage made possible with multi-echo EPI.
- When running `tedana` on a user-provided mixing matrix, the order and signs of the components are no longer modified. This will not affect classification or the interactive reports, but the mixing matrix will be different.
✨ Enhancements
- `tedana` interactive reports now include carpet plots.
- The organization of the documentation site has been overhauled to be easier to navigate.
- We have added documentation about how to use `tedana` with fMRIPrep, along with a gist that should work on current versions of fMRIPrep.
- Metric calculation is now more modular, which will make it easier to debug and apply in other classification decision trees.
🐛 Bug fixes
- One component was not rendering in interactive reports, but this is fixed now.
- Inputs are now validated to ensure that multi-file inputs are not interpreted as single z-concatenated files.
Changes since last stable release
- [JOSS] Add accepted JOSS manuscript to `main` (#813) @tsalo
- [FIX] Check data type in `io.load_data` (#802) @tsalo
- [DOC] Fix link to developer guidelines in README (#797) @tsalo
- [FIX] Figures of components with index 0 get rendered now (#793) @eurunuela
- [DOC] Adds NIMH CMN video (#792) @jbteves
- [STY] Use `black` and `isort` to manage library code style (#758) @tsalo
- [DOC] Generalize preprocessing recommendations (#769) @tsalo
- [DOC] Add fMRIPrep collection information to FAQ (#773) @tsalo
- [DOC] Add link to EuskalIBUR dataset in documentation (#780) @tsalo
- [FIX] Add resources folder to package data (#772) @tsalo
- [ENH] Use different masking thresholds for denoising and classification (#736) @tsalo
- [DOC, MAINT] Updated dependency version numbers (#763) @handwerkerd
- [REF] Move logger management to new functions (#750) @tsalo
- [FIX] Ignore non-significant kappa elbow when no non-significant kappa values exist (#760) @tsalo
- [ENH] Coerce images to 32-bit (#759) @jbteves
- [ENH] Add carpet plot to outputs (#696) @tsalo
- [FIX] Correct manacc documentation and check for associated inputs (#754) @tsalo
- [DOC] Reorganize documentation (#740) @tsalo
- [REF] Do not modify mixing matrix with sign-flipping (#749) @tsalo
- [REF] Eliminate component sorting from metric calculation (#741) @tsalo
- [FIX] Update apt in CircleCI (#746) @notZaki
- [DOC] Update resource page with dataset and link to Dash app visualizations (#745) @jsheunis
- [DOC] Clarify communication pathways (#742) @tsalo
- [FIX] Disable report logging during ICA restart loop (#743) @tsalo
- [REF] Replace metric dependency dictionaries with json file (#739) @tsalo
- [FIX] Add references back into the HTML report (#737) @tsalo
- [ENH] Allows iterative clustering (#732) @jbteves
- [REF] Modularize metric calculation (#591) @tsalo
- Rename sphinx functions to fix building error for docs (#727) @eurunuela
- [ENH] Generate BIDS Derivatives-compatible outputs (#691) @tsalo