A workflow manager for curating MRI and tabular data and standardized processing.
Pull-Organize-Process-Push-Yey!
Note: This is a nimhans-YLO dataset-specific fork.
- Project
  - Create a project dir on your local machine: `mkdir /home/<user>/projects/<my_project>`
  - Create `containers`, `code`, and `data` dirs inside your project dir (see the sketch below).
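A minimal shell sketch of this project-level setup, assuming the `/home/<user>/projects/<my_project>` layout used throughout this README:

```bash
# create the project dir together with its three top-level subdirs
mkdir -p /home/<user>/projects/<my_project>/{containers,code,data}
```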
- Containers (Singularity)
  - Install Singularity.
  - Download containers (e.g. Heudiconv) for the pipelines used in this workflow inside the `containers` dir. For example:

    ```bash
    cd /home/<user>/projects/<my_project>/containers
    singularity pull docker://nipy/heudiconv:0.13.1
    singularity pull docker://nipreps/fmriprep:23.1.3
    ```
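When pulling from a `docker://` URI, `singularity pull` writes an image named `<name>_<tag>.sif` by default, so a quick sanity check could be:

```bash
# confirm the images landed in the containers dir
ls /home/<user>/projects/<my_project>/containers/*.sif
# expected (default naming): fmriprep_23.1.3.sif  heudiconv_0.13.1.sif
```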
- Code
  - Change dir to `code`: `cd /home/<user>/projects/<my_project>/code/`
  - Create a new venv: `python3 -m venv nipoppy_env`
  - Activate your env: `source nipoppy_env/bin/activate`
  - Clone this repo: `git clone https://github.com/neurodatascience/nipoppy-nimhans_YLO.git`
  - Install python dependencies from inside the cloned repo: `pip install -e .` (the full sequence is sketched below)
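Putting those steps together (a sketch; cloning into a `nipoppy` dir is an assumption made here so that the path matches the `code/nipoppy` run steps further down):

```bash
cd /home/<user>/projects/<my_project>/code/
python3 -m venv nipoppy_env          # create the virtual env
source nipoppy_env/bin/activate      # activate it
# clone into "nipoppy" so later steps can use code/nipoppy
git clone https://github.com/neurodatascience/nipoppy-nimhans_YLO.git nipoppy
cd nipoppy
pip install -e .                     # install this repo in editable mode
```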
- Data
  - Create the nipoppy dataset tree: `tree.py /home/<user>/projects/<my_project>/data/<study_name>`
  - Create and populate `<study_name>/proc/global_configs.json`
  - Copy your participant-level dicom dirs (e.g. `MNI001`, `MNI002`, ...) into `<study_name>/scratch/raw_dicom/` (see the sketch below).
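To illustrate the last step, assuming your scanner exports live under a hypothetical `/data/dicom_dumps/` (any source location works):

```bash
STUDY=/home/<user>/projects/<my_project>/data/<study_name>
# one participant-level dicom dir per subject
cp -r /data/dicom_dumps/MNI001 "$STUDY"/scratch/raw_dicom/
cp -r /data/dicom_dumps/MNI002 "$STUDY"/scratch/raw_dicom/
```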
- RedCap: configure RedCap access. The RedCap query will be used to generate the manifest and "bagels" for the tabular data dashboard.
  - Create and save the query in `<study_name>/proc/.redcap.json` (an example API call is sketched below).
    - See the sample redcap.json: `workflow/tabular/sample_redcap.json`
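The fields expected in `.redcap.json` are defined by the sample file above; for context, a RedCap report export over the standard RedCap API looks roughly like this (URL, token, and report id are placeholders; the real values are what you store in `.redcap.json`):

```bash
# export a RedCap report as JSON (all values below are placeholders)
curl -X POST https://redcap.<your-institution>.org/api/ \
  -d token=<your_api_token> \
  -d content=report \
  -d report_id=<report_id> \
  -d format=json
```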
- Change dir to the nipoppy code: `cd /home/<user>/projects/<my_project>/code/nipoppy`
- Activate your env (if not already active): `source nipoppy_env/bin/activate`
- Run nipoppy: `python nipoppy.py --global_config <> --session_id <> --n_jobs <>` (a concrete example follows below)
  - This will run the "workflows" listed inside `global_configs.json`. Currently the "generate_manifest", "dicom_org", and "bids_conv" workflows are supported.
  - The "generate_manifest" workflow will update the current list of participants and also generate bagels from the RedCap report.
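For instance, a filled-in invocation might look like this (the session id and job count are illustrative values):

```bash
python nipoppy.py \
  --global_config /home/<user>/projects/<my_project>/data/<study_name>/proc/global_configs.json \
  --session_id 01 \
  --n_jobs 4
```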
- Dashboard: Upload the bagel, i.e. the `<study_name>/tabular/bagel.csv` file, to the Dashboard to visualize and query current clinical data.
- Run processing pipelines (e.g. mriqc, fmriprep, tractoflow): Currently this is set up to run either a single subject locally or batch jobs on a cluster. Support for local batch jobs will be added soon (a single-subject fMRIPrep sketch follows below).
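As one example of a single-subject local run, the fMRIPrep container pulled earlier can be invoked directly on the BIDS output. This sketch uses fMRIPrep's standard BIDS-Apps CLI rather than this repo's own runner scripts; bind paths, the participant label, and the FreeSurfer license location are all placeholders:

```bash
STUDY=/home/<user>/projects/<my_project>/data/<study_name>
singularity run --cleanenv -B "$STUDY":"$STUDY" \
  /home/<user>/projects/<my_project>/containers/fmriprep_23.1.3.sif \
  "$STUDY"/bids "$STUDY"/derivatives/fmriprep participant \
  --participant-label MNI001 \
  --fs-license-file "$STUDY"/proc/fs_license.txt   # license path is an assumption
```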
- `<study_name>/dicom`: Participant-level dirs with symlinks to the dicom files in the raw_dicom dir.
  - Note: dicoms that are unreadable or contain derived (i.e. scanner-processed) scans will be skipped and listed in `<study_name>/scratch/logs`.
- `<study_name>/bids`: BIDS dataset comprising all the modalities in Nifti format (i.e. nii.gz and sidecar json).
- `<study_name>/derivatives/<proc_pipe>`: Output from processing pipelines.