Bring code to this repository for better reproducibility (#16)
* Support n_neutral_labels
* Register all subjects to PAM50
* Fix bugs related to subject subfolders
* ⚙️ chore(reg2pam50): resample image and segmentation before processing slices + fix naming
* Add script to get spine-generic data via git-annex
* Copy with `-u` instead of `-n` + check for warp_template2anat
* Add parallel processing
* Add seg_manual_fix_3d_slicer.py script
* Multiple updates
* Add color map for 3D Slicer
* Update seg_manual_fix_3d_slicer script:
  - Refactored seg_manual_fix_3d_slicer.py script
  - Added support for opening the seg-manual file if it exists
  - Renamed existing segmentation file to .bkp.nii.gz if it already exists
  - Added functionality to create a JSON sidecar with meta information
  - Renamed existing JSON sidecar file to .bkp.json if it already exists
* Modify load_files function to accept an optional seg_suffix parameter
* Add remove_warp_outliers script
* Add script to convert .mha to .nii.gz
* Add script to map segmentation labels
* Add spider_labels_map.json
* Add generate_seg_jpg_nnunet.py
* Update default paths and folder names
* Update git-annex installation command
* Add make_nnunet_dataset.py
* Change default values of the data folder and the PAM50 segmentation file
* Add mrspineseg_labels_map.json
* Add nnunet_labels_map.json
* Add support for customizing the output suffix
* Fix T12-L1 IVD from 207 to 42
* Remove default compression level
* Rename output directory
* Skip when the output files already exist; fix skipping when there is no T12-L1 IVD
* Save combined JPG images from NIfTI image and seg
* Generalize and multithread the map_labels script
* Add fix_csf_label script
* Add script to generate augmented images + segs
* Add script to generate segmented labels from initial segmentation
* Refactor generate_seg_jpg_nnunet.py script to improve readability and maintainability
* Move script for generating segment JPG images
* Add scripts for random disc labels for training data and for editing the seg background: "generate_random_disc_labels.py", "generate_random_disc_labels_channels.py", "put_in_background.py"
* Remove some non-working strategies for training
* Add nnUNet testing and training scripts
* Fix bug in RandomNoise calculation; remove unused imports; default _0000 for images
* Set default number of generated to 7; change augmentation randomness
* Refactor input/output folder definitions in script
* Set default image suffix to '_0000' in segmentation script
* Remove outdated scripts
* Enhance mha2nii for bulk conversion with parallelism
* Repo reorganization
* Create Python package totalsegmri
* Changes to support the Python package
* Updated .gitignore to exclude the .vscode directory
* Fix label_map paths
* Fix bug in image cropping
* Added tqdm to requirements
* Update README.md: add "(private dataset)" to whole-spine (Co-authored-by: Nathan Molinier <[email protected]>)
* Update README for data path and script execution
* Update SPIDER dataset path in preparation script
* Enhance verbose logging with script names
* Enhance training script output verbosity
* Convert CRLF to LF
* Added argparse dependency to dirpath utility
* Balanced dataset by duplicating instances
* Updated README and scripts for nnUNetv2 integration:
  - Separated dataset preparation and training instructions in the README for clarity, specifying use of the nnUNetv2 structure.
  - Clarified the prerequisite of having a trained model before running inference in the README.
  - Fixed the output directory variable assignment in the inference script to correctly use a second parameter.
  - Enhanced the inference script to handle missing `_0000` suffixes and to support new postprocessing steps.
  - Added a new dataset preparation script (`prepare_nnunet_datasets.sh`) to set up the data structure for nnUNetv2.
  - Removed dataset preparation steps from the training script (`train_nnunet.sh`), focusing it solely on model training as per the new separation of concerns.

  The changes improve the accuracy, usability, and maintainability of TotalSegMRI's implementation with nnUNetv2, facilitating better segmentation results and a smoother experience for users following the updated instructions.
* Update README.md
* Refactor argument flags in MRI utils scripts. Standardized the input directory flag across the MRI utility scripts as '-s' instead of the previous '-i', making the script interfaces more consistent and the tools for MRI data preparation and processing easier to understand and use. Adjusted scripts include those for generating sequential labels, mapping labels, and fixing CSF labels. Compatibility with existing conventions for the subject subdirectory flag has been maintained by switching it from '-s' to '-u'.
* Added a script that processes NIfTI segmentation files and retains only the largest connected component for each label
* Ignore non-critical warnings
* Update dataset prep and training workflow. Refactored the dataset preparation steps, now sourcing the new `get_spine_generic_datasets.sh` script to fetch specific datasets from the Spine Generic Project repository. This update clarifies the preparation of SPIDER datasets in the BIDS structure. Revised the training and inference instructions in `README.md` to correspond with the new dataset structure, with clear directives for model training and running inference. Removed unnecessary code from the `prepare_spider_bids_datasets.sh` script, further streamlining the process. These changes make the data setup more intuitive and maintainable, enabling easier replication of the research environment. Related to issue #4567.
* Rename multi-subject and single-subject dataset zip files to include `_PAM50_seg`
* Refactor label generation utilities. Removed the redundant `pairs_dict` function from both generate_labels_sequential.py and generate_largest_labels.py, as it was no longer used in the codebase. Updated generate_largest_labels.py for clarity: renamed the function generate_labels to generate_largest_labels and updated its references to match the new name, ensuring consistency with the module's purpose. Removed unused imports to streamline dependencies and keep the code clean and efficient.
* Remove big binaries from the main repository
* Update README with mkdir flag enhancement. Added the '-p' flag to the 'mkdir' command in the README instructions to ensure the creation of nested directories for the SPIDER dataset. This prevents errors when users attempt to create subdirectories under a non-existent path.

---------

Co-authored-by: Nathan Molinier <[email protected]>
Co-authored-by: Nathan Molinier <[email protected]>
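The seg_manual_fix_3d_slicer backup behavior described above (renaming an existing segmentation to `.bkp.nii.gz` before saving a new one, writing a JSON sidecar with meta information, and renaming an existing sidecar to `.bkp.json`) can be sketched roughly as follows. This is an illustrative sketch only: the function name `backup_and_write_sidecar` and the metadata keys are invented here, not the script's actual API.

```python
import json
from pathlib import Path


def backup_and_write_sidecar(seg_path: Path, meta: dict) -> None:
    """Back up an existing segmentation so a new one can be saved, and
    write a JSON sidecar with meta information.

    - An existing segmentation at seg_path is renamed to *.bkp.nii.gz.
    - An existing JSON sidecar is renamed to *.bkp.json.
    - A fresh sidecar is then written next to the segmentation.
    Saving the new segmentation itself is left to the caller.
    """
    if seg_path.exists():
        seg_path.rename(seg_path.with_name(
            seg_path.name.replace(".nii.gz", ".bkp.nii.gz")))
    sidecar = seg_path.with_name(seg_path.name.replace(".nii.gz", ".json"))
    if sidecar.exists():
        sidecar.rename(sidecar.with_name(
            sidecar.name.replace(".json", ".bkp.json")))
    sidecar.write_text(json.dumps(meta, indent=2))
```

The `.nii.gz` double extension is handled with a plain string replace rather than `Path.with_suffix`, since `with_suffix` would only swap the final `.gz` component.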
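The label-mapping utilities above (the map_labels script driven by JSON maps such as spider_labels_map.json, including the T12-L1 IVD fix from 207 to 42) boil down to remapping integer values in a segmentation array. A minimal sketch, assuming the segmentation is loaded as a NumPy array; the function name `map_seg_labels` and the in-memory dict standing in for the JSON file are illustrative, not the repository's actual implementation:

```python
import numpy as np


def map_seg_labels(seg: np.ndarray, labels_map: dict) -> np.ndarray:
    """Remap integer label values in a segmentation array.

    labels_map uses string keys (as loaded from a JSON file);
    values absent from the map are left unchanged.
    """
    out = seg.copy()
    for src, dst in labels_map.items():
        out[seg == int(src)] = dst
    return out


# Example: fix the T12-L1 IVD label from 207 to 42.
seg = np.array([[0, 207],
                [42, 207]])
fixed = map_seg_labels(seg, {"207": 42})
```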
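The largest-connected-component script mentioned above (keeping only the largest connected component for each label in a NIfTI segmentation) can be approximated with `scipy.ndimage`. A hedged sketch, not the repository's implementation; the function name `keep_largest_component` is invented here, and the code operates on a plain array rather than a NIfTI file:

```python
import numpy as np
from scipy import ndimage


def keep_largest_component(seg: np.ndarray) -> np.ndarray:
    """For each non-background label, keep only its largest
    connected component and zero out the rest."""
    out = np.zeros_like(seg)
    for label in np.unique(seg):
        if label == 0:  # skip background
            continue
        mask = seg == label
        components, n = ndimage.label(mask)
        if n == 0:
            continue
        # Component sizes (index 0 of bincount is background, skip it).
        sizes = np.bincount(components.ravel())[1:]
        largest = int(np.argmax(sizes)) + 1
        out[components == largest] = label
    return out
```

For a real segmentation the array would come from the NIfTI data (e.g. via nibabel) and be written back with the original header and affine.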