The official code for the paper "Towards Generalist Foundation Model for Radiology by Leveraging Web-scale 2D&3D Medical Data"
In this project, we collect a large-scale medical multi-modal dataset, MedMD, with 16M 2D and 3D images. On it we train a new medical multi-modal generative model, RadFM, which supports both 2D and 3D scans, multi-image inputs, and interleaved visual-language cases.
All Datasets are released! We have updated the links in our dataset table. You can find all our text part data in https://huggingface.co/datasets/chaoyi-wu/RadFM_data_csv.
To decompress the split compressed files, in most cases the following commands work on Linux:

```shell
cat zip.z* > myzip.zip
unzip myzip.zip
```
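The same concatenate-then-extract step can also be scripted, which is handy if `unzip` is unavailable. This is a minimal sketch using only the Python standard library; the part-name pattern `zip.z*` is taken from the commands above, and the helper name is hypothetical:

```python
import glob
import zipfile

def join_and_extract(prefix="zip.z", out_zip="myzip.zip", dest="."):
    """Concatenate split archive parts (zip.z01, zip.z02, ...) into a single
    zip file, then extract it -- the equivalent of
    `cat zip.z* > myzip.zip && unzip myzip.zip`."""
    # Sorting the glob result keeps the parts in z01, z02, ... order.
    parts = sorted(glob.glob(prefix + "*"))
    with open(out_zip, "wb") as out:
        for part in parts:
            with open(part, "rb") as f:
                out.write(f.read())
    with zipfile.ZipFile(out_zip) as zf:
        zf.extractall(dest)
```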
For a quick start, you can check the `Quick_demo` path. We demonstrate a simple diagnosis case there to show how to run inference with our model. Feel free to modify it as you want.
- S1. Download the model checkpoint, or get it from Baidu Yun (no need for decompressing).
- S2. Decompress the original zip file; you will get a `pytorch_model.bin`.
- S3. Put `pytorch_model.bin` under the path `Quick_demo/`.
- S4. Run `python test.py` and you can get a conversation like:
  - Input: Can you identify any visible signs of Cardiomegaly in the image?
  - Output: yes
By the way, never try to run this on CPU; a GPU is all you need :).
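Before launching the demo, you can sanity-check that the checkpoint landed where steps S2–S3 expect it. This is a minimal sketch; only the `Quick_demo/pytorch_model.bin` path comes from the steps above, and the helper name is hypothetical:

```python
from pathlib import Path

def check_demo_ready(demo_dir="Quick_demo"):
    """Verify the decompressed checkpoint sits where the demo expects it
    (Quick_demo/pytorch_model.bin, per steps S2-S3) and return its path."""
    ckpt = Path(demo_dir) / "pytorch_model.bin"
    if not ckpt.is_file():
        raise FileNotFoundError(
            f"{ckpt} not found -- decompress the checkpoint zip and put "
            "pytorch_model.bin under Quick_demo/ first (steps S2-S3)."
        )
    return ckpt
```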
For re-training a model on our dataset, or for large-scale testing of our pre-trained model, you can check `src`. Simply put, `train.py` is for training and `test.py` is for testing.
- Check `data_csv` to see how the different datasets are processed, and download them into `src/Dataset/data_csv`.
- Modify the paths as you desire, then check `src/train.py` to pre-train or `src/test.py` to test.
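Since each dataset is described by a CSV file under `src/Dataset/data_csv`, a quick inventory of what you have downloaded can help before launching training. A minimal sketch, assuming only that the files are standard CSVs with a header row (the helper name and the row-count report are illustrative, not part of the repo):

```python
import glob
import os

def list_dataset_csvs(csv_dir="src/Dataset/data_csv"):
    """Enumerate the per-dataset CSV files in csv_dir and report each
    file's data-row count, so missing or empty downloads stand out."""
    report = {}
    for path in sorted(glob.glob(os.path.join(csv_dir, "*.csv"))):
        with open(path, encoding="utf-8") as f:
            # Subtract 1 for the header row; clamp at 0 for empty files.
            report[os.path.basename(path)] = max(sum(1 for _ in f) - 1, 0)
    return report
```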
Some cases produced by our final model:
MedMD Dataset downloading URL:
We sincerely thank all the contributors who uploaded the relevant data in our dataset online. We appreciate their willingness to make these valuable cases publicly available.
If you have any questions, please feel free to contact [email protected].