This project, conducted in collaboration between Eindhoven University of Technology (TU/e) and Philips, is focused on developing an AI solution for the Image Guided Therapy Challenge on Transarterial Chemoembolization (TACE) procedures. The goal of this project is to enhance the efficiency and accuracy of TACE procedures using advanced AI techniques.
This methodology uses supervised learning to train a model that maps unenhanced Digitally Reconstructed Radiograph (DRR) images to vessel-enhanced DRR images by leveraging the latent representation of a 3D vessel network. This novel approach offers a way to reduce contrast agent usage while maintaining high visibility of the vessel network, even in deforming volumes.
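The conditioning idea described above can be sketched as follows. This is a minimal illustrative sketch, not the project's actual architecture: the module and class names (`VesselEncoder3D`, `ConditionedDRRMapper`), layer sizes, and loss choice are all assumptions made for the example. It shows a 3D encoder compressing the vessel network into a latent vector, which then conditions a 2D image-to-image network that maps an unenhanced DRR to a vessel-enhanced one.

```python
# Hypothetical sketch of latent-conditioned DRR enhancement (names and
# architecture are illustrative, not the project's real implementation).
import torch
import torch.nn as nn

class VesselEncoder3D(nn.Module):
    """Encodes a 3D vessel volume into a latent vector."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # global pooling over D, H, W
        )
        self.fc = nn.Linear(16, latent_dim)

    def forward(self, vol):                   # vol: (B, 1, D, H, W)
        return self.fc(self.conv(vol).flatten(1))   # (B, latent_dim)

class ConditionedDRRMapper(nn.Module):
    """Maps an unenhanced DRR to a vessel-enhanced DRR, conditioned on the latent."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.cond = nn.Linear(latent_dim, 16)  # latent modulates image features
        self.dec = nn.Conv2d(16, 1, 3, padding=1)

    def forward(self, drr, z):                # drr: (B, 1, H, W), z: (B, latent_dim)
        h = self.enc(drr)
        h = h + self.cond(z)[:, :, None, None]   # broadcast latent over spatial dims
        return self.dec(h)

# One supervised training step: L1 loss against the vessel-enhanced target DRR.
encoder, mapper = VesselEncoder3D(), ConditionedDRRMapper()
opt = torch.optim.Adam(list(encoder.parameters()) + list(mapper.parameters()), lr=1e-4)
vol = torch.rand(2, 1, 16, 16, 16)   # toy 3D vessel volume
x = torch.rand(2, 1, 32, 32)         # unenhanced DRR
y = torch.rand(2, 1, 32, 32)         # vessel-enhanced target DRR
pred = mapper(x, encoder(vol))
loss = nn.functional.l1_loss(pred, y)
loss.backward()
opt.step()
```

In this sketch the latent conditioning is a simple additive feature modulation; the real model could instead concatenate the latent, use FiLM-style scaling, or inject it at multiple scales.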
This project is developed using Python and pip for package management. Ensure you have the following installed on your system:
- Python 3.7 or higher
- `src/`: Contains the source code for the project, including the Streamlit app and AI model training.
- `test/`: Contains unit tests for the project.
- `models/`: Directory for storing AI model files.
- `requirements.txt`: Lists all Python dependencies required for the project.
This project uses publicly available datasets provided by The Cancer Imaging Archive (TCIA). To access these datasets, please visit the corresponding TCIA TACE Dataset webpage.
To set up a development environment for this project, follow these steps:

- Clone the repository to your local machine:

```
git clone https://github.com/LucianoDeben/5ARIP10-ITP-T3G3.git
```

- Navigate to the project directory.
- Create a virtual environment:

```
python -m venv env
```

or

```
conda create --name env
```

- Activate the virtual environment:

```
source env/bin/activate
```

(Linux/macOS), or

```
.\env\Scripts\activate
```

(Windows), or

```
conda activate env
```

(Conda).

- Install the required packages:

```
pip install -r requirements.txt
```
After setting up the project, you can run the Streamlit demo application in `src/`:

```
cd src
streamlit run app.py
```
To run the unit tests of the library, use:

```
cd test
python -m unittest discover -p 'test_*.py'
```
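For reference, a test module picked up by the discovery command above would follow the `test_*.py` naming pattern. The sketch below is hypothetical: the `normalize` helper and the file name `test_preprocessing.py` are invented for illustration and are not part of the actual test suite.

```python
# Hypothetical example of a test module that unittest discovery would find
# (it would be saved as e.g. test/test_preprocessing.py).
import unittest

def normalize(image, eps=1e-8):
    """Toy helper: scale pixel intensities to the [0, 1] range."""
    lo, hi = min(image), max(image)
    return [(v - lo) / (hi - lo + eps) for v in image]

class TestNormalize(unittest.TestCase):
    def test_range(self):
        out = normalize([10.0, 20.0, 30.0])
        self.assertAlmostEqual(min(out), 0.0)
        self.assertAlmostEqual(max(out), 1.0, places=6)

# Run the case directly; `python -m unittest discover` would do this for us.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestNormalize))
```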
This project is licensed under the MIT License - see the LICENSE file for details.
We would like to thank the following resources and individuals:
- Our project mentor, Danny Ruijters, for their guidance.
- Philips, for their collaboration and support in this project.
- Gopalakrishnan, Vivek, and Golland, Polina, for their work on fast auto-differentiable digitally reconstructed radiographs. We utilized their DiffDRR package (auto-differentiable DRR rendering and optimization in PyTorch) in our project.
This project was developed by:
- Crapels, Dhani - [email protected]
- Deben, Luciano - [email protected]
- Bierenbroodspot, Sven - [email protected]