This template currently uses Python 3.11, and either `conda`, `docker`, `poetry`, or `micromamba`.
This template project aims to promote library versioning and environment-isolation practices, and to help ML practitioners start a project quickly. Using this template, practitioners get the libraries below:
- PyTorch
- PyTorch Geometric
- Transformers
- PyTorch Lightning
- Wandb
- Pandas
- NumPy
- scikit-learn
- Jupyter Notebook
- Panel
- Pytest
- DVC
These libraries are of course not exhaustive, but it is easy to add any other libraries your project needs.
Using `poetry` is highly recommended. If you are using `conda` or `micromamba`, make sure that you use package hashes so that package selection stays reproducible, e.g. via `conda-lock` or `micromamba`.
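As a minimal `conda-lock` sketch, assuming the `environment.yml` and the `ml-venv` environment name used in the conda instructions below, and a `linux-64` target platform:

```bash
pip install conda-lock
# Solve environment.yml and pin exact package hashes into a lockfile
conda-lock -f environment.yml -p linux-64
# Recreate the environment from the lockfile on any linux-64 machine
conda-lock install -n ml-venv conda-lock.yml
```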
- Install Poetry
```bash
pip install poetry
```
- Create, install, activate environment
```bash
poetry install --with cpu # cpu
poetry install --with cu117 # cuda 11.7
poetry shell
```
- Need to update the environment after adding a library
```bash
poetry add a_lib
poetry lock
```
Note: if you run into a keyring-related problem, run
```bash
export PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring
```
- Need to add a source, e.g. `pyg-cu117`
```bash
poetry source add pyg-cu117 https://data.pyg.org/whl/torch-2.0.0+cu117.html
```
Supposing you want to add `pyg_lib`, `torch_scatter`, ... to a group (`cu117`) in this project via that source:
```bash
poetry add -G cu117 pyg_lib torch_scatter torch_sparse torch_cluster torch_spline_conv --source pyg-cu117
```
- Install Conda following the instructions at conda.io
- Create, install, activate environment
```bash
conda env create -f environment.yml
conda activate ml-venv
```
- Need to update environment
```bash
conda env update --file binder/environment.yml --prune
```
- Export environment
```bash
conda env export --from-history -f binder/environment.yml
```
- This tutorial is for those who have an NVIDIA GPU (hereafter GPU). The CPU case should be similar, but you need to adjust `Dockerfile` and `run_docker.sh`
- Note: Docker will use `micromamba` instead of `miniconda`. Replace `conda` with `micromamba` in your usual commands
- You need to install `docker`
- Install the NVIDIA driver (ignore if you don't have a GPU)
- Then install the NVIDIA Container Toolkit (ignore if you aim to use CPU only); a quick verification sketch follows this list
- Edit `.env`, located at the same level as `run_docker.sh`, to add environment variables to the prospective Docker container (an example sketch follows this list)
- There is a file named `run_docker.sh`; allow it to be executed with `chmod +x run_docker.sh` and run `./run_docker.sh`
- Enjoy JupyterLab at localhost:8888 as usual. The notebook token is shown after `run_docker.sh` runs successfully.
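To verify the NVIDIA Container Toolkit before building the image, a common smoke test is to run `nvidia-smi` inside a CUDA base container; the image tag below is just an example:

```bash
# Should print the same GPU table as nvidia-smi on the host
docker run --rm --gpus all nvidia/cuda:11.7.1-base-ubuntu22.04 nvidia-smi
```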
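As a sketch of what `.env` might contain (every variable name here is hypothetical and depends on what your `Dockerfile` and `run_docker.sh` actually read):

```bash
# Hypothetical examples; adjust to whatever the container expects
JUPYTER_PORT=8888
WANDB_API_KEY=your-wandb-api-key
DATA_DIR=/path/to/data
```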
Try this in IPython:
```python
import torch
from torch_geometric.data import Data

# A tiny graph with 3 nodes and 4 directed edges (2 undirected edges)
edge_index = torch.tensor([[0, 1],
                           [1, 0],
                           [1, 2],
                           [2, 1]], dtype=torch.long)
# One scalar feature per node
x = torch.tensor([[-1], [0], [1]], dtype=torch.float)

data = Data(x=x, edge_index=edge_index.t().contiguous())
```
Use `poetry env use` to select the Python version; more details are at https://stackoverflow.com/questions/60580113/change-python-version-to-3-x
If you aim to use `poetry`, the steps are as follows (a command sketch follows this list):
- edit the file `pyproject.toml`
- select a Python version, then run `poetry shell`
- generate a new `poetry.lock` by running `poetry lock`
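Putting those steps together, a minimal sketch (3.11 is simply the version this template currently targets, and the `cpu` group comes from the install instructions above):

```bash
# Point poetry at the interpreter declared in pyproject.toml
poetry env use 3.11
poetry shell                 # activate the environment
poetry lock                  # regenerate poetry.lock for the new interpreter
poetry install --with cpu    # reinstall dependencies, e.g. the cpu group
```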
If you aim to follow `conda` (a command sketch follows this list):
- edit the file `environment.yml`
- create a new environment that has the version you want
- switch to that environment
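For example, assuming you edited the `python=` pin in `environment.yml` and want the new environment under a separate, made-up name `ml-venv-py310`:

```bash
# Build a fresh environment from the edited environment.yml under a new name
conda env create -f environment.yml -n ml-venv-py310
conda activate ml-venv-py310
```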
The CPU version of `torch_geometric` in `poetry` has a problem. I created a discussion at pyg-team/pytorch_geometric#7788