CoreNet: A library for training deep neural networks

CoreNet is a deep neural network toolkit that allows researchers and engineers to train standard and novel small- and large-scale models for a variety of tasks, including foundation models (e.g., CLIP and LLMs), object classification, object detection, and semantic segmentation.

What's new?

  • April 2024: Version 0.1.0 of the CoreNet library includes
    • OpenELM
    • CatLIP
    • MLX examples

Research efforts at Apple using CoreNet

Publications from Apple that use CoreNet, along with their training and evaluation recipes and links to pre-trained models, can be found in the projects folder. Please refer to it for further details.

Installation

You will need Git LFS to run the tests and Jupyter notebooks in this repository, and to contribute to it, so we recommend that you install and activate it first (instructions below).

On Linux, we recommend Python 3.10+ and PyTorch v2.1.0 or newer; on macOS, the system Python 3.9+ should be sufficient.

Note that the optional dependencies listed below are required if you'd like to make contributions and/or run tests.

For Linux (substitute your package manager for apt if needed):

sudo apt install git-lfs

git clone git@github.com:apple/corenet.git
cd corenet
git lfs install
git lfs pull
# The following venv command is optional, but recommended. Alternatively, you can create and activate a conda environment.
python3 -m venv venv && source venv/bin/activate
python3 -m pip install --editable .
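
As an optional sanity check (our suggestion, not part of the upstream instructions), verify that the editable install is importable:

python3 -c "import corenet"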

To install optional dependencies for audio and video processing:

sudo apt install libsox-dev ffmpeg

For macOS, assuming you use Homebrew:

brew install git-lfs

git clone git@github.com:apple/corenet.git
cd corenet
cd $(pwd -P)  # See the note below.
git lfs install
git lfs pull
# The following venv command is optional, but recommended. Alternatively, you can create and activate a conda environment.
python3 -m venv venv && source venv/bin/activate
python3 -m pip install --editable .

To install optional dependencies for audio and video processing:

brew install sox ffmpeg

Note that on macOS the file system is case-insensitive, and case sensitivity can cause issues with Git. You should access the repository on disk as if the path were case-sensitive, i.e. with the same capitalization that you see when you list the directories with ls. You can switch to such a path with the cd $(pwd -P) command, as illustrated below.
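
As a hypothetical illustration (the capitalization below is invented for the example):

cd ~/CORENET      # on a case-insensitive file system this matches ~/corenet too
pwd -P            # prints the physical path with the on-disk capitalization
cd $(pwd -P)      # switch the shell to the canonical path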

Directory Structure

This section provides quick access and a brief description for important CoreNet directories.

Getting Started

Working with the examples is an easy way to get started with CoreNet.
└── tutorials
    ├── train_a_new_model_on_a_new_dataset_from_scratch.ipynb
    ├── guide_slurm_and_multi_node_training.md
    ├── clip.ipynb
    ├── semantic_segmentation.ipynb
    └── object_detection.ipynb

Training Recipes

CoreNet provides reproducible training recipes, in addition to pretrained model weights and checkpoints, for the publications listed in the projects/ directory (an example launch command follows the project list below).

Publication project directories generally contain the following contents:

  • README.md provides documentation, links to the pretrained weights, and citations.
  • <task_name>/<model_name>.yaml provides the configuration for reproducing the training and evaluation runs.
└── projects
    ├── byteformer
    ├── catlip (*)
    ├── clip
    ├── fastvit
    ├── mobilenet_v1
    ├── mobilenet_v2
    ├── mobilenet_v3
    ├── mobileone
    ├── mobilevit
    ├── mobilevit_v2
    ├── openelm (*)
    ├── range_augment
    ├── resnet
    └── vit

(*) Newly released.
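
For example, a training run is typically launched by pointing the trainer at one of these configuration files. The command below is a hedged sketch: the corenet-train entry point and the --common.config-file flag are assumptions about the installed CLI, and the path is the placeholder pattern from above, so consult each project's README.md for the exact invocation.

corenet-train --common.config-file projects/<project_name>/<task_name>/<model_name>.yaml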

MLX Examples

MLX examples demonstrate how to run CoreNet models efficiently on Apple Silicon. Please find further information in the README.md file within the corresponding example directory.
└── mlx_example
    ├── clip
    └── open_elm

Model Implementations

Models are organized by tasks (e.g. "classification"). You can find all model implementations for each task in the corresponding task folder.

Each model class is decorated with a @MODEL_REGISTRY.register(name="<model_name>", type="<task_name>") decorator. To use a model class in CoreNet training or evaluation, assign models.<task_name>.name = <model_name> in the YAML configuration (see the sketch after the directory tree below).

└── corenet
    └── modeling
        └── models
            ├── audio_classification
            ├── classification
            ├── detection
            ├── language_modeling
            ├── multi_modal_img_text
            └── segmentation
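
The sketch below shows this pattern end to end. It is hypothetical: the import path of MODEL_REGISTRY and the nn.Module base class are assumptions (real CoreNet models derive from the library's own base classes), and my_tiny_net is an invented name; only the decorator form and the models.<task_name>.name mapping come from the description above.

# Hypothetical sketch: import path and base class are assumptions.
import torch.nn as nn

from corenet.modeling.models import MODEL_REGISTRY  # assumed registry location

@MODEL_REGISTRY.register(name="my_tiny_net", type="classification")
class MyTinyNet(nn.Module):
    """Toy classifier used only to illustrate registration."""

    def __init__(self) -> None:
        super().__init__()
        self.classifier = nn.LazyLinear(out_features=10)

    def forward(self, x):
        # Flatten the input and produce 10 class logits.
        return self.classifier(x.flatten(start_dim=1))

# Corresponding YAML excerpt that selects this model for training:
# models:
#   classification:
#     name: "my_tiny_net"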

Datasets

Similar to models, datasets are also categorized by task.
└── corenet
    └── data
        └── datasets
            ├── audio_classification
            ├── classification
            ├── detection
            ├── language_modeling
            ├── multi_modal_img_text
            └── segmentation

Other key directories

This section highlights the remaining key directories, which implement the classes corresponding to the names referenced in the YAML configurations.
└── corenet
    ├── loss_fn
    ├── metrics
    ├── optims
    │   └── scheduler
    ├── train_eval_pipelines
    ├── data
    │   ├── collate_fns
    │   ├── sampler
    │   ├── text_tokenizer
    │   ├── transforms
    │   └── video_reader
    └── modeling
        ├── layers
        ├── modules
        ├── neural_augmentor
        └── text_encoders

Maintainers

This code was developed by Sachin, and is now maintained by Sachin, Maxwell Horton, Mohammad Sekhavat, and Yanzi Jin.

Previous Maintainers

Contributing to CoreNet

We welcome PRs from the community! You can find information about contributing to CoreNet in our contributing document.

Please remember to follow our Code of Conduct.

License

For license details, see LICENSE.

Relationship with CVNets

CoreNet evolved from CVNets to encompass a broader range of applications beyond computer vision. This expansion enabled the training of foundation models, including LLMs.

Citation

If you find our work useful, please cite the following paper:

@inproceedings{mehta2022cvnets, 
     author = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad}, 
     title = {CVNets: High Performance Library for Computer Vision}, 
     year = {2022}, 
     booktitle = {Proceedings of the 30th ACM International Conference on Multimedia}, 
     series = {MM '22} 
}
