diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 3c853b81..a5c62bce 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -19,7 +19,7 @@ jobs:
     strategy:
       max-parallel: 3
       matrix:
-        python-version: [3.6, 3.7, 3.8, 3.9]
+        python-version: [3.7, 3.8, 3.9]
 
     steps:
     - uses: actions/checkout@v2
diff --git a/.isort.cfg b/.isort.cfg
new file mode 100644
index 00000000..aa7e7666
--- /dev/null
+++ b/.isort.cfg
@@ -0,0 +1,3 @@
+[settings]
+profile = black
+multi_line_output = 3
diff --git a/README.md b/README.md
index 52e213d5..a001b406 100644
--- a/README.md
+++ b/README.md
@@ -20,12 +20,24 @@ Norfair is built, used and maintained by [Tryolabs](https://tryolabs.com).
 
 ## Installation
 
-Norfair currently supports Python 3.6+.
+Norfair currently supports Python 3.7+.
+
+For the pure Python version, install as:
 
 ```bash
 pip install norfair
 ```
 
+To install Norfair with the dependencies that enable more features, install as:
+
+```bash
+pip install norfair[video] # Adds several video helper features running on OpenCV
+pip install norfair[metrics] # Supports running MOT metrics evaluation
+pip install norfair[metrics,video] # Everything included
+```
+
+If the needed dependencies are already present in the system, installing the pure Python version of Norfair is enough to enable the extra features. This is particularly useful for embedded devices, where installing compiled dependencies can be difficult.
+
 ## How it works
 
 Norfair works by estimating the future position of each point based on its past positions. It then tries to match these estimated positions with newly detected points provided by the detector. For this matching to occur, Norfair can rely on any distance function specified by the user of the library. Therefore, each object tracker can be made as simple or as complex as needed.
@@ -114,4 +126,4 @@ If you are interested, please [contact us](https://tryolabs.com/#contact).
 
 ## License
 
-Copyright © 2020, [Tryolabs](https://tryolabs.com). Released under the [BSD 3-Clause](LICENSE).
+Copyright © 2021, [Tryolabs](https://tryolabs.com). Released under the [BSD 3-Clause](LICENSE).
diff --git a/demos/alphapose/README.md b/demos/alphapose/README.md
index fc2439a7..f019d3e5 100644
--- a/demos/alphapose/README.md
+++ b/demos/alphapose/README.md
@@ -4,8 +4,9 @@ An example of how to integrate Norfair into the video inference loop of a pre ex
 
 ## Instructions
 
-1. [Follow the instructions](https://github.com/MVIG-SJTU/AlphaPose/tree/pytorch#installation) to install the Pytorch version of AlphaPose.
-2. Apply this diff to this [commit](https://github.com/MVIG-SJTU/AlphaPose/commit/ded84d450faf56227680f0527ff7e24ab7268754) on AlphaPose and use their [video_demo.py](https://github.com/MVIG-SJTU/AlphaPose/blob/ded84d450faf56227680f0527ff7e24ab7268754/video_demo.py) to process your video.
+1. Install Norfair with `pip install norfair[video]`.
+2. [Follow the instructions](https://github.com/MVIG-SJTU/AlphaPose/tree/pytorch#installation) to install the PyTorch version of AlphaPose.
+3. Apply this diff to this [commit](https://github.com/MVIG-SJTU/AlphaPose/commit/ded84d450faf56227680f0527ff7e24ab7268754) on AlphaPose and use their [video_demo.py](https://github.com/MVIG-SJTU/AlphaPose/blob/ded84d450faf56227680f0527ff7e24ab7268754/video_demo.py) to process your video.
 
 ```diff
 diff --git a/dataloader.py b/dataloader.py
diff --git a/demos/detectron2/README.md b/demos/detectron2/README.md
index 026bace4..46a97e43 100644
--- a/demos/detectron2/README.md
+++ b/demos/detectron2/README.md
@@ -6,8 +6,9 @@ Simplest possible example of tracking. Based on [Detectron2](https://github.com/
 
 Assuming Norfair is installed:
 
-1. [Follow the instructions](https://detectron2.readthedocs.io/tutorials/install.html) to install Detectron2.
-2. Run `python detectron2_cars.py`. For the demo, we are using [this traffic footage](https://www.youtube.com/watch?v=aio9g9_xVio).
+1. Install Norfair with `pip install norfair[video]`.
+2. [Follow the instructions](https://detectron2.readthedocs.io/tutorials/install.html) to install Detectron2.
+3. Run `python detectron2_cars.py`. For the demo, we are using [this traffic footage](https://www.youtube.com/watch?v=aio9g9_xVio).
 
 ## Explanation
diff --git a/demos/motmetrics4norfair/README.md b/demos/motmetrics4norfair/README.md
index 814defcc..a2066597 100644
--- a/demos/motmetrics4norfair/README.md
+++ b/demos/motmetrics4norfair/README.md
@@ -1,24 +1,28 @@
 # Compute MOTChallenge metrics
+
 Demo on how to evaluate a Norfair tracker on the [MOTChallenge](https://motchallenge.net).
 
 ## Instructions
 
-1. Download the [MOT17](https://motchallenge.net/data/MOT17/) dataset running
-
-```bash
-curl -O https://motchallenge.net/data/MOT17.zip # To download Detections + Ground Truth + Images (5.5GB)
-unzip MOT17.zip
- ```
-or
-
-```bash
-curl -O https://motchallenge.net/data/MOT17Labels.zip # To download Detections + Ground Truth (9.7 MB)
-unzip MOT17Labels.zip
-```
-
-Given that the ground truth files for the testing set are not publicly available, you will only be able to use motmetrics4norfair with the training set.
-
-2. Display the motmetrics4norfair instructions:
-```bash
- python motmetrics4norfair.py --help
-```
\ No newline at end of file
+1. Install Norfair with `pip install norfair[metrics,video]`.
+2. Download the [MOT17](https://motchallenge.net/data/MOT17/) dataset running:
+
+   ```bash
+   curl -O https://motchallenge.net/data/MOT17.zip # To download Detections + Ground Truth + Images (5.5GB)
+   unzip MOT17.zip
+   ```
+
+   or
+
+   ```bash
+   curl -O https://motchallenge.net/data/MOT17Labels.zip # To download Detections + Ground Truth (9.7 MB)
+   unzip MOT17Labels.zip
+   ```
+
+   Given that the ground truth files for the testing set are not publicly available, you will only be able to use motmetrics4norfair with the training set.
+
+3. Display the instructions:
+
+   ```bash
+   python motmetrics4norfair.py --help
+   ```
diff --git a/demos/motmetrics4norfair/motmetrics4norfair.py b/demos/motmetrics4norfair/motmetrics4norfair.py
index 1c0bcd0b..40dabaac 100644
--- a/demos/motmetrics4norfair/motmetrics4norfair.py
+++ b/demos/motmetrics4norfair/motmetrics4norfair.py
@@ -1,8 +1,9 @@
+import argparse
 import os.path
+
 import numpy as np
-import norfair
-from norfair import Detection, Tracker, metrics, video, drawing
-import argparse
+
+from norfair import Tracker, drawing, metrics, video
 
 frame_skip_period = 1
 detection_threshold = 0.01
diff --git a/demos/openpose/README.md b/demos/openpose/README.md
index 51ff7be7..2439c7a5 100644
--- a/demos/openpose/README.md
+++ b/demos/openpose/README.md
@@ -4,8 +4,9 @@ Demo for extrapolating detections through skipped frames. Based on [OpenPose](ht
 
 ## Instructions
 
-1. Install [OpenPose version 1.4](https://github.com/CMU-Perceptual-Computing-Lab/openpose/releases/tag/v1.4.0).
-2. Run `python openpose_extrapolation.py`.
+1. Install Norfair with `pip install norfair[video]`.
+2. Install [OpenPose version 1.4](https://github.com/CMU-Perceptual-Computing-Lab/openpose/releases/tag/v1.4.0).
+3. Run `python openpose_extrapolation.py`.
 
 ## Explanation
diff --git a/demos/yolov4/README.md b/demos/yolov4/README.md
index cbcd7e61..5c1969f4 100644
--- a/demos/yolov4/README.md
+++ b/demos/yolov4/README.md
@@ -4,12 +4,12 @@ Simplest possible example of tracking. Based on [pytorch YOLOv4](https://github.
 
 ## Instructions
 
-1. Clone [pytorch YOLOv4](https://github.com/Tianxiaomo/pytorch-YOLOv4/tree/master) and download the [weights](https://drive.google.com/open?id=1wv_LiFeCRYwtpkqREPeI13-gPELBDwuJ) published on the repo into your local clone of the repo.
-2. Copy `yolov4demo.py` into your local clone of the repo and run `python yolov4demo.py
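As context for the `video` extra that the demo READMEs above now install, below is a minimal sketch of the tracking loop Norfair's own README describes, where the `Video` helper is what the extra's OpenCV dependency enables. The `dummy_detector` and the `"video.mp4"` path are hypothetical placeholders standing in for a real model and input file; the `Tracker`, `Detection`, and `draw_tracked_objects` usage follows Norfair's documented example.

```python
import numpy as np
from norfair import Detection, Tracker, Video, draw_tracked_objects


def euclidean_distance(detection, tracked_object):
    # Distance between a new detection and a tracked object's estimated position.
    return np.linalg.norm(detection.points - tracked_object.estimate)


def dummy_detector(frame):
    # Hypothetical stand-in for a real model: pretends there is one object,
    # represented by a single point, at the center of every frame.
    h, w = frame.shape[:2]
    return [np.array([[w / 2, h / 2]])]


video = Video(input_path="video.mp4")  # Video helper requires norfair[video] (OpenCV)
tracker = Tracker(distance_function=euclidean_distance, distance_threshold=20)

for frame in video:
    detections = [Detection(points) for points in dummy_detector(frame)]
    tracked_objects = tracker.update(detections=detections)
    draw_tracked_objects(frame, tracked_objects)  # Overlay tracked ids on the frame
    video.write(frame)  # Write the annotated frame to the output file
```

With only the pure Python install, the same `Tracker` loop works unchanged; it is the `Video` reading, writing, and drawing helpers that need the OpenCV dependency pulled in by `pip install norfair[video]`.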