Releases: blakeblackshear/frigate
Beta Release 0.4.0
This release touched almost every line of code. I am certain there are changes I have missed. I see much better bounding boxes and higher confidence scores for detections on my cameras.
Breaking Changes:
- Object configuration has changed again. Please reference the updated config example.
- Region-specific settings are no longer available because objects can now be detected dynamically outside the bounds of regions
Changes:
- By default, frigate only looks for the following object types: person, car, and truck. You must explicitly list any other object types you want tracked.
- Detected objects are now assigned an id and tracked across frames
- Regions are dynamically created for tracked objects
- If an object is against the edge of a region, a new region is dynamically created to ensure the entire object is included in the detection
- Dockerfile has been overhauled. Building should take less time and result in a smaller image size. There is still room for more improvement.
- Updated to the latest EdgeTPU libraries from Google
- Added a `/debug/stats` endpoint where you can see FPS for your cameras/Coral and various queue lengths
- Watchdog timeout for ffmpeg is now configurable
- Timestamp on snapshots is now configurable
- Support for UDP camera feeds
Docker image is available with `docker pull blakeblackshear/frigate:0.4.0-beta`
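As an illustration of the reworked object configuration, a minimal sketch might look like the following. The key names here are assumptions based on this release's description, not the authoritative schema, so reference the updated config example in the repo:

```yaml
# Hypothetical sketch of the 0.4.0-style object configuration.
objects:
  # person, car, and truck are tracked by default; any other
  # object type must be listed explicitly.
  track:
    - person
    - car
    - truck
    - dog
  filters:
    person:
      min_area: 5000    # ignore detections smaller than this (pixels)
      max_area: 100000  # ignore detections larger than this
      threshold: 0.5    # minimum confidence score
```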
0.3.0 Release
Breaking Changes:
- Configuration file changes to support all objects in the model. See updated example.
- Images are now served up at `/<camera_name>/<object_name>/best.jpg`
- MQTT messages are published to `<camera_name>/<object_name>` and `<camera_name>/<object_name>/snapshot`
Changes:
- Frigate now reports on every object type in the model. You can configure thresholds and min/max areas for each object type at a global, camera, or region level.
- The preview MJPEG feed is limited to 1 FPS and now caches the JPEG image to reduce bandwidth and CPU usage. Using the mpdecimate flag with ffmpeg has reduced the effective FPS of my cameras quite a bit, so the feed was often re-encoding the same image.
- Different object types now have different color bounding boxes. (inspiration from @pizzato)
Image is available with `docker pull blakeblackshear/frigate:0.3.0`
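The global/camera/region layering of per-object settings described above could look roughly like this. Key names and nesting are assumptions for illustration, so consult the updated example config for the real schema:

```yaml
# Hypothetical sketch: per-object settings at global, camera, and
# region level, with more specific levels overriding broader ones.
objects:
  person:
    threshold: 0.5        # global default
cameras:
  back_yard:
    objects:
      person:
        threshold: 0.7    # stricter for this camera
    regions:
      - x_offset: 0
        y_offset: 300
        size: 300
        objects:
          car:
            min_area: 2000  # applies only within this region
```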
0.2.2 Release
Breaking Changes:
- The configuration file changed significantly. Make sure you update using the example before upgrading.
Changes:
- Added `max_person_area` to filter out detected persons that are too large to be real
- Print the frame time on the image so you can see a timestamp on the `last_person` image
- Allow the mqtt client_id to be set so you can run multiple instances of frigate
- Added a basic health check endpoint
- Added `-vf mpdecimate` to the default output args
- Revamped ffmpeg args configuration with global defaults that can be overridden per camera
- Updated docs
Image available on Docker with `docker pull blakeblackshear/frigate:0.2.2`
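A rough sketch of how the options above might sit together in a config file. The key placement here is an assumption for illustration; use the example config as the source of truth:

```yaml
# Hypothetical sketch combining the 0.2.2 options.
max_person_area: 100000      # ignore "persons" larger than this
mqtt:
  client_id: frigate_front   # set uniquely per frigate instance
ffmpeg:
  output_args:               # global defaults...
    - -vf
    - mpdecimate
cameras:
  front_door:
    ffmpeg:
      output_args: []        # ...overridden for this camera
```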
0.2.2 Beta Release
Breaking Changes:
- The configuration file changed significantly. Make sure you update using the example before upgrading.
Changes:
- Added `max_person_area` to filter out detected persons that are too large to be real
- Print the frame time on the image so you can see a timestamp on the `last_person` image
- Allow the mqtt client_id to be set so you can run multiple instances of frigate
- Added a basic health check endpoint
- Added `-vf mpdecimate` to the default output args
- Revamped ffmpeg args configuration with global defaults that can be overridden per camera
- Updated docs
Image available on Docker with `docker pull blakeblackshear/frigate:0.2.2-beta`
0.2.1 Release
- Push best person images over MQTT for more realtime updates in homeassistant
- Attempt to gracefully terminate the ffmpeg process before killing
- Tweak the default input params to discard corrupt frames and ignore timestamps in the video feed
- Increase the watchdog timeout to 10 seconds
- Allow `ffmpeg_input_args`, `ffmpeg_output_args`, and `ffmpeg_log_level` to be passed in the config for customization
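For example, the three new keys could be set like this. The values shown are illustrative, not recommended defaults:

```yaml
# Hypothetical sketch of the customizable ffmpeg settings.
ffmpeg_log_level: panic
ffmpeg_input_args:
  - -avoid_negative_ts
  - make_zero
  - -fflags
  - nobuffer+genpts+discardcorrupt   # drop corrupt frames
ffmpeg_output_args:
  - -f
  - rawvideo
  - -pix_fmt
  - rgb24
```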
0.2.0 Release
- Video decoding is now done in an ffmpeg subprocess, which enables hardware-accelerated decoding of video streams. For me, this reduced CPU usage for decoding by 60-70%. (Fixes #21)
- New `take_frame` option to reduce framerates with frigate when the camera doesn't support it (Fixes #40)
- Tweaked the position of the labels to avoid overlapping with detected objects (Fixes #39)
- Added the area of the object to the label to help determine `min_person_area` values (thanks @aav7fl)
- Greatly reduced Docker image size, from ~2GB to 450MB
- Added support for custom Odroid-XU4 build (unfortunately, I wasn't able to get the Coral performance to be good enough for me with this board)
- Latest Coral drivers from Google
- Added a benchmarking script to test inference times
- Added some comments to better document config options (Fixes #46)
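The `take_frame` option effectively keeps one frame out of every N. A minimal sketch of that decimation logic, as I understand it from this release's description (illustrative only, not Frigate's actual code):

```python
def should_process(frame_index: int, take_frame: int) -> bool:
    """Keep one frame out of every `take_frame` frames.

    take_frame=1 processes every frame; take_frame=2 halves the
    effective framerate, and so on.
    """
    return frame_index % take_frame == 0

# With take_frame=2, a 10 FPS camera is processed at an effective 5 FPS.
kept = [i for i in range(10) if should_process(i, 2)]
print(kept)  # [0, 2, 4, 6, 8]
```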
v0.2.0 Beta Release
I marked this a beta because it may break for other people's setups. The new image is available with `docker pull blakeblackshear/frigate:0.2.0-beta`.
- Video decoding is now done in an ffmpeg subprocess, which enables hardware-accelerated decoding of video streams. For me, this reduced CPU usage for decoding by 60-70%. (Fixes #21)
- New `take_frame` option to reduce framerates with frigate when the camera doesn't support it (Fixes #40)
- Tweaked the position of the labels to avoid overlapping with detected objects (Fixes #39)
- Added the area of the object to the label to help determine `min_person_area` values (thanks @aav7fl)
- Greatly reduced Docker image size, from ~2GB to 450MB
- Added support for custom Odroid-XU4 build (unfortunately, I wasn't able to get the Coral performance to be good enough for me with this board)
- Latest Coral drivers from Google
- Added a benchmarking script to test inference times
- Added some comments to better document config options (Fixes #46)
Watchdog and thresholds
Features:
- Implement configurable thresholds per region
- Add a watchdog to detect silent failures when reading the RTSP stream
Fixes:
- Fix missing numpy import for default mask files
0.1.1 - Add masking for person detection
Adds masking support for person detection. The mask file should be a BMP with the same resolution as your camera stream. If the bottom middle of a detection's bounding box falls on a black pixel of the mask, the detection is ignored.
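The masking rule above can be sketched as follows. This is an illustrative reimplementation, not Frigate's actual code; the function name and box format are hypothetical:

```python
import numpy as np

def is_masked(mask: np.ndarray, box) -> bool:
    """Return True if the detection should be ignored.

    mask: 2D array the same size as the camera frame (0 = black = masked).
    box:  (x_min, y_min, x_max, y_max) in pixel coordinates.
    """
    x_min, y_min, x_max, y_max = box
    # Bottom middle of the bounding box.
    x = (x_min + x_max) // 2
    y = y_max
    return bool(mask[y, x] == 0)

# Example: a 10x10 frame where the left half is masked (black).
mask = np.ones((10, 10), dtype=np.uint8) * 255
mask[:, :5] = 0

print(is_masked(mask, (0, 0, 4, 9)))  # bottom middle at (2, 9) is black
print(is_masked(mask, (6, 0, 9, 9)))  # bottom middle at (7, 9) is white
```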
0.1.0 - Google Coral, Multiple Cameras, etc
This release adds the following:
- Google Coral support (required for now)
- Motion detection was removed because the overhead of always looking for objects is much lower than looking for motion (planning to add motion detection back in the future)
- Send ON/OFF instead of raw person score to reduce MQTT message frequency
- Switch to config file instead of environment variables
- Add support for multiple cameras (Google Coral can only be used by one process)
- Detection models bundled in container