This repository has been archived by the owner on Jul 10, 2023. It is now read-only.

v0.7.2-beta

Pre-release
@whbruce whbruce released this 10 Jun 01:25
c2ece52

Intel® Deep Learning Streamer Pipeline Server Release v0.7.2

Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Server, formerly known as Video Analytics Serving, is a Python package and microservice for deploying hardware-optimized media analytics pipelines. It supports pipelines defined in the GStreamer* or FFmpeg* media frameworks and provides APIs to discover, start, stop, customize, and monitor pipeline execution. Intel® DL Streamer Pipeline Server is based on Intel® DL Streamer Pipeline Framework and FFmpeg Video Analytics.

What's Changed

| Title | Description |
| --- | --- |
| Python package and module names | Package name changes: `vaserving` → `server`, `vaclient` → `client`. Module name changes: `vaserving.py` → `pipeline_server.py`, `vaclient.py` → `pipeline_client.py`. Applications that use Pipeline Server Python modules directly must be updated to use the new names. |
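Applications embedding the server can absorb the rename with a guarded import. A minimal sketch, assuming the class is exposed as `PipelineServer` in the new module and `VAServing` in the old one (both class names are assumptions and should be verified against the respective release APIs):

```python
# Compatibility shim for the v0.7.2 package/module rename.
# The class names below are assumptions -- verify them against the release API.
try:
    # v0.7.2 and later: package "server", module "pipeline_server.py"
    from server.pipeline_server import PipelineServer
except ImportError:
    try:
        # v0.7.1 and earlier: package "vaserving", module "vaserving.py"
        from vaserving.vaserving import VAServing as PipelineServer
    except ImportError:
        PipelineServer = None  # Pipeline Server is not installed
```

Application code can then refer to `PipelineServer` regardless of which release is installed.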

What's New

| Title | Description |
| --- | --- |
| OpenVINO 2022.1 support | Now uses `intel/dlstreamer:2022.1.0-ubuntu20` as the base image. |
| Deployment time pipeline configuration | A pipeline parameter's default value can be set from an environment variable using the syntax `"default": "{env[DETECTION_DEVICE]}"`. This is particularly useful with Kubernetes or Docker Compose deployments. |
| GPU support for Kubernetes | Using deployment time pipeline configuration, the Kubernetes sample now automatically runs inference on GPU when an accelerator is available. |
| WebRTC support | Added WebRTC as a frame destination. |
| Extended inference device support | Added support for HETERO, MULTI, and AUTO devices. |
| More reference models and pipelines | Added person- and vehicle-specific pipelines and models for improved accuracy. |
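To make the deployment-time configuration concrete, a pipeline definition's parameter block might read its default inference device from the environment. A hedged sketch, in the style of the project's pipeline JSON (the `detection-device` parameter name and element mapping are illustrative, not taken from a shipped pipeline):

```json
{
  "parameters": {
    "type": "object",
    "properties": {
      "detection-device": {
        "element": { "name": "detection", "property": "device" },
        "type": "string",
        "default": "{env[DETECTION_DEVICE]}"
      }
    }
  }
}
```

With `DETECTION_DEVICE=GPU` set in the container environment (for example via a Kubernetes manifest or a Docker Compose file), the pipeline defaults to GPU inference without editing the pipeline JSON itself.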

What's Fixed

| Description | Issue |
| --- | --- |
| Some public models from Open Model Zoo do not produce inference results | #89 |
| When interrupting a run of multiple streams, the pipeline client prints the FPS of the last stream rather than the average | #106 |
| GPU inference fails on 12th Gen Intel® Core™ systems | #108 |
| Memory leak on pipeline stop | #112 |

Known Issues

Known issues can be found as GitHub issues. If you encounter defects in functionality, please submit an issue.

| Description | Issue |
| --- | --- |
| Docker build fails if the directory name contains spaces | #38 |
| Models can be picked up from a previous build | #71 |
| Difficult to get normalized coordinates for spatial analytics parameters | #87 |
| Pipeline failure on some multi-GPU systems | #98 |
| Intermittent 30 s delay in pipeline start during multi-stream sessions | #104 |
| Kubernetes deployment fails if `no_proxy` contains `*` | #105 |
| Client is incompatible with older versions of the service | #107 |
| Server crashes after several minutes when using an RTSP camera and GPU inference | #111 |

Tested Base Images

Supported base images are listed in the Building Intel® DL Streamer Pipeline Server document.

* Other names and brands may be claimed as the property of others.