
How to add jpegenc (to generate snapshot) and splitmuxsink (to generate video) together in a single pipeline? #120

dhaval-zala-aivid opened this issue Nov 14, 2022 · 3 comments


@dhaval-zala-aivid

@nnshah1

How to add jpegenc (to generate snapshot) and splitmuxsink (to generate video) together in a single pipeline?
I am able to add jpegenc or splitmuxsink separately in the pipeline and each runs properly. But if I add both together in a single pipeline, it generates the snapshots only and the video is 0 bytes in size.
I have tried multiple combinations to work around this, but have not been able to solve it.

Here is the pipeline:

{
	"type": "GStreamer",
	"template": ["urisourcebin name=source ! decodebin ! video/x-raw ",
					" ! videoconvert name=videoconvert",
					" ! tee name=t ! queue",
					" ! gvadetect model={models[object_detection][coco_yolov5_tiny_608to416_FP32][network]} name=detection",
					" ! gvametaconvert name=metaconvert",
					" ! jpegenc ! gvapython name=gvapython_n module=/home/pipeline-server/server/pplcount.py class=ImageCapture",
					" ! appsink name=appsink",
					" t. ! videoconvert ! x264enc ! splitmuxsink muxer=avimux location=\"/tmp/temp-%d.mp4\" max-size-time=30000000000"
					],
	"description": "Object detection pipeline extended to add frame count to meta-data and save frames to disk",
	"parameters": {
		"type": "object",
		"properties": {
			"detection-device": {
				"element": "detection",
				"type": "string"
			},
			"inference-interval": {
				"element": "detection",
				"type": "integer"
			},
			"add-empty-results": {
				"element": "metaconvert",
				"type": "boolean",
				"default": true
			},
			"max-files": {
				"element": "filesink",
				"type": "integer",
				"default": 1000
			},
			"recording_prefix": {
				"type": "string",
				"element": {
					"name": "splitmuxsink",
					"property": "location"
				},
				"default": "/home/pipeline-server"
			}
		}
	}
}
@mikhail-nikolskiy

You can probably use jpegenc and multifilesink to save .jpg images, e.g.

! jpegenc ! multifilesink location=img_%06d.jpg
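In the tee-based template from the issue this would just be another branch off the tee, e.g. (a sketch only; the queue and the location pattern are illustrative):

 t. ! queue ! jpegenc ! multifilesink location=/tmp/img_%06d.jpg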

@dhaval-zala-aivid

@mikhail-nikolskiy

I want to save a .jpg based on post-processing. I don't want to save every frame; that's why I used the ImageCapture class in the pipeline, to save only particular snapshots. What you have suggested captures an image for every frame, which is not required in my case.
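For context, a minimal sketch of the kind of conditional-capture class I mean (this is not the actual ImageCapture from pplcount.py, only an illustration; it assumes the buffer arrives already JPEG-encoded from the upstream jpegenc and that the detection metadata is still attached at that point in the pipeline):

# Hypothetical sketch, not the actual ImageCapture from pplcount.py.
# Assumes the upstream jpegenc has already encoded the buffer and that the
# gvadetect ROI metadata is still attached to it at this point in the pipeline.
from gstgva.util import gst_buffer_data
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

class ConditionalCapture:
    def __init__(self):
        self.count = 0

    def process_frame(self, frame):
        # Skip frames without detections; only "interesting" frames are written.
        if not list(frame.regions()):
            return True
        self.count += 1
        buffer = frame._VideoFrame__buffer
        with gst_buffer_data(buffer, Gst.MapFlags.READ) as data:
            with open("/tmp/snapshot-{}.jpeg".format(self.count), "wb") as output:
                output.write(data)
        return True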

@nnshah1

nnshah1 commented Dec 1, 2022

There seems to be an interaction between the x264enc and the queue that stalls the pipeline (in my local experiments). Using vaapi elements I was able to get the output correctly. You may try putting a queue after the second t branch (see the sketch below). The other thing to note is that if you name the splitmuxsink element splitmuxsink, pipeline server will set the location automatically to include the timestamp for the recording.
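For the x264enc variant, that amounts to changing the recording branch to something like this (a sketch based on your original template, untested here given the stall I observed; the explicit location is dropped because the named splitmuxsink gets it set automatically):

 t. ! queue ! videoconvert ! x264enc ! splitmuxsink name=splitmuxsink muxer=avimux max-size-time=30000000000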

Attaching the pipeline.json file and save_jpeg.py file for reference.

{
	"type": "GStreamer",
    "template": ["{auto_source} ! decodebin",
                  		" ! tee name=t ! queue",
				" ! gvadetect model={models[object_detection][person_vehicle_bike][network]} name=detection",
				" ! gvametaconvert name=metaconvert ! jpegenc ! gvapython module=/home/pipeline-server/save_jpeg.py",
		                " ! appsink name=appsink",
		                " t. ! queue ! vaapipostproc ! vaapih264enc ! splitmuxsink name=splitmuxsink muxer=avimux max-size-time=30000" 
			],
	"description": "Person Vehicle Bike Detection based on person-vehicle-bike-detection-crossroad-0078",
	"parameters": {
		"type": "object",
		"properties": {
			"detection-properties": {
				"element": {
					"name": "detection",
					"format": "element-properties"
				}
			},
			"detection-device": {
				"element": {
					"name": "detection",
					"property": "device"
				},
				"type": "string",
				"default": "{env[DETECTION_DEVICE]}"
			},
			"detection-model-instance-id": {
				"element": {
					"name": "detection",
					"property": "model-instance-id"
				},
				"type": "string"
			},
			"inference-interval": {
				"element": "detection",
				"type": "integer"
			},
			"threshold": {
				"element": "detection",
				"type": "number"
			},
   		    "recording_prefix": {
			"type":"string",
			"default": "/home/pipeline-server"
		    }
		}
	}
}
from gstgva.util import gst_buffer_data
import gi
gi.require_version("Gst", "1.0")
# pylint: disable=wrong-import-position
from gi.repository import Gst

count = 0

def process_frame(frame):
    """Write the JPEG-encoded buffer of every frame to disk, keeping the last 10."""
    global count
    count += 1
    # VideoFrame does not expose the underlying Gst.Buffer publicly,
    # so access it through the name-mangled private attribute.
    buffer = frame._VideoFrame__buffer
    with gst_buffer_data(buffer, Gst.MapFlags.READ) as data:
        # Rotate over 10 filenames so only the most recent frames are kept.
        filename = "frame-{}.jpeg".format(count % 10)
        with open(filename, "wb", 0) as output:  # unbuffered binary write
            output.write(data)
    return True
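For completeness, a pipeline defined this way is started through the pipeline server REST API; a minimal sketch, assuming the pipeline is registered as object_detection/person_vehicle_bike and the server listens on port 8080 (the name, version, port, and media URI below are placeholders for illustration):

# Minimal sketch of launching the pipeline via the pipeline server REST API.
# The pipeline name/version, port, and media URI are assumptions; adjust to your setup.
import requests

request_body = {
    "source": {"uri": "file:///home/pipeline-server/sample.mp4", "type": "uri"},
    "destination": {
        "metadata": {"type": "file", "path": "/tmp/results.jsonl", "format": "json-lines"}
    },
    "parameters": {"recording_prefix": "/tmp"},
}

response = requests.post(
    "http://localhost:8080/pipelines/object_detection/person_vehicle_bike",
    json=request_body,
)
print(response.status_code, response.text)  # response body is the new pipeline instance id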
