Python documentation #90
The Python bindings should aim to closely match the C++ implementation. You're right, I don't think we have explicit Python documentation for this. Is there a way you would anticipate that this should be provided separately?
Hello! Thanks for the swift reply!

What I wish for is something like an example of actually doing something with the frames. The examples, while great and well documented, only print metadata about the frames, not how to actually output them to disk or perhaps pipe them to an external process such as ffmpeg.

The reason I liked the original picamera so much is its extensive documentation for mmalobj, which gave me Python access to the lower-level bits of MMAL. See this project, https://github.com/autosbc/rpidashcam/blob/master/dashcam.py, which makes use of mmalobj directly. It allowed me to get good performance out of my application without the overhead of picamera.

I would like to be able to rewrite my application with libcamera (in Python) without actually requiring picamera2. See the clocksplitter example at https://picamera.readthedocs.io/en/release-1.13/api_mmalobj.html, which is what I based my code on. An example that does the same would be enough, at least for me. I'd even try to venture into C++ if there were examples in C++ on how to do the same.
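As an aside on the ffmpeg part of this request: below is a minimal sketch of piping raw frames to an ffmpeg subprocess from Python. The width, height, frame rate, and the way complete frame bytes are obtained are placeholders for illustration only, not something defined by libcamera or this thread.

```python
import subprocess

# Placeholder parameters: adjust to whatever your stream is actually
# configured for (see the configuration discussion later in this thread).
WIDTH, HEIGHT, FPS = 800, 600, 30

# ffmpeg reads raw planar YUV420 frames from stdin and encodes them to H.264.
ffmpeg = subprocess.Popen(
    [
        'ffmpeg',
        '-f', 'rawvideo',
        '-pix_fmt', 'yuv420p',
        '-s', f'{WIDTH}x{HEIGHT}',
        '-r', str(FPS),
        '-i', '-',          # read frames from stdin
        '-c:v', 'libx264',
        'out.mp4',
    ],
    stdin=subprocess.PIPE,
)

def write_frame(frame_bytes: bytes):
    """frame_bytes: one complete planar YUV420 frame (Y, then U, then V)."""
    ffmpeg.stdin.write(frame_bytes)

# When capture is finished:
# ffmpeg.stdin.close()
# ffmpeg.wait()
```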
I've been trying to use libcamera for a few days and I realized I must mmap stuff. I've taken simple-cam and modified process_request:

```python
import mmap
from PIL import Image

def process_request(request):
    global camera
    global dashcam_title_image

    print()

    # When a request has completed, it is populated with a metadata control
    # list that allows an application to determine various properties of
    # the completed request. This can include the timestamp of the sensor
    # capture, or its gain and exposure values, or properties from the IPA
    # such as the state of the 3A algorithms.
    #
    # To examine each request, print all the metadata for inspection. A custom
    # application can parse each of these items and process them according to
    # its needs.
    #requestMetadata = request.metadata
    #for id, value in requestMetadata.items():
    #    print(f'\t{id.name} = {value}')

    # Each buffer has its own FrameMetadata to describe its state, or the
    # usage of each buffer. While in our simple capture we only provide one
    # buffer per request, a request can have a buffer for each stream that
    # is established when configuring the camera.
    #
    # This allows a viewfinder and a still image to be processed at the
    # same time, or to allow obtaining the RAW capture buffer from the
    # sensor along with the image as processed by the ISP.
    buf = None
    buffers = request.buffers
    length = 0
    for _, buffer in buffers.items():
        metadata = buffer.metadata
        #print(dir(buffer.planes))
        plane = buffer.planes[0]
        buf = mmap.mmap(plane.fd, plane.length, mmap.MAP_SHARED,
                        mmap.PROT_WRITE | mmap.PROT_READ)
        img = Image.frombuffer(
            'L',
            (800, 600),
            buf,
            'raw',
            'L',
            0,
            1
        )
        #print(dir(img))
        img.paste(
            dashcam_title_image,
            (0, 0)
        )
        img.save("test.png")
        exit()

        # Print some information about the buffer which has completed.
        #print(f'\tseq: {metadata.sequence:06} timestamp: {metadata.timestamp} bytesused: ' +
        #      '/'.join([str(p.bytes_used) for p in metadata.planes]))

        # Image data can be accessed here, but the FrameBuffer
        # must be mapped by the application.

    # Re-queue the Request to the camera.
    request.reuse()
    camera.queue_request(request)
```

But the result is kind of weird: the overlay gets added correctly, but the rest of the image is seemingly garbage. The camera is connected correctly, because if I use libcamera-vid or something I get proper video.
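Before interpreting the mapped bytes, it can help to confirm how many planes the completed buffer actually carries and how many bytes each one used. A minimal sketch, using only fields already referenced in the snippet above (buffer.planes, buffer.metadata.planes), to run inside process_request:

```python
# Sanity check: how many planes does the buffer have, and how many bytes
# were actually used in each? (Same fields as the commented-out print above.)
for _, buffer in request.buffers.items():
    print(f'planes in buffer: {len(buffer.planes)}')
    for i, p in enumerate(buffer.metadata.planes):
        print(f'  plane {i}: bytes_used={p.bytes_used}')
```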
It's hard to see what's going on from only a small snippet. If you're getting raw data, then does that mean you are asking for the RAW Bayer data from the camera? Have you configured a pixel format correctly? Or checked that the pixel format that is returned is what you expect?
I have taken simple-cam and only modified process_request; what you can see in the snippet above is process_request in its entirety. How can I know what pixel formats are available, as far as libcamera in Python goes?
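A minimal sketch of one way to inspect this, assuming the Python bindings mirror the C++ StreamConfiguration::formats() API; the attribute names used for the format enumeration (formats, pixel_formats, sizes) are assumptions to verify against the installed bindings:

```python
import libcamera

cm = libcamera.CameraManager.singleton()
camera = cm.cameras[0]
camera.acquire()

config = camera.generate_configuration([libcamera.StreamRole.Viewfinder])
stream_config = config.at(0)

# Default choice made by libcamera for this role.
print('default:', stream_config.pixel_format, stream_config.size)

# Assumption: the bindings expose StreamConfiguration::formats() as
# `formats`, with `pixel_formats` and `sizes()` as in the C++ API.
for pf in stream_config.formats.pixel_formats:
    print(pf, stream_config.formats.sizes(pf))

camera.release()
```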
Okay, I managed to set the pixel format to YUV420 (and the size!) and was able to actually get a picture. Now I just need to figure out how to get the picture in color. In my old code based on MMAL (mmalobj), I just specified YUV420 as the format and then I would get a picture in color, which I pasted an image over and then sent back to the output ports for encoding. Is something similar required here?
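For reference, a minimal sketch of that configuration step, following the pattern in libcamera's Python examples; the formats constant, the Size type, and the validate() call are assumptions to verify against the bindings you have installed:

```python
import libcamera

cm = libcamera.CameraManager.singleton()
camera = cm.cameras[0]
camera.acquire()

config = camera.generate_configuration([libcamera.StreamRole.Viewfinder])
stream_config = config.at(0)

# Ask for planar YUV420 at 800x600 before configuring the camera.
# Assumption: the generated `formats` module and the Size type are exposed
# like this by the installed bindings.
stream_config.pixel_format = libcamera.formats.YUV420
stream_config.size = libcamera.Size(800, 600)

# validate() may adjust the configuration to the closest supported
# combination, so print what was actually selected before configuring.
config.validate()
print('using:', stream_config.pixel_format, stream_config.size)

camera.configure(config)
```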
A greyscale image suggests you're perhaps only getting a single plane from the YUV420?
It is, but it seems to be too big for
Then that probably means plane[1] might not be real. Does the Python interface tell you how many planes there are? I wonder if https://git.libcamera.org/libcamera/libcamera.git/tree/src/py/cam might be a way for you to find better examples of what you need, as that's more of a fully implemented Python equivalent of our C++ cam implementation.
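On the "picture in color" question: once all three YUV420 planes have been mapped (for example with the MappedFrameBuffer helper discussed below), a colour image can be assembled by hand. A minimal sketch using numpy and PIL, assuming tightly packed planes with no row padding (stride == width) and the usual BT.601 conversion; treat it as an illustration, not the libcamera way of doing it:

```python
import numpy as np
from PIL import Image

def yuv420_planes_to_rgb(y_bytes, u_bytes, v_bytes, width, height):
    """Convert one planar YUV420 frame to an RGB PIL image (BT.601 maths).

    Assumes tightly packed planes: len(y_bytes) == width * height and each
    chroma plane is a quarter of that. Real buffers may carry padding.
    """
    y = np.frombuffer(y_bytes, np.uint8).reshape(height, width).astype(np.float32)
    u = np.frombuffer(u_bytes, np.uint8).reshape(height // 2, width // 2)
    v = np.frombuffer(v_bytes, np.uint8).reshape(height // 2, width // 2)

    # Upsample chroma to full resolution and centre it around zero.
    u = u.repeat(2, axis=0).repeat(2, axis=1).astype(np.float32) - 128.0
    v = v.repeat(2, axis=0).repeat(2, axis=1).astype(np.float32) - 128.0

    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u

    rgb = np.clip(np.dstack([r, g, b]), 0, 255).astype(np.uint8)
    return Image.fromarray(rgb, 'RGB')
```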
Also, thanks for the link. I'll try to play around with it! 🙇♂️
Seems that with the examples in the other repo you provided, I face the same issue as with https://github.com/kbingham/libcamera/blob/master/src/py/examples/simple-continuous-capture.py: there is no MappedFrameBuffer.
It's here:
Maybe there's a discrepancy between what is installed with Raspberry Pi OS and libcamera from VCS? Because I can't import it.
Just copy that MappedFrameBuffer.py into your project. I don't think it's exposed as part of the public API; it's just a utility.
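A minimal sketch of how the copied helper tends to be used in the libcamera Python examples; the mmap()/munmap() calls and the planes property are taken from those examples and may differ between versions, so check the file you copied:

```python
# Assumption: MappedFrameBuffer.py was copied next to this script and exposes
# mmap()/munmap() and a `planes` property (a sequence of per-plane
# memoryviews), as it does in the libcamera Python examples.
from MappedFrameBuffer import MappedFrameBuffer

def process_request(request):
    for stream, buffer in request.buffers.items():
        mfb = MappedFrameBuffer(buffer)
        mfb.mmap()
        try:
            # For YUV420 this should yield the Y, U and V planes, which can
            # be fed to a converter such as yuv420_planes_to_rgb() above.
            for i, plane in enumerate(mfb.planes):
                print(f'plane {i}: {len(plane)} bytes')
        finally:
            mfb.munmap()
```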
Hello! I know picamera2 exists, but I really do not like to use it. It's very high level and hides a lot of stuff, which I do not like. I'd like to use libcamera very much like I used picamera (the original) and mmalobj; see https://picamera.readthedocs.io/en/release-1.13/api_mmalobj.html#performance-hints for instance, where I could go very low level. It would be good if there could be such documentation. Thanks! :)