How to set pipe buffer size? #85
Yes, I think that's all correct. The reader returned by `.reader()` does no in-process buffering of its own, so wrapping it in a `BufReader` gives you an in-process buffer much like Python's `bufsize`.
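For illustration, a minimal sketch of that equivalence (the ffmpeg arguments and frame size here are hypothetical placeholders):

```rust
use duct::cmd;
use std::io::{BufReader, Read};

fn main() -> std::io::Result<()> {
    // The reader returned by .reader() is unbuffered, so
    // BufReader::with_capacity adds the same kind of userspace buffer
    // that Python's `bufsize` argument configures.
    let reader = cmd!("ffmpeg", "-i", "input.avi", "-f", "rawvideo", "-").reader()?;
    let mut buffered = BufReader::with_capacity(100_000_000, reader);

    let mut frame = vec![0u8; 1920 * 1080 * 3]; // hypothetical raw frame size
    buffered.read_exact(&mut frame)?;
    Ok(())
}
```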
Right, there's no convenience wrapper for this right now. What you can do is call `os_pipe::pipe` yourself, pass the read end to the child's stdin, and write to the write end incrementally. Note that this would also work on the output side, if you wanted to manage reading yourself.
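A sketch of that approach, assuming a duct version whose `stdin_file` accepts os_pipe's pipe ends (if your versions don't provide that conversion directly, turn the read end into a `File` first):

```rust
use duct::cmd;
use std::io::Write;

fn main() -> std::io::Result<()> {
    // Create the pipe ourselves and give the read end to the child's stdin.
    let (reader, mut writer) = os_pipe::pipe()?;
    let handle = cmd!("ffmpeg", "-f", "rawvideo", "-i", "-", "out.mp4") // args are illustrative
        .stdin_file(reader)
        .start()?;

    // Feed data incrementally through the write end we kept.
    let frames: Vec<Vec<u8>> = Vec::new(); // stand-in for your real frame source
    for frame in &frames {
        writer.write_all(frame)?;
    }

    // Dropping the writer closes the child's stdin, which is the EOF signal
    // ffmpeg needs to finalize the file...
    drop(writer);
    // ...and only then do we wait for the child to finish.
    handle.wait()?;
    Ok(())
}
```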
Thanks for the quick reply. I'll use that approach. Btw, is it planned to add support for incremental writing in duct? (E.g. with a `.writer()` method.)
Proooooobably not :) The main reason we have `.reader()` is that reading comes with a natural cleanup signal: when the reader hits EOF, the child is finished and can be awaited. Writing has no equivalent signal, because the child is usually still working after you've written your last byte. So given all that, if we also defined a `.writer()`, it's not clear what would tell duct when to wait on the child.
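For what it's worth, here is the shape of the signal that makes `.reader()` tractable, as a small sketch:

```rust
use duct::cmd;
use std::io::Read;

fn main() -> std::io::Result<()> {
    let mut reader = cmd!("echo", "hi").reader()?;
    let mut output = Vec::new();
    // EOF is the unambiguous "the child is done" signal: once read_to_end
    // returns, duct can reap the child (and surface a non-zero exit status
    // as an io::Error). Writing has no equivalent built-in signal.
    reader.read_to_end(&mut output)?;
    Ok(())
}
```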
@oconnor663 Why can't dropping the writer be used as that indication? Streaming into a child process is a common use case, so it would be nice if duct could support it as well. Please consider it :)
Imagine a situation where you're writing something to the child (let's say some raw video), and the child is writing its output to disk (let's say the re-encoded video). Your write loop ends as soon as it writes the last byte to your in-process buffer. If you drop the writer at that point, and dropping is the cleanup signal, duct would reap (or kill) the child while it still has unfinished work flushing the re-encoded video to disk.
Can't the handle to the child process be decoupled from the writer?
@oconnor663 It works now, but I'm getting a lot of dropped frames when reading from 2 webcams at the same time (separate ffmpeg child processes, even in separate threads). I tried different sizes for the `BufReader` capacity, but it doesn't seem to help. Is it possible to set the size of the OS pipe buffer itself? (So I could set it to something like 100 MB.)
Maybe, but at that point is it any different from passing in an `os_pipe` pipe yourself?
If latency is the main issue, are you sure you want buffering at all? Why not read/write directly from/to your pipes, so that data is always immediately available to the recipient? It sounds like you're working with large enough frames that system call overhead shouldn't be an issue. If you're using a large read buffer to avoid blocking the writer on the other end, that's not how in-process read buffers work, as it sounds like you've noticed below. Your reader process isn't going to try to fill its buffer until you call read(), so the writer process will still tend to block when it fills up the OS buffer, 64 KiB by default on Linux. If promptly draining the pipe is important, you might want to have a dedicated thread reading it.
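A minimal sketch of that dedicated-thread idea (`FRAME_SIZE` is a hypothetical placeholder for your raw frame size):

```rust
use std::io::Read;
use std::sync::mpsc;
use std::thread;

const FRAME_SIZE: usize = 1920 * 1080 * 3; // hypothetical raw RGB frame

// Spawn a thread that drains the pipe as fast as it can, so the writer on
// the other end blocks as little as possible, and hands complete frames to
// the rest of the program over a channel.
fn spawn_drain_thread(mut pipe: impl Read + Send + 'static) -> mpsc::Receiver<Vec<u8>> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || loop {
        let mut frame = vec![0u8; FRAME_SIZE];
        if pipe.read_exact(&mut frame).is_err() {
            break; // EOF or read error: stop draining
        }
        if tx.send(frame).is_err() {
            break; // the receiving side hung up
        }
    });
    rx
}
```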
I don't think this is how it works. The Python docs say that `bufsize` is just passed through to `open()` when creating the stdin/stdout/stderr file objects, so it's an in-process buffer like `BufReader`, not the OS pipe buffer. It sounds like you're seeing different behavior between Python and Rust+duct+os_pipe, but I'm not clear on the exact differences. The Python version is working better? We're getting to the level of detail where it's hard for me to know what's going on without reading your code.
You'd need to write OS-specific code for that. On Linux it's the `F_SETPIPE_SZ` `fcntl`.
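On Linux, that would look something like this (a sketch using the `libc` crate; there's no Windows equivalent here):

```rust
#[cfg(target_os = "linux")]
fn set_pipe_size(fd: std::os::unix::io::RawFd, size: i32) -> std::io::Result<i32> {
    // F_SETPIPE_SZ is Linux-specific (kernel 2.6.35+). The kernel rounds the
    // requested size up to a power of two, and unprivileged processes are
    // capped at /proc/sys/fs/pipe-max-size (1 MiB by default).
    let ret = unsafe { libc::fcntl(fd, libc::F_SETPIPE_SZ, size) };
    if ret < 0 {
        Err(std::io::Error::last_os_error())
    } else {
        Ok(ret) // the size the kernel actually chose
    }
}
```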
This sounds much more complicated than any use case I've considered before. Will ffmpeg start dropping frames as soon as any pipe write would block? Or will it do some amount of internal buffering? If so, is that amount configurable? These seem like critical details, and I have no idea how they work. In my mind (without any experience doing anything this complicated with video pipes), it seems like the best approach would be to have our broker process be as dumb as possible, and just read and write data as quickly as possible, using dedicated threads to avoid introducing any extra latency. Then we could trust ffmpeg to handle the more complicated details around buffering and backpressure, because it's the Expert System that knows what the heck is going on. If we try to tune our buffering in the middle based on what we think ffmpeg is going to do in response, we're likely to get that wrong?
By the way, what is your broker process's actual job? Is it just copying between pipes, or does it do some processing in the middle?
But it seems that's only possible on Unix. I'm on Windows 8.1 btw. Could a duct API for setting the pipe buffer size be added?
Yeah, it seems the Python `bufsize` is just an in-process buffer. Here's my reading code: https://gist.github.com/Boscop/86c50317d407460fd5c2a8900fffbeec#file-ffmpeg_webcam_input-rs-L63-L75 The ffmpeg-calling thread sends each webcam frame over to the opengl thread, where it is then written to a texture, because each webcam is rendered as a virtual screen (quad) in the scene. When running this with 2 webcams, it works most of the time, sometimes even with 4. But not always: sometimes I still get dropped frames, and on some runs the 2nd webcam only produces a glitched frame that stays constant while the process is running. Any idea why that could be? (This is very bad.) While running with 4 webcams I profiled with VerySleepy, and these were the top exclusive functions: (profiler screenshot). Not sure which part of my code is the bottleneck; how much of it is writing the frame data to the textures? Any idea what I can still optimize? :)
I haven't tried the Python code, but with that 100 MB buffer it probably has huge latency (I'm wondering why they use such a large buffer). For my real-time use case I'm trying to achieve minimum latency (soon also when reading frames from video files as input, in a similar way).
Not sure what criteria ffmpeg has for dropping frames; I'll run some tests soon. Once I start reading from video files, dropped frames would be bad, because I want to record my opengl output as a video, which should have the same number of frames as the inputs, and they have to match up in the timeline.
Not sure what you mean by "broker". The purpose of the process is to read in multiple inputs (each either a webcam or a video) using ffmpeg (all at the same framerate), write them to textures that represent virtual screens in a 3d scene, post-process them with music-synced shader effects plus other synced physical objects / shaders (such that every change in the music has a visual equivalent), render the full scene as an equirectangular projection, fetch each frame from the GPU, and pipe it into another ffmpeg process that writes a 360° video (in 8K resolution).
I expect this to work on Windows. Are you hitting an error? Could you copy it here?
I suppose it all depends where your bottleneck is expected to be, and who's responsible for making the decision about dropping frames? Should you rely on ffmpeg to drop frames for you, and organize your piping and channels to provide backpressure as quickly as possible? Or should you take all the frames you can from ffmpeg, and figure out whether you need to drop frames once they're in your own buffers? I don't have any experience in this area, so I don't know which way is the Right Way, but I suspect it's important to pick one approach and stick to it consistently at each step of the pipeline.
No idea about optimizations, but you could consider wrapping your pipes and channels in heavy logging, so that you know when each read comes in and when each write finishes. That should give you an idea about where in the pipeline the bottleneck begins?
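For example, a hypothetical logging wrapper around any reader in the pipeline might look like:

```rust
use std::io::{self, Read};
use std::time::Instant;

// Wraps a reader and logs how long each read blocked and how much data it
// returned, to help locate where in the pipeline the stall begins.
struct LoggingReader<R: Read> {
    inner: R,
    name: &'static str,
}

impl<R: Read> Read for LoggingReader<R> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        let start = Instant::now();
        let n = self.inner.read(buf)?;
        eprintln!("[{}] read {} bytes after {:?}", self.name, n, start.elapsed());
        Ok(n)
    }
}
```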
Ah thanks, it turns out it works on Windows. I only thought it was unix-only because of where it appears in the docs. Btw, this is how I'm writing the video frames out now: the same os_pipe approach as above, writing each frame to the pipe as it comes off the GPU. Also, right now I'm capturing ffmpeg's console output by hand; is there a simpler way to log both its stdout and stderr to files?
If you just want to log both streams to disk, you can open both target files and then pass those files to the `stdout_file` and `stderr_file` methods.
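For example (the file names here are placeholders):

```rust
use duct::cmd;
use std::fs::File;

fn main() -> std::io::Result<()> {
    let out_log = File::create("ffmpeg_stdout.log")?;
    let err_log = File::create("ffmpeg_stderr.log")?;
    // Each stream goes straight to its own file; no extra threads needed.
    cmd!("ffmpeg", "-version")
        .stdout_file(out_log)
        .stderr_file(err_log)
        .run()?;
    Ok(())
}
```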
@oconnor663 Thanks, that works. Btw, is there a crate for splitting a string into cmd args? I can't just split by whitespace because it might be escaped or inside a string, but I'd like to read cmd args from my gui's edit field, which is a string containing multiple args that needs to be split before I can call duct.
Hmm, there likely is something, though I've never done it and don't know anything off the top of my head. Another option you could consider is https://crates.io/crates/duct_sh, which is a tiny wrapper library that handles spawning shell commands via Duct on Unix and Windows. You could just pass the user's command directly to the shell? That's assuming your user is completely trusted, of course. And you'd be exposing users to the differences between the Windows shell and the Unix one(s), but that's essentially already the case just from launching commands by name I think.
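A sketch of that, assuming duct_sh's `sh_dangerous` (its `sh` function deliberately takes only a `&'static str`, so runtime strings from a GUI field go through the "dangerous" variant):

```rust
use duct_sh::sh_dangerous;

fn main() -> std::io::Result<()> {
    // Only safe if the user is fully trusted: the string is handed to the
    // platform shell (sh on Unix, cmd.exe on Windows) without escaping.
    let user_input = String::from("ffmpeg -version"); // e.g. from the GUI edit field
    let output = sh_dangerous(user_input).read()?;
    println!("{}", output);
    Ok(())
}
```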
How can I set the size of the pipe buffer with `duct`? I need to set a buffer of size 100 MB for reading video frames from an ffmpeg child process (streamingly, reading each frame before it terminates) through a pipe, like this Python code does:

```python
pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)
```

but in Rust with `duct` :) Is the pipe unbuffered by default, and if I do `BufReader::with_capacity(100_000_000, cmd!(...).reader()?)`, will that be equivalent to that Python code?

And for writing video frames to another ffmpeg child process, how can I incrementally feed data to its stdin? (Using duct as well.) I don't see a `.writer()` method.

And how can I then close the stdin of the video-writing ffmpeg child process to tell it that no more frames are incoming, so that it can finalize writing the video? (I don't want to forcefully kill it when my process terminates, because then it wouldn't finalize the video file.)