Crash trying to encode large buffer #17
Hi, thanks for the great bug report! I could easily see what went wrong and reproduce the issue. The problem was a bit tricky to investigate and fix, as it boiled down to The possible solutions are to modify Please let me know if the fix works for you or if you have anything else to add 😄 Edit: minor wording tweaks according to a (now deleted) comment by @emoon, thanks!
Related upstream PR: xiph/vorbis#104
If the issue isn't fixed in vorbis, it's also possible to apply a workaround inside the Rust code similar to what I did in my own code (i.e., send smaller chunks of the buffer over to the C code instead).
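The chunking workaround described above could be sketched roughly like this. This is only an illustration of the slicing logic, not the crate's actual API: `encode_block` here is a placeholder closure standing in for a call such as `encode_audio_block`.

```rust
// Sketch of the chunking workaround: split per-channel sample buffers into
// fixed-size blocks before handing each block to the encoder. The closure
// stands in for the real encoder call.
fn encode_in_chunks<F>(channels: &[Vec<f32>], chunk_len: usize, mut encode_block: F)
where
    F: FnMut(&[&[f32]]),
{
    let total = channels.first().map_or(0, |c| c.len());
    let mut start = 0;
    while start < total {
        let end = (start + chunk_len).min(total);
        // Borrow the matching slice of every channel for this block.
        let block: Vec<&[f32]> = channels.iter().map(|c| &c[start..end]).collect();
        encode_block(&block);
        start = end;
    }
}
```

With, say, two channels of 100,000 samples and a chunk length of 48,000, the closure is invoked three times, with blocks of 48,000, 48,000, and 4,000 samples per channel.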
Yes, that's also a valid option, although the exact maximum buffer size is platform-dependent and thus difficult to calculate correctly. Given that
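To illustrate why a "safe" chunk size is awkward to pin down, here is a hypothetical helper that picks a conservative per-channel chunk length, under the assumption that the underlying C code sizes its buffers with a 32-bit signed byte count. The function name and the headroom factor are made up for this sketch; they are not part of the crate.

```rust
// Hypothetical helper: choose a chunk length (samples per channel) that keeps
// the total byte count of the f32 buffers well below i32::MAX, assuming the
// underlying C library uses a 32-bit int for buffer sizing. Illustrative only.
fn safe_chunk_len(channels: usize) -> usize {
    let bytes_per_sample = std::mem::size_of::<f32>();
    // Halve the limit to leave generous headroom below 32-bit overflow.
    (i32::MAX as usize / 2) / (channels * bytes_per_sample)
}
```

Even this is a guess: the real limit depends on how the C code multiplies sizes internally, which is exactly why fixing the overflow upstream is preferable to tuning chunk sizes.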
Wanted to post an update about this here. I have tested with the latest code and it does indeed fix the crash, but there is another problem with it. When I use my old code, where I send chunks of samples (in my case 48k for each channel) to When I use the latest code and send the whole buffer, it now takes about 150 sec (!) to do the encoding, so almost 50x longer. I'm sure it's not the fault of the Rust crate here, but this is something users may run into if they also send in a large buffer and have no idea what to expect of the encoding time.
While at it, let's also make it clear that input samples are conventionally assumed to be in the [-1, 1] interval. Related issue: #17
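For illustration, converting signed 16-bit PCM into that floating-point range could look like the sketch below. The function name is made up for this example; dividing by 32768.0 maps `i16::MIN` to exactly -1.0 and `i16::MAX` to just under 1.0.

```rust
// Sketch: convert signed 16-bit PCM to f32 samples in the conventional
// [-1, 1] range. 32768.0 is a power of two, so the division is exact.
fn i16_to_f32(samples: &[i16]) -> Vec<f32> {
    samples.iter().map(|&s| f32::from(s) / 32768.0).collect()
}
```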
Great catch! I attached a profiler to two runs of the encoder over a 5-minute, mono, 44.1 kHz song, the only difference being that in one run I chunked the input into blocks of 65536 samples, while in the other I handed the entire song to the encoder in a single block. The results were as follows:

As stands out from the profiler results, the slowdown in the non-chunked case is entirely due to a very slow implementation of

In stark contrast, when chunking the input into smaller blocks, the encoding is responsible for most CPU cycles, even though

In any case, there isn't anything this Rust crate can do about the slowdown, other than trying to come up with some libvorbis patch to improve the performance of

For now, I've added the following note to the
Hi,
First of all, thanks for a great crate.
I'm running into a crash when I'm trying to encode a "large" buffer. Here is a small repro case for it.
Callstack
I can work around this issue by manually sending smaller chunks of the data to `encode_audio_block`, but I don't see any limitation in the API/docs that would indicate that I have to do that.