This was originally brought up by @dbaeumer - having to make a function call per directory entry when reading through the directory entry stream might introduce unnecessary performance costs for iterating large directories.
It could be useful to be able to fetch a chunked list via a read variant that takes an upper limit on the number of directory entries to return per call, batching reads together.
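As a minimal sketch of what such a variant could look like in WIT - where `read-directory-entries` and its `max` parameter are hypothetical names, not anything defined by wasi-filesystem:

```wit
resource directory-entry-stream {
    /// Hypothetical batched variant: return up to `max` entries per call,
    /// amortizing the per-call overhead across a chunk of entries.
    /// An empty list would signal end-of-stream.
    read-directory-entries: func(max: u64) -> result<list<directory-entry>, error-code>;
}
```

A caller would then loop over chunks instead of individual entries, so the number of cross-boundary calls scales with the number of chunks rather than the number of entries.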
Yes, this makes sense to look into. The current iterator API is inspired by POSIX readdir which iterates over entries one at a time, with typical implementations reading in batches under the covers. But if benchmarks show that batching would be faster, we should consider it.
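For reference, the current per-entry shape in wasi-filesystem is roughly:

```wit
resource directory-entry-stream {
    /// Read a single directory entry from the stream; `none` signals
    /// end-of-stream. Each entry costs one call across the guest/host boundary.
    read-directory-entry: func() -> result<option<directory-entry>, error-code>;
}
```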
When this was originally raised it was based on performance benchmarking that, it was later clarified, was not using the Wasmtime compilation cache when running the component benchmarks - with the cache enabled, the benchmarks showed parity with preview1 and lower call overhead.
@dbaeumer do you think this should still be considered a priority for providing a noticeable performance improvement?
@guybedford for VS Code this is not necessary anymore. The performance problem arose from the fact that we had to do a context switch for every read. I decided to put the result in shared memory to avoid such a context switch.
However, other implementations might hit the same problem. But from the VS Code side this can be closed.