about hdf5 #11
https://pytorch.org/audio/datasets.html#yesno
Besides, I found an absurd phenomenon. E.g., all my wav files are under a /wav folder, and first I have a read-wav function like this:
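(The snippet is not preserved in this thread; below is a minimal sketch of such a read function, assuming torchaudio is used for decoding.)

```python
import torchaudio

def read_wav(path):
    # Decode one wav file; returns (waveform, sample_rate), where waveform
    # is a float tensor of shape (channels, num_frames).
    waveform, sample_rate = torchaudio.load(path)
    return waveform, sample_rate
```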
In fact, when I set num_workers from 0 to 4 (my workstation is equipped with 8 CPUs), the speed does not improve.
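For context, a sketch of how such a dataset might be wired into a DataLoader; the paths list and batch size here are placeholders:

```python
import torchaudio
from torch.utils.data import DataLoader, Dataset

class WavFolderDataset(Dataset):
    """Hypothetical dataset over a list of wav file paths."""
    def __init__(self, paths):
        self.paths = paths

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        waveform, _sample_rate = torchaudio.load(self.paths[idx])
        return waveform

paths = ["wav/0.wav", "wav/1.wav"]  # placeholder file list
# num_workers > 0 spawns worker processes that run __getitem__ in parallel;
# if disk I/O is the bottleneck, more workers may not make loading faster.
loader = DataLoader(WavFolderDataset(paths), batch_size=1, num_workers=4)
```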
I have several hundred GB of wav files on my disk (about 1,000 hours of audio). I found that directly reading the wav files is slow for training, so I chose LMDB and HDF5 as options. However, I found that HDF5 does not support concurrent reads, i.e. num_workers in DataLoader cannot be more than 1. How do you solve this problem? Thanks.
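For reference, a common workaround (not taken from this thread) is to open the HDF5 file lazily inside each worker process rather than in __init__, since an h5py handle opened before the DataLoader forks cannot be safely shared across workers. A minimal sketch, assuming h5py and a dataset named "wav" inside the file:

```python
import h5py
from torch.utils.data import Dataset

class H5WavDataset(Dataset):
    """Hypothetical HDF5-backed dataset; the 'wav' dataset name is an assumption."""
    def __init__(self, h5_path):
        self.h5_path = h5_path
        self.file = None  # do NOT open here: a handle opened before the
                          # DataLoader forks cannot be shared across workers
        with h5py.File(h5_path, "r") as f:
            self.length = len(f["wav"])

    def __len__(self):
        return self.length

    def __getitem__(self, idx):
        # First access inside each worker opens a private handle, so
        # num_workers > 1 works: every worker reads through its own file object.
        if self.file is None:
            self.file = h5py.File(self.h5_path, "r")
        return self.file["wav"][idx]
```

With this pattern, each of the 4 workers ends up holding its own read-only handle, which sidesteps the single-handle concurrency limitation.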