I am converting a large matrix with dimensions 261 x 252,000,000 to an NWB file using the iterative data writing method. However, the conversion takes a long time to finish. The computer where the matrix is located, and where we want to store the NWB file, has a GPU.
Hence, I am wondering whether GPU power can be used to accelerate this procedure. If not, could you suggest another approach to speed up the conversion?
Many thanks for considering my request!
The limiting factor on conversion performance is not compute power (i.e., CPU or GPU resources) but I/O bandwidth. Generally speaking, for the iterative write I would try to increase the amount of data written per iteration, e.g., instead of writing 10 MB per iteration, try writing 1 GB per iteration.
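One way to control this is hdmf's `GenericDataChunkIterator`, which exposes the per-iteration buffer size directly through its `buffer_gb` argument. Below is a minimal sketch; the `np.memmap` source, the file name `data.bin`, and the (samples, channels) shape are assumptions standing in for however the matrix is actually stored on disk.

```python
# A minimal sketch: subclass GenericDataChunkIterator and set buffer_gb
# so each write iteration moves ~1 GB instead of many small blocks.
import numpy as np
from hdmf.data_utils import GenericDataChunkIterator

class MatrixIterator(GenericDataChunkIterator):
    def __init__(self, matrix, **kwargs):
        self._matrix = matrix
        super().__init__(**kwargs)

    def _get_data(self, selection):
        # selection is a tuple of slices chosen by the base class
        return self._matrix[selection]

    def _get_maxshape(self):
        return self._matrix.shape

    def _get_dtype(self):
        return self._matrix.dtype

# Hypothetical on-disk source; shape is (samples, channels), i.e. the
# 261 x 252,000,000 matrix transposed to the time-first layout NWB expects.
matrix = np.memmap('data.bin', dtype='float32', mode='r',
                   shape=(252_000_000, 261))

# Buffer ~1 GB of data per write iteration.
data = MatrixIterator(matrix, buffer_gb=1.0)
```

The resulting iterator can then be passed as the `data` argument of the container being written (e.g., a `TimeSeries`), so the larger per-iteration writes happen transparently during `NWBHDF5IO.write`.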
@GoktugAlkan I just wanted to check in to see whether this is still an issue you are interested in resolving. I am also wondering whether this issue is mainly connected to this comment: NeurodataWithoutBorders/pynwb#1685 (comment)
That looks like something that should be debugged.