f3write speed graph #220
For example, I wrote a simple hddtest script, which reads data from a device and prints its reading speed per sector. It is very helpful for HDD testing, since you can spot potential bad blocks. But it is also useful for SSDs, since you can see the actual reading speed per % of the device. Here is an example output for a real SSD device:
It would be nice to have the same for the f3write tool: a table of write speeds per percent. Here is a link: https://gitlab.com/axet/homebin/-/blob/debian/hddtest.py
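For context, here is a minimal Python sketch of the kind of per-bin read-speed measurement described above (the real script is at the link; the 1 MiB chunk size and 20 bins are assumptions of this sketch):

```python
# Sketch only, not the actual hddtest.py: read a device in fixed-size
# chunks and print the average read speed for each 5% of its capacity.
import sys
import time

CHUNK = 1 << 20   # 1 MiB per read (assumed)
BINS = 20         # one bin per 5% of the device (assumed)

def read_speed_graph(path):
    with open(path, "rb", buffering=0) as dev:
        dev.seek(0, 2)
        size = dev.tell()          # device/file size in bytes
        dev.seek(0)
        bin_bytes = size // BINS
        done = 0
        t0 = time.monotonic()
        for i in range(BINS):
            # Read until this bin's share of the device is consumed.
            while done < (i + 1) * bin_bytes:
                data = dev.read(CHUNK)
                if not data:
                    break
                done += len(data)
            t1 = time.monotonic()
            speed = bin_bytes / (t1 - t0) / 1e6   # MB/s for this bin
            print(f"{(i + 1) * 5:3d}%  {speed:8.1f} MB/s")
            t0 = t1

if __name__ == "__main__":
    read_speed_graph(sys.argv[1])
```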
I did a little research and found out that the random generator used in f3 is very slow. According to my tests, it only reaches about 2 GB/s at peak (on my hardware, an AMD Ryzen 5 5600H). Modern SSDs can reach 7 GB/s of I/O throughput, which means the f3 generator needs improvement, since its performance isn't enough for the best SSDs on the market. Here is my C test code to measure the maximum f3 generator performance:

There is no way you can generate random data at speeds above 2 GB/s on average hardware, so the only option is to use pre-generated data. Here are a few Python examples which prove my point:

This new generator can still be complex enough to make the data unpredictable from the SSD's side, so it cannot be faked by the drive. To make it work, I extended my hddtest.py script with the ability to write (--write) a random data sequence to the SSD and read (--read) it back at 20 GB/s. The new generator uses a 128 MB dictionary and reads data from it at random offsets. That makes it impossible for the SSD to predict the data, since byte blocks never repeat themselves byte-for-byte across sector-sized blocks. Every block produced by the generator starts at a random offset into the initial dictionary and looks like shifted data; because the offset is random and the dictionary is large, all blocks are different. Here is my code (Python):
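A minimal Python sketch of the dictionary-based generator described above, assuming a 128 MiB dictionary and 1 MiB output blocks (the actual script linked above may differ):

```python
# Sketch of a dictionary-based generator: pre-generate one large random
# buffer once, then emit blocks that are the buffer read at random
# offsets, wrapping around, so byte-identical blocks almost never repeat.
import os
import random

DICT_SIZE = 128 * 1024 * 1024   # 128 MiB pre-generated dictionary (assumed)
BLOCK = 1 << 20                  # 1 MiB per emitted block (assumed)

dictionary = os.urandom(DICT_SIZE)   # one-time cost; the hot path only copies
rng = random.Random()

def next_block():
    """Return one output block: the dictionary 'shifted' to a random offset."""
    off = rng.randrange(DICT_SIZE)
    block = dictionary[off:off + BLOCK]
    if len(block) < BLOCK:                       # wrap past the end
        block += dictionary[:BLOCK - len(block)]
    return block
```

The hot path is a memory copy plus one random number per block, which is why this approach can sustain multi-GB/s rates that a per-byte generator cannot.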
Hi @axet,

On the speed graph: your Python script averages the write speed for every 5% of data written. This is why you obtained average write speeds that were much closer to each other than what you expected in your first post. To approximate your original expectation, you need to bin the write speed samples similarly to what you did with write times.

On the speed of the random generator: the solution that people have been employing to test huge, fast drives is to have multiple instances of f3write running in parallel.

Your ideas help reduce the effort required to implement them. But if you want to see them implemented, you should plan to convert them into pull requests. If you're unfamiliar with C, you can implement your version of F3 in Python, and I'll add a link to your work somewhere in the documentation so people can find it.
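A minimal sketch of the binning the reply suggests, in Python (the function name, sample format, and 20-bin default are illustrative, not part of f3):

```python
# Average raw per-write speed samples into one speed per bin of the
# device, instead of averaging speeds over arbitrary sample counts.
def bin_speeds(samples, total_bytes, bins=20):
    """samples: list of (nbytes, seconds) per write call, in write order.
    Returns the average speed in MB/s for each bin of the device."""
    per_bin = total_bytes / bins
    speeds, acc_bytes, acc_time = [], 0, 0.0
    for nbytes, secs in samples:
        acc_bytes += nbytes
        acc_time += secs
        if acc_bytes >= per_bin:          # bin is full: flush its average
            speeds.append(acc_bytes / acc_time / 1e6)
            acc_bytes, acc_time = 0, 0.0
    if acc_bytes:                          # leftover partial bin
        speeds.append(acc_bytes / acc_time / 1e6)
    return speeds
```

Summing bytes and seconds per bin before dividing weights each write by its size, which avoids the smoothing effect of averaging already-averaged speeds.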
Hello!
Modern SSDs have different speeds depending on how much data has been written. For small amounts (a few GB) the write speed can be 400 MB/s, while the rest of the flash writes at 40 MB/s. This is normal, but the f3write tool only reports the average speed.
This should be addressed, and the average speed replaced with a speed graph. Where the current output reports only a single average, the expected output would show write speeds per range of the drive. It could even be smart about it and detect speed drops, reporting the exact GB mark at which the speed drops significantly; see the sketch below.
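For illustration only (all numbers here are invented, and the table format is a suggestion, not f3write's actual output), the proposed per-range report could look something like:

```
  0% -   5%:  412 MB/s
  5% -  10%:  407 MB/s
 10% -  15%:   45 MB/s   <- speed drop detected at ~12 GB
 ...
 95% - 100%:   41 MB/s
Average writing speed: 79 MB/s
```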