f3write speed graph #220

Open
axet opened this issue Jul 1, 2024 · 4 comments

axet commented Jul 1, 2024

Hello!

Modern SSDs write at different speeds depending on how much data has already been written. For small amounts (a few GB) the write speed can be 400MB/s, while the rest of the flash is written at 40MB/s. This is normal. But the f3write tool only reports an average speed.

This should be addressed by replacing the average speed with a speed graph. Say the current output is:

Creating file 235.h2w ... OK!                        
Creating file 236.h2w ... OK!                        
Creating file 237.h2w ... OK!                        
Free space: 0.00 Byte
Average writing speed: 60.20 MB/s

Expected output:

Creating file 235.h2w ... OK!                        
Creating file 236.h2w ... OK!                        
Creating file 237.h2w ... OK!                        
Free space: 0.00 Byte
10% 400MB/s
20% 400MB/s
30% 30MB/s
...
90% 30MB/s
100% 30MB/s
Average writing speed: 60.20 MB/s

It could even be smart and detect speed drops, by finding at exactly which GB the measured speed gets significantly lower (a sketch of this binning and merging follows the example output below):

Expected output:

Creating file 235.h2w ... OK!                        
Creating file 236.h2w ... OK!                        
Creating file 237.h2w ... OK!                        
Free space: 0.00 Byte
0-1% 400MB/s
1-100% 30MB/s
Average writing speed: 60.20 MB/s
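
A minimal sketch of that binning and merging, in Python. The names are hypothetical, and `samples` is assumed to be a list of (bytes_written, seconds) pairs collected per write chunk, which is not something f3write exposes today:

```python
# Sketch: collapse per-chunk write measurements into a compact speed graph.
# `samples` is a hypothetical list of (bytes_written, seconds) pairs in
# write order; f3write itself does not currently expose such samples.

def speed_graph(samples, buckets=100, tolerance=0.25):
    total = sum(nbytes for nbytes, _ in samples)

    # Bin samples into percent buckets by cumulative position.
    bin_bytes = [0.0] * buckets
    bin_secs = [0.0] * buckets
    done = 0
    for nbytes, secs in samples:
        i = min(buckets - 1, done * buckets // total)
        bin_bytes[i] += nbytes
        bin_secs[i] += secs
        done += nbytes
    speeds = [b / s if s else 0.0 for b, s in zip(bin_bytes, bin_secs)]

    # Merge adjacent buckets whose speeds are within `tolerance` (25%),
    # so a steady tail collapses into one line instead of twenty rows.
    ranges = []  # (start_pct, end_pct, speed)
    start, cur = 0, speeds[0]
    for i in range(1, buckets):
        if abs(speeds[i] - cur) > tolerance * cur:
            ranges.append((start, i, cur))
            start, cur = i, speeds[i]
        else:
            # Running average keeps the merged range representative.
            cur = (cur * (i - start) + speeds[i]) / (i - start + 1)
    ranges.append((start, buckets, cur))

    for lo, hi, s in ranges:
        print(f"{lo}-{hi}% {s / 1e6:.0f}MB/s")

# Example: ~2GB written at ~400MB/s, then ~30GB at ~30MB/s.
speed_graph([(100 * 2**20, 0.25)] * 20 + [(100 * 2**20, 3.5)] * 300)
```

The relative tolerance is the "smart" part: instead of printing one row per fixed percent step, nearby buckets merge into ranges like 0-6% and 6-100%.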

axet commented Jul 3, 2024

For example, I wrote a simple hddtest script which reads data from a device and prints its read speed per sector. It is very helpful for HDD testing, since you can see potential bad blocks. But it is also useful for SSDs, since you can see the actual read speed per % of the device. Here is example output for a real SSD device:

root@axet-desktop:~/bin# hddtest.py -m 1000000000 -o $(date +"%Y-%m-%d").txt /dev/nvme0n1

duration:  0:03:48
sector size:  512
total sectors:  1000215216
sectors read:  1000215216
badblocks count: 0 - 0%

250ns 939714560 93%
500ns 50001920 4%
5000ns 6814384 0%
10000ns 1562624 0%
100000ns 2121728 0%

read speeds:
  5% 668.0MiB/s
 10% 1.3GiB/s
 15% 1.3GiB/s
 20% 2.9GiB/s
 25% 3.0GiB/s
 30% 2.9GiB/s
 35% 2.9GiB/s
 40% 3.0GiB/s
 45% 3.0GiB/s
 50% 3.0GiB/s
 55% 3.0GiB/s
 60% 3.0GiB/s
 65% 3.0GiB/s
 70% 3.0GiB/s
 75% 3.0GiB/s
 80% 2.9GiB/s
 85% 3.0GiB/s
 90% 3.0GiB/s
 95% 3.0GiB/s
100% 2.9GiB/s

It would be nice to have this for the f3write tool, as a table of write speeds per percent.

Here is a link to the script: https://gitlab.com/axet/homebin/-/blob/debian/hddtest.py
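
The linked script is the reference; a minimal sketch of the core measurement in Python is below. The device path and chunk size are illustrative, and note that without O_DIRECT the page cache can inflate re-read speeds:

```python
# Sketch: time sequential reads of a block device, chunk by chunk.
# The path and 1 MiB chunk size are illustrative; run as root. Without
# O_DIRECT, previously read data may be served from the page cache.
import os
import time

CHUNK = 1 << 20  # 1 MiB

def read_timings(path):
    timings = []  # (bytes_read, seconds) per chunk, in device order
    fd = os.open(path, os.O_RDONLY)
    try:
        while True:
            t0 = time.perf_counter()
            buf = os.read(fd, CHUNK)
            dt = time.perf_counter() - t0
            if not buf:
                break
            timings.append((len(buf), dt))
    finally:
        os.close(fd)
    return timings

# Unusually slow chunks hint at weak sectors; binning the timings by
# position on the device gives the per-percent table shown above.
```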


axet commented Jul 9, 2024

I did a little research and found out that the random generator used in f3 is very slow. According to my tests, its speed can only reach 2GB/s at peak (on my hardware, an AMD Ryzen 5 5600H). Modern SSDs can reach 7GB/s of I/O, which means the f3 generator needs improvements, since its performance isn't enough for the best SSDs on the market.

Here is my C test code to see the maximum f3 generator performance:
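
A minimal sketch of that benchmark idea, in Python for consistency with the other examples here. The recurrence is an assumption taken from f3's utils.h (prv * 4294967311 + 17, mod 2**64); verify it against the f3 source. A pure-Python loop is far slower than the C original, so only the shape of the measurement is meant:

```python
# Sketch: measure the throughput of an f3-style generator.
# The recurrence below is assumed to match f3's random_number() in
# utils.h; confirm against the source. Absolute numbers from pure
# Python are far below what the equivalent C loop would report.
import array
import time

MASK = (1 << 64) - 1

def fill(seed, nwords):
    """Generate nwords 64-bit values with the assumed f3 recurrence."""
    out = array.array("Q")
    x = seed
    for _ in range(nwords):
        out.append(x)
        x = (x * 4294967311 + 17) & MASK
    return out.tobytes()

N = 8 * 2**20  # 8 MiB per round, small enough for a quick Python run
t0 = time.perf_counter()
buf = fill(0, N // 8)
dt = time.perf_counter() - t0
print(f"{len(buf) / dt / 1e6:.1f} MB/s")
```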

There is no way you can generate random data at speeds above 2GB/s on average hardware, so the only option is to use pre-generated data. Here are a few Python examples which prove my point:
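
A hedged sketch of the comparison: fresh randomness is limited by the generator itself, while serving pre-generated data through zero-copy views is limited only by memory (exact numbers vary by machine):

```python
# Sketch: fresh random bytes vs. re-slicing a pre-generated buffer.
import os
import time

N = 1 << 30  # 1 GiB

# Fresh random bytes every time: throughput is bounded by the generator.
t0 = time.perf_counter()
data = os.urandom(N)
t1 = time.perf_counter()
print(f"os.urandom:    {N / (t1 - t0) / 1e9:.2f} GB/s")

# Pre-generated pool served via zero-copy views: effectively free, so the
# drive's I/O speed becomes the only limit.
pool = memoryview(data)
t0 = time.perf_counter()
out = 0
for off in range(0, N, 1 << 20):
    chunk = pool[off:off + (1 << 20)]  # a view, no copy is made
    out += len(chunk)
t1 = time.perf_counter()
print(f"pre-generated: {out / (t1 - t0) / 1e9:.2f} GB/s")
```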

This new generator can still be complex enough to keep the data unpredictable from the SSD's side, so it cannot be faked by the drive. To make it work, I extended my hddtest.py script with the ability to write (--write) a random data sequence to the SSD and read (--read) it back at 20GB/s. The new generator uses a 128MB dictionary and reads data at random offsets from it. That makes it impossible for the SSD to predict the data, since byte blocks never repeat themselves byte-for-byte across sector blocks. Every new block produced by the generator starts at a random offset into the dictionary and looks like shifted data. But since the offset is random and the dictionary is big, all blocks are different. Here is my code (Python):
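
A minimal reconstruction of that dictionary scheme, not the hddtest.py original; the 128MB pool and sector-sized blocks are the parameters described above:

```python
# Sketch of the dictionary generator described above: one 128 MB random
# pool, and every output block is a view at a random offset into it, so
# blocks effectively never repeat byte-for-byte. A reconstruction, not
# the hddtest.py original.
import os
import random

DICT_SIZE = 128 * 2**20  # 128 MB dictionary, generated once up front
BLOCK = 512              # sector-sized output blocks

pool = memoryview(os.urandom(DICT_SIZE))

def next_block(rng):
    off = rng.randrange(DICT_SIZE - BLOCK)
    return pool[off:off + BLOCK]  # zero-copy slice of the dictionary

# Verification must reproduce the same offsets, so the write pass and the
# read-back pass seed their RNGs identically.
rng = random.Random(42)
blocks = [next_block(rng) for _ in range(4)]
```

Seeding the offset stream is the key design point: the read-back pass regenerates the same sequence of slices and compares them against what the drive returns.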

AltraMayor (Owner) commented

Hi @axet,

On the speed graph. Your Python script averages the write speed for every 5% of data written. This is why you obtained average write speeds that were much closer to each other than what you expected in your first post. To approximate your original expectation, you need to bin the write speed samples similarly to what you did with write times.

On the speed of the random generator. The solution that people have been employing to test huge, fast drives is to have multiple instances of f3write running simultaneously. Someone even posted a script to do that somewhere. Changing the random generator would break compatibility with H2testw; this was a user request.
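
For reference, a hedged sketch of that multi-instance approach; f3write's --start-at/--end-at options are used here to split the file range, but confirm the exact flags against your f3 version:

```python
# Sketch: run two f3write instances over disjoint file ranges in parallel.
# --start-at / --end-at are f3write's file-number options; check
# `f3write --help` on your version, and adjust the mount point.
import subprocess

MOUNT = "/mnt/flash"  # hypothetical mount point of the drive under test
procs = [
    subprocess.Popen(["f3write", "--start-at=1", "--end-at=50", MOUNT]),
    subprocess.Popen(["f3write", "--start-at=51", "--end-at=100", MOUNT]),
]
for p in procs:
    p.wait()
```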

Your ideas help reduce the effort required to implement them. But if you want to see them implemented, you should plan to convert them into pull requests. If you're unfamiliar with C, you can implement your version of F3 in Python, and I'll add a link to your work somewhere in the documentation so people can find it.


axet commented Jul 9, 2024

  1. I do compute the average per user-selected percent (5% by default). I'm not sure what you are talking about. The code uses the same logic for the per-sector write speeds as for the per-percent average speeds.

  2. I am not trying to change the f3 project, both for security reasons (my algorithm could be compromised by my lack of experience) and for compatibility with H2testw.

  3. I am only sharing ideas which could make this project better. In any case, I think my Python script is perfectly fine as a replacement for f3write/f3read for my personal purposes.
