A GUIDE TO BENCHMARKING NBDKIT


General comments
================

* The plugin matters!  Different plugins have completely different
  uses, implementations and threading models.  There is little point
  in talking generically about “the performance of nbdkit” without
  mentioning what plugin you are testing.

* The client matters!  Does the client support multi-conn?  Does the
  client use the oldstyle or newstyle protocol?  Has the client been
  written with performance in mind?  The best clients currently are
  (a) the Linux kernel (nbd.ko), (b) qemu, and (c) fio.  Make sure
  you are using recent versions and have multi-conn enabled (see the
  example after this list).

* Filters impair performance!  When benchmarking you should never use
  filters unless filters are what you are trying to benchmark.
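
For example, with the Linux kernel client, multi-conn is used when
you ask nbd-client for several connections.  The count of 4 below is
only a starting point; vary it for your machine:

  nbd-client -C 4 -unix /tmp/socket /dev/nbd0
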
Testing using fio
=================

fio is the Flexible I/O tester written by Jens Axboe, and it is the
primary tool used for generating load when testing filesystems and
block devices.

(1) Install libnbd.

(2) Clone and compile fio:

      https://github.com/axboe/fio

    using:

      ./configure --enable-libnbd

(3) Edit the test file in examples/nbd.fio, if required.

(4) Run nbdkit and fio together.  From the fio source directory:

      rm -f /tmp/socket
      nbdkit -f -U /tmp/socket null 1G --run './fio examples/nbd.fio'

    If you want to use nbdkit from the source directory too, change
    ‘nbdkit’ to the path of the wrapper, e.g.:

      rm -f /tmp/socket
      ../nbdkit/nbdkit -f -U /tmp/socket null 1G --run './fio examples/nbd.fio'
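
If you want a rough idea of what such a job file contains, here is a
minimal sketch (assuming fio was configured with --enable-libnbd so
that the ‘nbd’ ioengine is available; examples/nbd.fio in the fio
source is the authoritative version):

  [global]
  ioengine=nbd
  uri=nbd+unix:///?socket=/tmp/socket
  rw=randrw
  time_based
  runtime=60

  [job0]
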
Variations
----------

* Try adjusting the number of fio jobs (threads).

* Try adjusting the number of nbdkit threads (nbdkit -t option), as
  shown below.

* Use other plugins.  Both nbdkit-memory-plugin and nbdkit-file-plugin
  are important ones to test.

* Run nbdkit under perf:

    perf record -a -g --call-graph=dwarf -- \
        server/nbdkit -f -U /tmp/socket \
        ./plugins/null/.libs/nbdkit-null-plugin.so 1G
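
For example, to run the same fio test against nbdkit with 16 threads
(tune the count to your machine):

  rm -f /tmp/socket
  nbdkit -t 16 -f -U /tmp/socket null 1G --run './fio examples/nbd.fio'

After a perf run completes, examine where the time was spent with:

  perf report
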
Testing using the Linux kernel client
=====================================

(1) Obtain or compile fio, as in step (2) above.  (An ordinary build
    of fio is sufficient here, because fio runs against a mounted
    filesystem rather than using the libnbd ioengine.)

(2) Create the fio configuration file.

    Create /var/tmp/test.fio containing:
----------------------------------------------------------------------
[test]
rw=randrw
size=64m
directory=/var/tmp/nbd
ioengine=libaio
iodepth=4
direct=1
numjobs=8
group_reporting
time_based
runtime=120
----------------------------------------------------------------------

(3) Run nbdkit.

    From the nbdkit source directory:

      rm -f /tmp/socket
      ./nbdkit -f -U /tmp/socket memory 1G

(4) Loop mount the NBD server:

      modprobe nbd
      nbd-client -C 8 -unix /tmp/socket /dev/nbd0
      mkfs.xfs -f /dev/nbd0
      mkdir /var/tmp/nbd
      mount /dev/nbd0 /var/tmp/nbd

(5) Run the fio test:

      fio /var/tmp/test.fio
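
When you have finished, unmount the filesystem and disconnect the NBD
device before stopping nbdkit:

  umount /var/tmp/nbd
  nbd-client -d /dev/nbd0
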
Testing using qemu
==================

Qemu contains an NBD client with excellent performance.  However it
is not very useful for general benchmarking.  Two tests which you can
perform are described below.

Test linear copying performance
-------------------------------

In some situations, linear copying is important, particularly when
copying large disk images or virtual machines around.  Both nbdkit
and the qemu client support sparseness detection and efficient
zeroing.

To test copying speed you can use ‘qemu-img convert’ to or from
nbdkit:

  nbdkit -U - memory 1G --run 'qemu-img convert file.qcow2 -O raw $nbd'
  nbdkit -U - memory 1G --run 'qemu-img convert $nbd -O qcow2 file.qcow2'

Notes:

* In the second case, because the memory plugin is entirely sparse
  and zero, the convert command should do almost no work.  A more
  realistic test might use the file, data or pattern plugins.

* Try copying to and from remote sources like nbdkit-curl-plugin and
  nbdkit-ssh-plugin.

* nbdkit-readahead-filter can optimize copying when reading from
  nbdkit.  This filter particularly affects performance when the
  nbdkit plugin source is remote (e.g. nbdkit-curl-plugin).

* qemu-img has options for tuning the number of threads and for
  permitting out-of-order writes; see the examples after this list.
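
For instance, to read through the readahead filter from a remote
source (the URL here is only a placeholder):

  nbdkit -U - --filter=readahead curl https://example.com/disk.img \
         --run 'qemu-img convert $nbd -O qcow2 disk.qcow2'

and, assuming your qemu-img supports the -m (parallel coroutines) and
-W (out-of-order writes) options, to tune the copy itself:

  nbdkit -U - memory 1G \
         --run 'qemu-img convert -m 16 -W file.qcow2 -O raw $nbd'
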
Test end-to-end VM block device performance
-------------------------------------------

Set up a virtual machine using an NBD block device, connected to
nbdkit.  On the qemu command line you would use:

  qemu ... -drive file=nbd:unix:/tmp/sock,if=virtio,format=raw ...

In libvirt you would use:

  <devices>
    <disk type='network' device='disk'>
      <driver name='qemu'/>
      <source protocol='nbd'>
        <host transport='unix' socket='/tmp/sock'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>

Set up nbdkit to serve on the Unix domain socket:

  nbdkit -U /tmp/sock memory 1G

Inside the guest you will see a block device like /dev/vdX which is
backed by the nbdkit instance, and you can use fio or other filesystem
testing tools to evaluate performance.
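
For example, inside the guest you might run fio directly against the
device (a sketch; the device name /dev/vdb is an assumption, and note
that writing to it destroys any data on that device):

  fio --name=guest-test --filename=/dev/vdb --direct=1 \
      --rw=randrw --ioengine=libaio --iodepth=4 \
      --runtime=120 --time_based --group_reporting
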
This is very much a real-world, end-to-end test which exercises many
different things together, including the client, guest kernel, qemu,
virtio transport, host kernel and nbdkit.  So it is more useful as a
way to detect that there is a problem than as a way to identify which
component is at fault.

If you have a sufficiently recent kernel and qemu you can try using
virtio-vsock as the transport (instead of a Unix domain socket); see
AF_VSOCK in nbdkit-service(1).
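
For example, on the host (assuming a version of nbdkit new enough to
have the --vsock option; the default vsock port is 10809):

  nbdkit --vsock memory 1G

Inside the guest you then need a vsock-aware client; for instance,
libnbd-based tools accept nbd+vsock URIs such as nbd+vsock://2:10809
(CID 2 addresses the host from the guest).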