Bad scaling due to flooding and overhead of copying packets in limactl #58
Comments
Yes, this is a long-standing TODO (lines 531 to 562 in 0b6aed9).

@nirs can you point to the code where the copy in limactl occurs? I don't understand why there are so many copies.
The pipeline

Lima:
QEMU:

Receiving a packet from a vm

This happens in the thread forwarding packets from a client socket fd: line 467 in f486d47.
For each packet we read (line 492 in f486d47):
- we send the packet to the vmnet interface (copy 1): line 518 in f486d47,
- and to all other sockets (N-1 copies): line 548 in f486d47.

Receiving a packet from vmnet

This happens in the vmnet handler block, called when some packets are ready on the vmnet interface: line 283 in f486d47.
We read multiple packets (up to 32 packets per call): line 152 in f486d47.
For each packet we iterate over all connections and write the packet to each connection (N copies): line 191 in f486d47.

Additional copies in lima

Each packet read from VZ is copied to the socket_vmnet socket via a socketpair. Each packet read from socket_vmnet is copied to VZ via a socketpair. This is done for every VM using …
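The flooding in the forwarding path above can be sketched as follows. This is a minimal illustration, not the actual socket_vmnet code; the connection table, fds, and function name are hypothetical stand-ins for the real data structures:

```c
#include <stddef.h>
#include <unistd.h>

#define MAX_CONNS 32

/* Hypothetical connection table: client unix-socket fds.
 * In socket_vmnet this is a list of connected VM sockets
 * plus the vmnet interface. */
static int conn_fds[MAX_CONNS];
static size_t num_conns;

/* Flood a packet received from connection `src_fd` to the vmnet
 * interface (copy 1) and to every other connection (N-1 copies).
 * Every write() is a separate kernel copy of the same payload.
 * Returns the number of writes performed. */
static size_t flood_from_vm(int src_fd, int vmnet_fd,
                            const void *pkt, size_t len)
{
    size_t copies = 0;
    ssize_t n;

    n = write(vmnet_fd, pkt, len);      /* copy 1: to vmnet */
    (void)n;
    copies++;

    for (size_t i = 0; i < num_conns; i++) {
        if (conn_fds[i] == src_fd)
            continue;                   /* don't echo to the sender */
        n = write(conn_fds[i], pkt, len); /* N-1 more copies */
        (void)n;
        copies++;
    }
    return copies;
}
```

The point of the sketch is that the write count per packet is 1 + (N-1), so every extra VM adds one more copy of every packet sent by every other VM, whether or not that VM is the destination.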
Throughput decreases and cpu usage increases significantly when adding more vms connected to the same socket_vmnet daemon.
Tested using:
iperf3 -c ...
iperf3 -s
Expected behavior
Why it happens
When we have multiple vms connected to socket_vmnet:
Flow when receiving a packet from vmnet with 4 vms
Flow when receiving a packet from a vm
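Given the flooding behavior, the per-packet write count grows linearly with the number of connected vms. A small sketch of the arithmetic (the helper names are hypothetical, introduced only for illustration):

```c
/* Writes performed when one packet arrives from the vmnet interface:
 * the packet is written to every connected VM (N copies). */
static unsigned writes_from_vmnet(unsigned nvms) {
    return nvms;
}

/* Writes performed when one packet arrives from a VM: one write to
 * the vmnet interface plus one to each of the other N-1 VMs. */
static unsigned writes_from_vm(unsigned nvms) {
    return 1 + (nvms - 1);
}

/* For a unicast packet destined to a single VM, all other deliveries
 * are wasted work. */
static unsigned wasted_writes_from_vmnet(unsigned nvms) {
    return nvms - 1;
}
```

So for an iperf3 run between one vm and the host with 4 vms connected, 3 of every 4 writes from vmnet carry a packet the receiving vm will simply discard.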
CPU usage for all vm processes

Looking at the cpu usage of socket_vmnet, the vm service processes, and the limactl processes, we see extreme cpu usage related to processing partly or completely unrelated packets:
Total cpu usage:
Tested on M1 Pro (8 performance cores, 2 efficiency cores)
Full results
- 1 vm: cpu usage
- 2 vms: cpu usage
- 3 vms: cpu usage
- 4 vms: cpu usage