(Not so) Secret test plan #69
It's probably more likely that users have a wired computer than a fast-enough router with user-installable software.
You can only measure one of them in a single test. You'll need the latency peer to figure out which one you're measuring. I wouldn't recommend that approach though.
In fact, mini PCs using the Intel N100 CPU are now so cheap that I no longer want to get a Raspberry Pi 5. Even my two Raspberry Pi 400 units are mostly collecting dust. My modest "home lab" consists of three Intel N100 mini PCs and two switches. One of the mini PCs has dual 10G SFP+ ports, and one 2.5G switch has two SFP+ ports as well. Then I have a few 2.5G-capable WiFi routers as APs, plus two TB4-to-10G NICs for my laptop and Mac Mini M1 for 10G testing.
We're network-testing nerds who have lots of hardware lying around that we can throw into the breach. But I'm thinking of a somewhat different use case - I call them "drive-by" bufferbloat testers. I have been looking for high-quality, accurate network test software that has trivial setup. That's why I like Crusader - it's a small, all-in-one test package. As @Zoxc points out, most techies who have heard about bufferbloat are going to have a second device/laptop. They can connect it to Ethernet, start a Crusader server, and test their WiFi bloat.
Maybe using a container with iperf3/flent/crusader and the Ookla Speedtest CLI added to it would help; there's no need to mess with the host system. BTW, the good thing about using a mini PC or a Raspberry Pi is that you can run it 24/7 alongside the home router. I use PVE on two Intel N100 mini PCs with LXC containers, and then run OpenWRT/pfSense as VMs, along with various Linux/BSD VMs for my open-source project involvement (mainly USB/MCU related). On the other Intel N100 mini PC, the one with dual SFP+ ports, I am using Ubuntu 24.04 and Docker.
Hmmm... It seems that I have misunderstood what the latency peer does. I kind of expected the latency peer to be an observer "in the middle" of the network path. My mental model was more like this diagram: in this image, the Crusader client on the laptop (on the left, using WiFi) could measure and display latency both to Crusader Server 1 (the "latency peer" on the local Ethernet LAN) and to Server 2 (out "on the internet", on the right). That's why I was thinking that the Crusader client could:
My questions:
Thanks. PS: Here's a plot using a latency peer. Test conditions are similar to the diagram above: client on my laptop over WiFi; latency peer is an RPi on Ethernet; server is a VPS on the internet. I have a 75 Mbps symmetric ISP connection. Looking at the plot below, I'm not sure what the latency peer plot shows; it seems to report the same latency values as to the main server.
The above setup is what I would have liked to try last time, since I used to have two public IPv4 addresses and two /56 IPv6 prefix delegations with my old 1Gbps fibre internet plan. One network would serve Server 2 and the other network would host the client and Server 1. Reference: my two home networks used the same GPON ONT (1Gbps plan with dual IPv4 addresses and dual networks, using a VLAN-capable switch to separate the two home networks): GPON ONT -- TP-Link TL-SG105E smart switch -- Asus RT-AX86U router (network one, one public IPv4 address and a /56 IPv6 prefix). Unfortunately I lost that capability and now have only a single public IPv4 address and a single /64 IPv6 prefix delegation from my new 5Gbps internet, so I need to explore other options. And hopefully we can have public (or semi-public) Crusader servers for testing purposes.
No. The latency peer measures latency from itself to the server, in addition to the regular client-to-server latency measurement.
OK, now I understand. How had you envisioned that we'd use the information from the Latency peer? Thanks.
The main use case I had in mind was the client as a wireless station and the peer as another wireless station, with the server wired to an AP. That way we can test whether a load from the client will interfere with another wireless station.
I had not thought of that. For clarity, let me see if I understand what the Latency peer does: it is simply another machine that, during the test, measures latency from itself to the main Crusader server, and those measurements are reported alongside the client's own latency to the server.
Do I have this right? Thanks.
Yeah.
[I've been sitting on this idea for a while. I wanted to see if it still seemed like a good idea after I let it "cool down". I still think something like this would be useful...] Here's an idea for how to handle the latency peer's data that might serve both the two-wifi-station case and the wifi-bloat & ISP-bloat case.
Sidenote: I am coming to think a better name for the latency peer might be "Latency Observer". The Latency Observer would continue to monitor latency between itself and the Crusader server, as it always has, but the name is closer to its role as a passive observer (instead of actively injecting traffic).
The problem: it's sometimes hard to compare the two plots, especially when they are similar. For example, in the charts above, both latencies are about 70 msec, and I can't visually detect interesting differences between the two. A much more sensitive indicator is the difference between the two values, which highlights the variations (see the sketch after this comment for one way to compute it). How does "taking the difference" solve problems?
Thoughts? Thanks.
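(Aside, not from the original comment: a minimal sketch of the "difference" computation described above. It assumes each latency trace is a time-sorted list of (time_s, latency_ms) tuples, one for the client-to-server path and one for the observer-to-server path; that format and the function name are assumptions for illustration, not Crusader's actual data model.)

```python
from bisect import bisect_left

def latency_difference(client_samples, peer_samples):
    """For each client->server sample, subtract the observer->server sample
    nearest in time. Inputs are time-sorted lists of (time_s, latency_ms)
    tuples; this format is assumed for the sketch, not taken from Crusader."""
    peer_times = [t for t, _ in peer_samples]
    diffs = []
    for t, client_ms in client_samples:
        # Locate the observer sample closest in time to this client sample.
        i = bisect_left(peer_times, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(peer_samples)]
        nearest = min(candidates, key=lambda j: abs(peer_times[j] - t))
        diffs.append((t, client_ms - peer_samples[nearest][1]))
    return diffs

# Hypothetical samples: a flat difference means the two paths are varying
# together; a spike in the difference points at the hop(s) the client does
# not share with the observer (e.g. its own WiFi link).
client = [(0.0, 70.0), (1.0, 72.0), (2.0, 95.0)]   # client -> server, ms
peer   = [(0.0, 69.0), (1.0, 71.0), (2.0, 70.0)]   # observer -> server, ms
print(latency_difference(client, peer))            # ~25 ms divergence at t=2.0
```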
I see Crusader as an important component in the quest to characterize bufferbloat.
We have pretty good web-based tests (speedtest.net, Waveform, Cloudflare) and applications (flent, netperf) that test from the laptop to servers out on the internet. But there's no way to determine how much those results are influenced by latency induced by the local wifi.
Crusader makes it straightforward to test against a server on the local network. This gives a handle on the WiFi bloat, which can be factored out of the overall bloat measurements.
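(Aside, not from the original issue: a rough sketch of the "factor it out" arithmetic, with purely hypothetical numbers and the simplifying assumption that WiFi-induced and ISP-induced bloat are roughly additive.)

```python
def bloat_ms(idle_latency_ms, loaded_latency_ms):
    """Bufferbloat expressed as the latency added under load."""
    return loaded_latency_ms - idle_latency_ms

# Hypothetical measurements, purely for illustration:
wifi_bloat = bloat_ms(idle_latency_ms=3.0, loaded_latency_ms=28.0)    # test against a LAN server
total_bloat = bloat_ms(idle_latency_ms=18.0, loaded_latency_ms=95.0)  # test against an internet server

# Naive decomposition under the additivity assumption above:
isp_bloat = total_bloat - wifi_bloat
print(f"WiFi adds ~{wifi_bloat} ms, leaving ~{isp_bloat} ms attributable to the ISP path")
```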
This leads to a few observations and questions:
I just realized that it might be possible to measure both "WiFi bloat" and "ISP bloat" in one Crusader test by using the Latency peer. Perhaps it would be possible to run a test against a Crusader server on the internet (to get overall latency), and also measure the latency to a peer on the local subnet. Do I have that right?
Not everyone has a spare Raspberry Pi lying around so I'm experimenting with Crusader installed directly on the router. This is probably a doomed effort: it appears that some (many? most?) routers (the Beryl MT3000, for example) aren't fast enough to support the Crusader server without affecting latency/throughput.
But maybe a Crusader server running on the router could act as a Latency peer without too much effect on its ability to pass traffic? (I suspect so...)
Thanks.