On the general topic of USB to 1000BASE-T (and now 2.5GBASE-T) dongles, for people who care about performance it's good to know the distinction between the ones that are USB devices and the ones that are PCI Express devices.
Basically, what do you get if you hotplug it into a laptop running a current linux kernel and do "sudo lsusb -v" vs "sudo lspci -v"?
The ones that are native PCIe devices offer much better performance, up to 2.5GBASE-T line rate, and talk to the host over PCIe tunneled through Thunderbolt/USB4 on the USB-C port.
The ones that are USB only might work okay, but there's a reason they're cheap.
Of course, a cheaper laptop also won't implement Thunderbolt at all, so that's something to consider as well.
Not only 2.5GBASE-T. I have a 10GBASE-T Thunderbolt dongle (from [1]). Okay, it's a little bigger than a normal dongle, it has a female USB-C port instead of a built-in cable, and it gets warm. But it's basically a dongle, and I can get 9.4Gbit/s through it with iperf3 on my Mac.
I've only got superficial knowledge in this regard, so please take it with a grain of salt, but the way I understand it is that PCIe gives devices direct memory access, so devices connected through it can use zero-copy and similar techniques to move data much faster, and with lower latency, than over regular USB. Going through USB might/will require copying the data between different buffers, between user and kernel space, etc.
USB doesn't give the device itself DMA into host memory (until USB4), and it requires more host CPU resources to reach the same bandwidth. It also has less consistent performance, by virtue of the USB protocol itself.
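To make the zero-copy point concrete, here is a minimal sketch of the descriptor-ring model a bus-mastering PCIe NIC uses. The struct layout, the RX_DONE flag, and deliver_to_stack() are made-up placeholders for illustration, not any real chip's register map or kernel API; the point is just that the NIC writes frames into host buffers on its own and the CPU never copies packet bytes:

    /* Hypothetical RX descriptor ring for a bus-mastering NIC.
       The layout is invented for illustration; real chips differ. */
    #include <stdint.h>

    #define RING_SIZE 256
    #define RX_DONE   0x0001   /* set by the NIC when a frame has landed */

    struct rx_desc {
        uint64_t buf_addr;     /* DMA (physical) address of a host buffer */
        uint16_t length;       /* frame length, filled in by the NIC */
        uint16_t status;       /* RX_DONE once the DMA completes */
    };

    static struct rx_desc rx_ring[RING_SIZE];
    static void *rx_buffers[RING_SIZE];  /* pre-posted by the driver */

    /* Placeholder for handing a buffer to the network stack. */
    static void deliver_to_stack(void *frame, uint16_t len)
    {
        (void)frame; (void)len;
    }

    /* The NIC walks the ring and DMAs each received frame straight into
       the next pre-posted buffer; the CPU only inspects descriptors and
       hands buffers up in place, so no per-packet copy is needed. */
    static void rx_poll(unsigned *head)
    {
        while (rx_ring[*head].status & RX_DONE) {
            deliver_to_stack(rx_buffers[*head], rx_ring[*head].length);
            rx_ring[*head].status = 0;           /* recycle the slot */
            *head = (*head + 1) % RING_SIZE;
        }
    }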
At least at gigabit speeds, the CPU usage is negligible if the device and the driver are communicating via the CDC-NCM protocol, but yeah, it's a significant hit if you're using CDC-ECM.
When I worked on this, the USB controller was just a PCI bus device: once set up, incoming data from a USB ADC streamed in blocks directly to memory. Maybe they took all that back out.
They didn't remove anything. Did the USB controller's DMA master support DMA chaining or command lists?
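For anyone wondering what DMA chaining buys you: with chained (scatter-gather) descriptors the controller follows the links itself, so one setup covers a whole stream of scattered buffers. A rough sketch with an invented descriptor layout; note that real controllers store a physical address for the link rather than a C pointer:

    #include <stdint.h>
    #include <stddef.h>

    #define DESC_LAST 0x1      /* marks the final descriptor in a chain */

    struct dma_desc {
        uint64_t buf_addr;     /* physical address of this chunk */
        uint32_t len;          /* bytes in this chunk */
        uint32_t flags;
        struct dma_desc *next; /* the controller follows this on its own */
    };

    /* Build one chain over n scattered buffers. The CPU programs the
       controller once with &chain[0]; the hardware then streams through
       every chunk without further CPU intervention. */
    static void build_chain(struct dma_desc *chain, const uint64_t *addrs,
                            const uint32_t *lens, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            chain[i].buf_addr = addrs[i];
            chain[i].len      = lens[i];
            chain[i].flags    = (i == n - 1) ? DESC_LAST : 0;
            chain[i].next     = (i == n - 1) ? NULL : &chain[i + 1];
        }
    }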
An Ethernet controller being a DMA master means it can continually plop packets into host memory wherever it wants without CPU intervention. Infamously, the Realtek RTL8139 10/100M chip was the first Realtek part with bus-master DMA support, but it was a brain-dead implementation; from https://people.freebsd.org/~wpaul/RealTek/3.0/if_rl.c:
>"The RealTek 8139 PCI NIC redefines the meaning of 'low end.' This is
probably the worst PCI ethernet controller ever made, with the possible
exception of the FEAST chip made by SMC. The 8139 supports bus-master
DMA, but it has a terrible interface that nullifies any performance
gains that bus-master DMA usually offers.
For transmission, the chip offers a series of four TX descriptor
registers. Each transmit frame must be in a contiguous buffer, aligned
on a longword (32-bit) boundary. This means we almost always have to
do mbuf copies in order to transmit a frame, except in the unlikely
case where a) the packet fits into a single mbuf, and b) the packet
is 32-bit aligned within the mbuf's data area. The presence of only
four descriptor registers means that we can never have more than four
packets queued for transmission at any one time.
Reception is not much better. The driver has to allocate a single large
buffer area (up to 64K in size) into which the chip will DMA received
frames. Because we don't know where within this region received packets
will begin or end, we have no choice but to copy data from the buffer
area into mbufs in order to pass the packets up to the higher protocol
levels.
It's impossible given this rotten design to really achieve decent
performance at 100Mbps, unless you happen to have a 400Mhz PII or
some equally overmuscled CPU to drive it."
AFAIK, ten years later the 1Gbit RTL8111B still required alignment on 256-byte boundaries, so not much better.
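For contrast, here is roughly what the receive path the if_rl.c comment describes looks like: the chip DMAs all frames back-to-back into one big buffer at unpredictable offsets, so every packet must be copied out before the stack can use it. Structures and sizes here are simplified placeholders, not the actual driver code:

    #include <stdint.h>
    #include <string.h>

    #define RL_RXBUF_LEN (64 * 1024)   /* the single large DMA area */

    static uint8_t rl_rx_buf[RL_RXBUF_LEN];

    static void rl_rx_copy(unsigned offset, uint16_t frame_len)
    {
        uint8_t pkt[1518];             /* stand-in for a freshly allocated mbuf */

        /* Frames can wrap around the end of the area, so even the
           mandatory copy may have to be done in two pieces. */
        unsigned tail = RL_RXBUF_LEN - offset;
        if (frame_len <= tail) {
            memcpy(pkt, &rl_rx_buf[offset], frame_len);
        } else {
            memcpy(pkt, &rl_rx_buf[offset], tail);
            memcpy(pkt + tail, &rl_rx_buf[0], frame_len - tail);
        }
        /* ...pass pkt up the stack. One memcpy per packet is exactly
           the tax the driver comment is complaining about. */
    }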
There is no PCIe through USB though, other than Thunderbolt/USB4... or is there?
So if you only have USB ports and care about performance, the bigger distinction is whether the USB Ethernet device implements CDC-NCM or just CDC-ECM. The difference is that CDC-ECM sends frames to the driver one by one, and the driver has to acknowledge and process them one by one, which generates a ton of CPU work, while the newer CDC-NCM protocol sends frames in batches.
On my laptop I can still get full gigabit speeds with a 1Gbit ECM dongle, but doing so pegs one CPU core at 100%, while a 1Gbit NCM dongle manages it with negligible CPU usage...
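The batching is visible right in the wire format: a single CDC-NCM transfer carries an NTB (NCM Transfer Block) with a pointer table indexing many Ethernet frames, so the host handles one transfer completion per batch instead of one per frame. A sketch of parsing the 16-bit NTB variant, with field offsets per the CDC-NCM 1.0 spec; error handling is minimal and handle_frame() is a placeholder:

    #include <stdint.h>
    #include <stddef.h>

    static uint16_t get_le16(const uint8_t *p)
    {
        return (uint16_t)(p[0] | (p[1] << 8));
    }

    /* Placeholder for handing one Ethernet frame to the stack. */
    static void handle_frame(const uint8_t *frame, uint16_t len)
    {
        (void)frame; (void)len;
    }

    static void ncm_parse_ntb16(const uint8_t *ntb, size_t ntb_len)
    {
        /* NTH16 header: dwSignature "NCMH", wHeaderLength, wSequence,
           wBlockLength, then wNdpIndex (offset of the first NDP16). */
        uint16_t ndp_off = get_le16(ntb + 10);

        while (ndp_off != 0 && (size_t)ndp_off + 8 <= ntb_len) {
            /* NDP16: dwSignature "NCM0", wLength, wNextNdpIndex, then
               (wDatagramIndex, wDatagramLength) pairs, zero-terminated. */
            uint16_t next_ndp = get_le16(ntb + ndp_off + 6);
            const uint8_t *entry = ntb + ndp_off + 8;

            while ((size_t)(entry - ntb) + 4 <= ntb_len) {
                uint16_t idx = get_le16(entry);
                uint16_t len = get_le16(entry + 2);
                if (idx == 0 || len == 0)
                    break;                 /* end of this pointer table */
                if ((size_t)idx + len <= ntb_len)
                    handle_frame(ntb + idx, len); /* whole batch, one transfer */
                entry += 4;
            }
            ndp_off = next_ndp;
        }
    }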
I'm guessing that if I accidentally got a PCIe one, it wouldn't work in any of the USB ports I'd connect it to (as, to my knowledge, I only have USB ports), or do they generally fall back to working as a USB device?