I’ve noticed recently that my network speed isn’t what I would expect from a 10Gb network. For reference, I have a Proxmox server and a TrueNAS server, both connected to my primary switch with DAC. I’ve tested the speed by transferring files from the NAS with SMB and by using OpenSpeedTest running on a VM in Proxmox.

So far, this is what my testing has shown:

  • Using a Windows PC connected directly to my primary switch with CAT6: OpenSpeedTest shows around 2.5-3Gbps to Proxmox, which is much slower than I’d expect. Transferring a file from my NAS hits a max of around 700-800MB/s (megabytes, not megabits), which is about what I’d expect given hard drive speed and overhead.
  • Using a Windows VM on Proxmox: OpenSpeedTest shows around 1.5-2Gbps, which is much slower than I would expect. I’m using VirtIO network drivers, so I should realistically only be limited by CPU; it’s all running internally in Proxmox. Transferring a file from my NAS hits a max of around 200-300MB/s, which is still unacceptably slow even given the HDD bottleneck and SMB overhead. (A raw iperf3 test, sketched below, would take SMB and the disks out of the picture.)
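
For the record, here’s the kind of iperf3 run that isolates the network path from SMB and disk; a minimal sketch (192.168.1.10 is a placeholder for whichever endpoint runs the server):

```
# on one endpoint (e.g., a Proxmox VM), start an iperf3 server:
iperf3 -s

# from the other endpoint (the Windows PC or a second VM).
# -P 4 runs four parallel streams, which often matters on 10Gb
# links; -t 30 runs the test for 30 seconds:
iperf3 -c 192.168.1.10 -P 4 -t 30
```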

The summary I get from this is:

  • The slowest transfer rate is between two VMs on my Proxmox server. This should be the fastest transfer rate.
  • Transferring from a VM to a bare-metal PC is significantly slower than expected, but better than between VMs.
  • Transferring from my NAS to a VM is faster than between two VMs, but still slower than it should be.
  • Transferring from my NAS to a bare-metal PC gives me the speeds I would expect.

Ultimately, this shows that the bottleneck is Proxmox. The more VMs involved in the transfer, the slower it gets. I’m not really sure where to look next, though. Is there a setting in Proxmox I should be looking at? My server is old (two Xeon E5-2650 v2 CPUs); is it just too slow to push the data across the Linux network bridge at an acceptable rate? CPU usage on the VMs themselves doesn’t get past 60% or so, but maybe Proxmox itself is CPU-bound?
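
One way to check that last point: watch per-core CPU usage on the Proxmox host itself while a transfer is running. A single core pinned near 100% (often a vhost kernel thread handling the VirtIO traffic) would mean the bridge path is CPU-bound even though the VMs look mostly idle. A quick sketch, assuming the sysstat package is installable on the host:

```
# on the Proxmox host, while a transfer is running:
apt install sysstat    # provides mpstat, if not already present
mpstat -P ALL 1        # per-core utilization, refreshed every second
```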

The bulk of my network traffic flows in and out of the VMs on Proxmox, so it’s important that I figure this out. Any suggestions for testing or for a fix are very much appreciated.

  • @corroded@lemmy.world OP
    7 months ago

    Every VM is using VirtIO as the network card; they’re all on the same bridge as the physical 10Gb NIC. As far as I understand, any traffic between VMs should not be leaving the Proxmox server.
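
    A quick sanity check on the host is to list everything attached to the bridge (a sketch; vmbr0 is an assumption for the bridge name):

    ```
    # on the Proxmox host: every running VM should show up here as a
    # tap<vmid>i<netid> interface, alongside the physical 10Gb NIC
    ip -br link show master vmbr0
    ```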

    • chiisana
      7 months ago

      Sorry I missed that part. Reading in bed is not my forte. I’ve deleted my comment because of my mistake.

      • @corroded@lemmy.world OP
        7 months ago

        It was a good suggestion. That’s one of the first things I checked, and I was honestly hoping it would be as easy as changing the NIC type. I know the Intel E1000 option would limit me to 1Gb (and the Realtek RTL8139 to 100Mb), but I haven’t tried the VMware vmxnet3 option. I don’t imagine that would be an improvement over the VirtIO NIC, though.
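
        For reference, the NIC model is easy to confirm from the host shell; a quick sketch (100 is a placeholder VM ID):

        ```
        # on the Proxmox host:
        qm config 100 | grep ^net
        # a VirtIO NIC looks something like:
        # net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,firewall=1
        ```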

      • @corroded@lemmy.world OP
        7 months ago

        What do you mean specifically? If I’m testing between two VMs, doesn’t that already isolate any issues to Proxmox? Is there another performance metric you think I should be looking at?

        • Possibly linux
          7 months ago

          It will tell you whether the virtualization layer is the bottleneck. It’s actually pretty easy to mount an SMB share in Proxmox: just open the host shell and mount it. Then use dd to test sequential speed.
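
          Something like this would do it (the NAS address, share name, and test file are placeholders):

          ```
          # in the Proxmox host shell:
          apt install cifs-utils          # provides mount.cifs
          mkdir -p /mnt/nastest
          mount -t cifs //192.168.1.50/tank /mnt/nastest -o username=smbuser

          # sequential read: pull a large existing file, discard the data
          dd if=/mnt/nastest/bigfile.bin of=/dev/null bs=1M status=progress

          # sequential write: 8GB of zeros, flushed before dd reports
          dd if=/dev/zero of=/mnt/nastest/ddtest.bin bs=1M count=8192 \
             conv=fdatasync status=progress
          ```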