I have VMs running on bare-metal instances. Each bare-metal instance is in a separate rack by design (for fault tolerance). The link bandwidth is 25 GbE; however, the round-trip latency between the hosts is high enough that I need multiple streams to consume that bandwidth.
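To see why latency caps a single stream, the bandwidth-delay product gives the ceiling: one TCP stream carries at most window / RTT. Here is a minimal sketch of that arithmetic; the 1 MiB window and the two RTT values are illustrative assumptions, not measurements from this setup:

```python
# Back-of-the-envelope: a single TCP stream is capped at roughly
# window_size / RTT (the bandwidth-delay product).

def max_single_stream_gbps(window_bytes: int, rtt_ms: float) -> float:
    """Throughput ceiling in Gbit/s for one TCP stream."""
    rtt_s = rtt_ms / 1000.0
    return (window_bytes * 8) / rtt_s / 1e9

# Assumed 1 MiB effective window; assumed RTTs for illustration only.
for label, rtt_ms in [("cross-rack cloud (assumed 0.5 ms RTT)", 0.5),
                      ("on-prem lab (assumed 0.05 ms RTT)", 0.05)]:
    ceiling = max_single_stream_gbps(1 << 20, rtt_ms)
    print(f"{label}: {ceiling:.1f} Gbit/s single-stream ceiling")
```

A tenfold difference in RTT means a tenfold difference in what one stream can carry at a fixed window, which is the shape of the gap in the table below.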
Compared to my local on-prem lab, I need many more streams to get the observed throughput close to the theoretical bandwidth of 25 GbE:
| # iperf streams | AWS throughput | On-prem throughput |
|-----------------|----------------|---------------------|
| 1 | 4.8 Gbit/s | 21.4 Gbit/s |
| 2 | 9 Gbit/s | 22 Gbit/s |
| 4 | 18 Gbit/s | 22.5 Gbit/s |
| 8 | 23 Gbit/s | 23 Gbit/s |
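To reproduce the table, a rough sketch like the following sweeps the parallel-stream count with iperf3 and reads the aggregate rate from its JSON output. It assumes iperf3 is installed on both ends and a server (`iperf3 -s`) is already listening on the target host; the server address is a placeholder:

```python
#!/usr/bin/env python3
"""Sweep iperf3 parallel-stream counts and report aggregate throughput."""
import json
import subprocess

SERVER = "10.0.0.2"  # placeholder: address of the host running `iperf3 -s`

for streams in (1, 2, 4, 8):
    out = subprocess.run(
        ["iperf3", "-c", SERVER, "-P", str(streams), "-t", "10", "-J"],
        capture_output=True, text=True, check=True,
    ).stdout
    result = json.loads(out)
    # end.sum_received is the aggregate across all parallel streams
    gbps = result["end"]["sum_received"]["bits_per_second"] / 1e9
    print(f"{streams} stream(s): {gbps:.1f} Gbit/s")
```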
One comment on "Cross rack network latency in AWS":
As documented here: https://aws.amazon.com/blogs/aws/the-floodgates-are-open-increased-network-bandwidth-for-ec2-instances/
Multiple flows are required to reach higher speeds, effectively one flow per 5 Gbps. Latency may also be at play, but even a cluster placement group requires one flow per 10 Gbps. It's one of those annoying gotchas that isn't documented anywhere other than blog posts and PowerPoints. In-region latency should still allow this type of scaling, not just in-AZ or in-placement-group.
But yes, it's limited to 5 Gbps per flow, which is why you see the scaling you do.
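That per-flow cap lines up with the measurements above. A toy model of min(flows × 5 Gbps, 25 Gbps), using the observed AWS numbers from the table for comparison, tracks the measured column reasonably well once you allow for protocol overhead and imperfect scaling:

```python
# Toy model of the commenter's claim: each flow caps at ~5 Gbit/s, so the
# aggregate is roughly min(n_flows * 5, link_rate). Observed values are
# the AWS column from the table above.
LINK_GBPS = 25
PER_FLOW_CAP_GBPS = 5  # per the linked AWS blog post

observed = {1: 4.8, 2: 9.0, 4: 18.0, 8: 23.0}
for n, seen in observed.items():
    predicted = min(n * PER_FLOW_CAP_GBPS, LINK_GBPS)
    print(f"{n} flow(s): model {predicted} Gbit/s, observed {seen} Gbit/s")
```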