The Difference between InfiniBand and Ethernet

Demand for large scale computing solutions has surged in recent years, driven by rapid advancements in AI, high performance computing (HPC) and data analytics. This has dramatically increased global requirements for high speed, high bandwidth and low latency communication, both within machines and across large scale clusters. The choice between Ethernet and InfiniBand depends on the specific application, with latency, bandwidth, scalability and cost all being critical factors to consider.

Ethernet

Ethernet is widely used for general purpose computing, cloud and virtualisation environments, and storage area networks. Designed to make information flow between multiple systems as simple as possible, Ethernet is broadly supported and extensively standardised. Recent advancements, such as RDMA over Converged Ethernet (RoCE), have further enhanced Ethernet's ability to handle more demanding workloads, although it remains most effective in simpler environments. Typical Ethernet variants range from Fast Ethernet and Gigabit Ethernet up to 10 Gigabit Ethernet and beyond.


InfiniBand

InfiniBand is tailored to the requirements of HPC clusters, where large scale data processing and frequent inter-node communication are critical. Common applications include scientific research, AI model training and database analytics that demand low latency communication. Although initial deployment costs can be higher, InfiniBand excels at high speed, reliable data transfer. Its latest standards, HDR (High Data Rate), NDR (Next Data Rate) and XDR (Extended Data Rate), support link speeds of 200Gbps, 400Gbps and 800Gbps respectively. InfiniBand is more specialised than Ethernet, so it is typically deployed for targeted high performance workloads rather than general purpose networking.

The first InfiniBand standard, SDR (Single Data Rate), achieved 10Gbps over a four-lane link with 2.5Gbps signalling per lane. Several generations have followed, including QDR (Quad Data Rate), FDR (Fourteen Data Rate) and EDR (Enhanced Data Rate). The most recent standards (HDR, NDR and XDR) use PAM4 modulation to reach even higher speeds, and these developments have dramatically increased InfiniBand's performance. Although InfiniBand still leads the way where ultra low latency is required, Ethernet has also improved rapidly in recent years, with 400GbE and 800GbE standards pushing data rates on par with InfiniBand.
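The relationship between per-lane signalling rate and aggregate link speed is simple multiplication over the number of lanes in the link. The sketch below illustrates this for a standard four-lane (4x) link; the per-lane figures are nominal values and the table is illustrative rather than authoritative.

```python
# Aggregate InfiniBand link speed = per-lane signalling rate x number of lanes.
# Per-lane rates are nominal figures; a standard link aggregates 4 lanes (4x).
GBPS_PER_LANE = {
    "SDR": 2.5,    # Single Data Rate
    "QDR": 10.0,   # Quad Data Rate
    "FDR": 14.0,   # Fourteen Data Rate (nominal)
    "EDR": 25.0,   # Enhanced Data Rate
    "HDR": 50.0,   # High Data Rate (PAM4 signalling)
    "NDR": 100.0,  # Next Data Rate (PAM4)
    "XDR": 200.0,  # Extended Data Rate (PAM4)
}

def link_speed_gbps(generation: str, lanes: int = 4) -> float:
    """Return the aggregate link speed in Gbps for a generation and lane count."""
    return GBPS_PER_LANE[generation] * lanes

if __name__ == "__main__":
    for gen in GBPS_PER_LANE:
        print(f"{gen}: 4x link = {link_speed_gbps(gen):g} Gbps")
```

Running this reproduces the figures quoted above: 10Gbps for SDR and 200, 400 and 800Gbps for HDR, NDR and XDR respectively.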

Latency

Latency is another key difference between the two technologies. Ethernet switches typically use store-and-forward processing and MAC table lookups, and their processing pipelines tend to be longer because they must also handle complex services such as IP, MPLS and QinQ; this versatility comes at the cost of added delay. InfiniBand employs cut-through switching and simplified 16-bit LID addressing, reducing forwarding delays to less than 100ns. These features make InfiniBand particularly suited to HPC workloads where minimising latency is essential. Advanced Ethernet protocols such as RoCE and Priority Flow Control (PFC) narrow the latency gap, but typically cannot match InfiniBand's inherent efficiency.
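One way to see the difference is the minimum serialisation delay each approach adds per hop: a store-and-forward switch must receive the whole frame before transmitting it, while a cut-through switch can begin forwarding once the header has been read. The sketch below uses assumed, illustrative figures (a 1500-byte frame, a 64-byte header window and a 100Gbps link) and captures only the serialisation component, not lookup or queuing time.

```python
# Rough per-hop serialisation delay added by store-and-forward vs cut-through
# switching. Frame size, header window and link rate are assumed for illustration.

def serialisation_delay_ns(bytes_before_forwarding: int, link_gbps: float) -> float:
    """Time (ns) to clock the given number of bytes in at the stated link rate."""
    bits = bytes_before_forwarding * 8
    return bits / link_gbps  # bits divided by Gbit/s gives nanoseconds

FRAME_BYTES = 1500   # full frame must arrive before a store-and-forward switch transmits
HEADER_BYTES = 64    # cut-through can start forwarding once the header has been parsed
LINK_GBPS = 100.0

print(f"store-and-forward: ~{serialisation_delay_ns(FRAME_BYTES, LINK_GBPS):.0f} ns/hop")
print(f"cut-through:       ~{serialisation_delay_ns(HEADER_BYTES, LINK_GBPS):.0f} ns/hop")
```

With these assumed numbers, store-and-forward adds roughly 120ns per hop for a full-size frame, while cut-through adds only a few nanoseconds of serialisation delay before the lookup itself.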

Network Reliability

InfiniBand’s end-to-end flow control and lossless data transmission ensure high reliability, which is critical in high end environments where packet loss can significantly degrade performance. Ethernet, while not inherently lossless, can achieve near lossless operation with configurations such as PFC and Data Centre Bridging (DCB), making it increasingly viable for demanding applications.
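A toy model helps illustrate why credit-based, lossless flow control avoids drops: the sender only transmits against buffer credits the receiver has advertised, so the receive buffer can never overflow. The sketch below is a deliberately simplified, hypothetical model (real InfiniBand flow control operates per virtual lane with credits measured in buffer blocks); PFC on Ethernet achieves a similar effect by pausing traffic per priority class instead.

```python
# Toy model of credit-based (lossless) flow control: the sender only transmits a
# packet when the receiver has advertised a free buffer slot, so nothing is dropped.
# All sizes here are made up for illustration.
from collections import deque

class Receiver:
    def __init__(self, buffer_slots: int):
        self.credits = buffer_slots   # free buffer slots advertised to the sender
        self.queue = deque()

    def accept(self, packet):
        assert self.credits > 0, "sender violated flow control"
        self.credits -= 1
        self.queue.append(packet)

    def drain(self, n: int):
        """Process up to n queued packets and return that many credits."""
        drained = min(n, len(self.queue))
        for _ in range(drained):
            self.queue.popleft()
        self.credits += drained

def send_lossless(num_packets: int, rx: Receiver, drain_per_round: int) -> int:
    sent = 0
    while sent < num_packets:
        if rx.credits > 0:             # transmit only against an available credit
            rx.accept(f"pkt-{sent}")
            sent += 1
        else:
            rx.drain(drain_per_round)  # wait for the receiver to free buffers
    return sent

rx = Receiver(buffer_slots=8)
delivered = send_lossless(num_packets=100, rx=rx, drain_per_round=4)
print(f"delivered {delivered}/100 packets with zero drops")
```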

Cost & Power Considerations

InfiniBand switches tend to be more power efficient for high performance workloads thanks to their compact chip designs. This compactness is possible because InfiniBand's lossless flow control prevents congestion and packet loss at the network level, so the switch silicon needs far less buffering.

Ethernet switches are practical for large scale applications and are often more cost effective in broader deployments. However, advanced Ethernet designs require larger chip areas to accommodate the buffer space used to absorb burst traffic: excess packets must be held temporarily, which can require tens of megabytes of on-chip memory before traffic is forwarded. This added complexity increases the cost and power consumption of Ethernet switches, particularly as bandwidth approaches InfiniBand speeds.
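The "tens of megabytes" figure can be sanity-checked with a back-of-the-envelope calculation: the buffer must hold the excess traffic that arrives while a burst outpaces the egress rate. The sketch below uses assumed numbers (two 400Gbps ports bursting into a single 400Gbps egress port for half a millisecond) purely as an illustration.

```python
# Back-of-the-envelope buffer sizing: memory needed to absorb a traffic burst that
# temporarily exceeds the egress rate. All figures are assumed and illustrative.

def buffer_needed_mb(ingress_gbps: float, egress_gbps: float, burst_ms: float) -> float:
    """Excess bytes accumulated while the burst exceeds the drain rate, in megabytes."""
    excess_gbps = max(ingress_gbps - egress_gbps, 0.0)
    excess_bytes = (excess_gbps * 1e9 / 8) * (burst_ms / 1e3)
    return excess_bytes / 1e6

# Example: two 400 Gbps ports bursting into one 400 Gbps egress port for 0.5 ms.
print(f"~{buffer_needed_mb(ingress_gbps=800, egress_gbps=400, burst_ms=0.5):.0f} MB of buffer")
```

Under these assumptions the switch needs roughly 25 MB of buffering for a single half-millisecond burst, which is consistent with the tens of megabytes found in large data centre Ethernet chips.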

Summary

In conclusion, InfiniBand excels in environments requiring low latency, high bandwidth and reliable communication, particularly in HPC, AI and large scale scientific research. Ethernet, however, remains the backbone of general purpose networking, offering cost effective and versatile solutions across a wide range of industries. While InfiniBand continues to lead in specialised, high performance applications, advancements in Ethernet technology are closing the gap, ensuring both remain integral to modern computing.