InfiniBand: The High-Performance Network Protocol for Today’s Computing

InfiniBand is a network protocol designed for high-bandwidth, low-latency applications, extensively used in server and storage environments that necessitate fast data transfer, typically data centres and high-performance computing (HPC) clusters.

InfiniBand Key Features

High Bandwidth and Low Latency

InfiniBand is suited to applications that require high-bandwidth interconnects with minimal communication delays. It supports data rates from 10 Gbps up to 800 Gbps per link as of 2024, with a roadmap to 1,600 Gbps beyond 2026.
63 of the top 100 fastest supercomputers utilise InfiniBand technology, with end-to-end latency measured as low as 600 ns.

Scalability

InfiniBand is becoming vital for building scalable systems that support thousands of nodes, and virtually unlimited cluster sizes can be achieved by linking subnets together with InfiniBand routers.

InfiniBand uses a switched fabric topology, so data can travel over multiple paths between nodes, enhancing both reliability and performance.

Remote Direct Memory Access (RDMA)

InfiniBand uses RDMA technology for data transfer. RDMA allows data to be directly transferred between the memory of remote systems, GPUs, and storage, bypassing the CPUs of those systems. This facilitates high-speed, low-latency network data transfers.
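
As a rough illustration of how an application hands a buffer to the adapter for RDMA, the sketch below uses the open-source libibverbs API (not an ATGBICS or NVIDIA sample) to open the first InfiniBand device found and register a memory region; the single-device assumption and the abbreviated error handling are simplifications for brevity.

    /* Minimal libibverbs sketch: open an InfiniBand device and register a
     * buffer for RDMA. Assumes at least one HCA is present; error handling
     * is abbreviated. Build with: gcc rdma_sketch.c -libverbs */
    #include <infiniband/verbs.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        struct ibv_device **devs = ibv_get_device_list(NULL);
        if (!devs || !devs[0]) {
            fprintf(stderr, "No RDMA devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);  /* first HCA */
        struct ibv_pd *pd = ibv_alloc_pd(ctx);               /* protection domain */

        size_t len = 4096;
        void *buf = malloc(len);
        memset(buf, 0, len);

        /* Registering the buffer pins it and gives the adapter permission to
         * read and write it directly, bypassing the CPU on the data path. */
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);

        printf("Registered %zu bytes, lkey=0x%x rkey=0x%x\n",
               len, mr->lkey, mr->rkey);

        /* A real application would now create a completion queue and queue
         * pair, exchange the rkey and buffer address with its peer, and post
         * RDMA read/write work requests against this memory region. */
        ibv_dereg_mr(mr);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        free(buf);
        return 0;
    }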

Transport Protocols

InfiniBand supports multiple upper-layer transport protocols, including IP over InfiniBand (IPoIB) and the SCSI RDMA Protocol (SRP) for storage traffic. This versatility accommodates various data transmission needs.
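
Because IPoIB presents the InfiniBand fabric to the operating system as an ordinary network interface (commonly named ib0), unmodified socket applications can run over it. The short sketch below is a generic TCP client; the port number and the peer address, assumed here to belong to an IPoIB interface, are placeholders for illustration only.

    /* IPoIB sketch: IP over InfiniBand exposes the fabric as an ordinary
     * network interface (commonly ib0), so standard socket code works
     * unchanged. The peer address and port below are placeholders. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);     /* plain TCP socket */
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in peer;
        memset(&peer, 0, sizeof(peer));
        peer.sin_family = AF_INET;
        peer.sin_port = htons(5000);                           /* example port */
        inet_pton(AF_INET, "192.168.100.10", &peer.sin_addr);  /* peer's IPoIB address */

        /* The connection is carried over the InfiniBand fabric, but the
         * application only sees normal TCP/IP semantics. */
        if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) == 0) {
            const char msg[] = "hello over IPoIB\n";
            write(fd, msg, sizeof(msg) - 1);
        } else {
            perror("connect");
        }
        close(fd);
        return 0;
    }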

Use Cases

InfiniBand is commonly used in environments demanding high performance and low latency, such as:

  • Supercomputing and HPC clusters
  • Data centres, data mining and cloud computing
  • Enterprise storage networks
  • Financial services for fast trading systems
  • Artificial Intelligence and Machine Learning workloads
  • Bioscience and drug research

InfiniBand Evolution and Roadmap

InfiniBand technology is continually evolving. Over the last 10 years, transfer speeds have increased exponentially, from 100 Gbps in 2015 to 800 Gbps today. The roadmap below indicates a quadrupling of capacity by 2030.

InfiniBand Evolution and Roadmap

 

What is the difference between Ethernet and InfiniBand?

Ethernet networks traditionally connect multiple computers and devices, such as printers, within a local area network; connections can be wired or wireless.

Typical Ethernet network types include Fast Ethernet, Gigabit Ethernet, and 10 Gigabit Ethernet.

Ethernet networking remains suitable for lower-end applications where bandwidth demand is modest and very high speed is not critical, such as general enterprise networking.

Example of an Ethernet Network

InfiniBand is an “industry-standard specification that defines an input/output architecture used to interconnect servers, communications infrastructure equipment, storage and embedded systems.” (IBA) Up to 64,000 addressable devices are supported.

InfiniBand can carry multiple data streams over a single connection and interconnect thousands of nodes. Each switched InfiniBand fabric is called a ‘subnet’, and multiple subnets can be connected by routers to form a larger InfiniBand network.
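
As a small illustration of the subnet addressing model, the sketch below uses the libibverbs API to read the local identifier (LID) that the subnet manager assigns to a port within its subnet, along with the port's link state; it assumes a single adapter with port number 1 and abbreviates error handling.

    /* Sketch: read the LID (the address a subnet manager assigns to a port
     * within its subnet) and the link state of the local adapter. Assumes
     * one HCA, port 1; error handling abbreviated. Build: gcc -libverbs */
    #include <infiniband/verbs.h>
    #include <stdio.h>

    int main(void)
    {
        struct ibv_device **devs = ibv_get_device_list(NULL);
        if (!devs || !devs[0]) {
            fprintf(stderr, "No RDMA devices found\n");
            return 1;
        }
        struct ibv_context *ctx = ibv_open_device(devs[0]);

        struct ibv_port_attr attr;
        ibv_query_port(ctx, 1, &attr);      /* port 1 of the first adapter */

        printf("LID 0x%04x, state %s, active width code %u, speed code %u\n",
               (unsigned)attr.lid, ibv_port_state_str(attr.state),
               (unsigned)attr.active_width, (unsigned)attr.active_speed);

        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }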

InfiniBand offers superior performance and efficiency over Fibre Channel and Ethernet, and is likely to become the standard interconnect in environments requiring high speed and low latency, such as data centres and high-performance computing clusters where large-scale data processing and frequent inter-node communication are critical.

Example of an InfiniBand Network

Industry Support

InfiniBand is backed by the InfiniBand® Trade Association (IBTA), which defines standards and ensures interoperability between different vendors' equipment. The association is steered by a committee made up of over 40 companies, including HPE, Intel and NVIDIA.

The trade association ensures that the InfiniBand specification continues to be developed year after year through ongoing investment, and that the technology still has considerable headroom in its capabilities.

For further information, visit https://www.infinibandta.org

ATGBICS Solutions

ATGBICS supplies the range of InfiniBand speeds listed below:

 

InfiniBand data rate table

  • QDR InfiniBand provides a 40 Gbps link. QDR InfiniBand may be used as 10 Gbps or 40 Gbps Ethernet.
  • FDR10 InfiniBand provides an effective bandwidth of 40 Gbps. Each lane of a 4x port runs at a bit rate of 10.3125 Gbps with 64b/66b encoding, which carries more user data per transmitted bit than the 8b/10b encoding used by legacy InfiniBand speeds. FDR10 can be used as 10 Gbps or 40 Gbps Ethernet.
  • FDR InfiniBand provides a 56 Gbps connection. Each lane of a 4x port runs at a bit rate of 14.0625 Gbps with 64b/66b encoding, giving an aggregate rate of 56.25 Gbps (a worked version of this encoding arithmetic follows this list).
  • EDR InfiniBand provides a 100 Gbps connection. Each lane of a 4x port runs at a bit rate of 25 Gbps with 64b/66b encoding.
  • HDR InfiniBand provides a 200 Gbps connection. Each lane of a 4x port runs at a bit rate of 50 Gbps with 64b/66b encoding.
  • NDR InfiniBand provides a 400 Gbps connection. Each lane of a 4x port runs at a bit rate of 100 Gbps with 64b/66b encoding.
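
The headline figures above correspond to the number of lanes multiplied by the per-lane bit rate; with 64b/66b encoding, roughly 64/66 of that raw rate is available for user data (real links also add framing and forward error correction overhead that this simple calculation ignores). The sketch below, which is illustrative rather than taken from the IBTA specification, works through that arithmetic for a standard 4x port.

    /* Illustrative arithmetic for 4x InfiniBand links: aggregate signalling
     * rate = lanes x per-lane bit rate, and the share left for user data
     * after 64b/66b encoding. Per-lane rates follow the list above. */
    #include <stdio.h>

    int main(void)
    {
        struct { const char *gen; double lane_gbps; } rates[] = {
            { "FDR", 14.0625 },
            { "EDR", 25.0    },
            { "HDR", 50.0    },
            { "NDR", 100.0   },
        };
        const int lanes = 4;                /* a standard 4x port */
        const double enc = 64.0 / 66.0;     /* 64b/66b encoding efficiency */

        for (int i = 0; i < 4; i++) {
            double raw  = lanes * rates[i].lane_gbps;   /* signalling rate */
            double data = raw * enc;                    /* payload after encoding */
            printf("%-3s 4x: %6.2f Gbps raw, %6.2f Gbps after 64b/66b\n",
                   rates[i].gen, raw, data);
        }
        return 0;
    }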

NVIDIA Mellanox is one of the leading suppliers of InfiniBand products, and ATGBICS offers an extensive portfolio of NVIDIA Mellanox® compatible transceivers, direct attach, and active optical cables, with over 600 SKUs available supporting both Ethernet & InfiniBand.

Our NVIDIA Mellanox compatible products are designed to support various industries, even in the harshest environments. With our comprehensive range of compatible alternatives, we ensure that we can meet your clients' demands with fast delivery.

ATGBICS' vendor-agnostic range of Ethernet & InfiniBand products is specifically designed for use in ‘open standard’ platforms that do not have specific firmware coding requirements for compatibility with a particular brand of networking equipment.

View our NVIDIA Mellanox compatible products.