
InfiniBand - Technology Overview

InfiniBand is a high performance networking technology designed for environments where low latency and high bandwidth are critical, such as high performance computing (HPC), artificial intelligence, and large scale data centres.

InfiniBand networks operate on a high speed switched fabric topology, in which traffic can take multiple paths through the fabric, improving both reliability and performance. This architecture delivers extremely low latency, with end-to-end data transmission times as low as 600ns, and supports data rates from 10Gbps up to 800Gbps. As one of the fastest interconnect options on the market, InfiniBand is used by 63 of the top 100 fastest supercomputers.
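
As a rough illustration of how a host attached to such a fabric can inspect its local link, the sketch below uses the Linux libibverbs (rdma-core) API to open the first InfiniBand adapter it finds and print a port's state, width, speed and local LID. Choosing device index 0 and port number 1 is an assumption made for the example, and the width and speed fields are encoded values defined by the verbs specification rather than Gbps figures.

```c
/* Minimal sketch: query a local InfiniBand port with libibverbs (rdma-core).
 * Assumes at least one HCA is present; device index 0 and port number 1
 * are assumptions for this example. Build with: gcc query_port.c -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "No RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) {
        fprintf(stderr, "Failed to open %s\n", ibv_get_device_name(devs[0]));
        return 1;
    }

    struct ibv_port_attr port;
    if (ibv_query_port(ctx, 1, &port)) {   /* verbs port numbers start at 1 */
        fprintf(stderr, "ibv_query_port failed\n");
        return 1;
    }

    printf("Device:       %s\n", ibv_get_device_name(devs[0]));
    printf("Port state:   %s\n", ibv_port_state_str(port.state));
    printf("Active width: %u (encoded lane count)\n", (unsigned)port.active_width);
    printf("Active speed: %u (encoded per-lane rate)\n", (unsigned)port.active_speed);
    printf("Local LID:    %u\n", (unsigned)port.lid);

    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```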

A key feature of InfiniBand is Remote Direct Memory Access (RDMA), which allows data to be transferred directly from one system's memory to another without CPU intervention. This significantly reduces latency and offloads some of the processing from the CPU, increasing performance in applications requiring large data exchanges and real time communication, such as AI and machine learning training clusters.
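
To make the RDMA model more concrete, the hedged sketch below shows the core libibverbs calls an application would use to register a local buffer and describe a one-sided RDMA write into a remote machine's memory. Queue pair creation, connection establishment and the out-of-band exchange of the remote buffer address and key are deliberately omitted, so remote_addr and rkey are placeholders rather than values the library provides; treat this as an illustrative sketch of the verbs involved, not a complete program.

```c
/* Sketch of the verbs behind a one-sided RDMA write (libibverbs).
 * Queue pair setup/connection and the out-of-band exchange of the remote
 * buffer address and rkey are omitted; remote_addr and rkey below are
 * placeholders for values normally received from the peer.
 */
#include <stdint.h>
#include <stddef.h>
#include <infiniband/verbs.h>

/* Register the buffer with a protection domain so the HCA can DMA it
 * directly; LOCAL_WRITE also lets it be used as a receive/read target. */
struct ibv_mr *register_buffer(struct ibv_pd *pd, void *buf, size_t len)
{
    return ibv_reg_mr(pd, buf, len, IBV_ACCESS_LOCAL_WRITE);
}

/* Post an RDMA write from a registered local buffer to a remote buffer.
 * The HCA moves the data directly between the two memory regions; the
 * remote CPU is not involved in the transfer. */
int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                    void *local_buf, size_t len,
                    uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,
        .length = (uint32_t)len,
        .lkey   = mr->lkey,                  /* local key from ibv_reg_mr() */
    };

    struct ibv_send_wr wr = {
        .wr_id      = 1,
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_RDMA_WRITE,     /* one-sided: no receive posted remotely */
        .send_flags = IBV_SEND_SIGNALED,     /* request a completion on the send CQ */
    };
    wr.wr.rdma.remote_addr = remote_addr;    /* placeholder: exchanged out of band */
    wr.wr.rdma.rkey        = rkey;           /* placeholder: exchanged out of band */

    struct ibv_send_wr *bad_wr = NULL;
    return ibv_post_send(qp, &wr, &bad_wr);  /* returns 0 on success */
}
```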

InfiniBand also scales well, supporting fabrics with thousands of nodes, which makes it a top choice for supercomputing clusters and large data centres. However, InfiniBand infrastructure is more specialised and costly than traditional Ethernet, in terms of both hardware and expertise, so it is not well suited to less demanding network architectures.

With InfiniBand technology becoming increasingly popular in global computing networks, demand for specialised high performance components has risen to meet its unique requirements. Because InfiniBand is prominent in intensive compute and storage solutions, as well as in high performance computing and AI applications, fibre optic transceivers and cabling with very high data rate capabilities are now necessary in most high end computing infrastructures. As InfiniBand pushes towards higher speeds with the HDR (High Data Rate) and NDR (Next Data Rate) standards, manufacturers are driven to produce more advanced optical components that meet its performance requirements. Additionally, InfiniBand’s support for RDMA means that transceivers must also support RDMA compatible, low latency data paths, which are not standard in regular Ethernet technologies.

Overall, InfiniBand’s high speed, low latency capabilities position it as a critical technology in applications where maximum network performance is essential, though it remains a specialised solution compared to more general networking technologies.

See our Tech Talk post for more information on InfiniBand technology and its relevance in the future of high end computing.