NVIDIA ConnectX-3 based NIC PCIe 3.0 x8 Dual Port 40GbE Open QSFP+ FDR IFB
The ConnectX-3 Pro dual-port PCIe 3.0 x8 adapter card with Virtual Protocol Interconnect (VPI) supports both InfiniBand (up to FDR 56Gb/s) and 40Gb Ethernet connectivity, with a hardware offload engine for overlay-network ("tunneling") protocols. This makes it well suited to public and private clouds, clustered databases, parallel processing, transactional services and high-performance embedded I/O applications.
Embedded I/O applications in particular see significant performance improvements, making this a flexible interconnect solution that reduces completion time and cost per operation. ConnectX-3 Pro improves network performance by increasing available bandwidth while decreasing the associated transport load on the CPU, especially in virtualized server environments.
System Requirements
- FreeBSD, Linux, VMware ESXi
- Windows Server 2008 R2 / 2012 R2 / 2016 / 2019
- Windows 7/8/8.1/10 (32/64-bit)
- One available PCI Express x8/x16 slot
Specification
- PCIe 3.0 x8 host interface, backward compatible with PCIe 2.0 and 1.1
- Two 40 Gigabit QSFP+ Ethernet ports
- Compliant with QSFP+ MSA Spec Rev 1.0
- Two QSFP ports supporting FDR-14 InfiniBand or 40Gb Ethernet
- Support for InfiniBand FDR speeds of up to 56Gb/s (auto-negotiation down to FDR-10, DDR and SDR)
- Low-profile form factor adapter with 2U bracket
- Virtual protocol interconnect (VPI)
- InfiniBand Architecture specification v1.2.1 compliant
- IEEE Std. 802.3 compliant
- Compatible with copper and optical cables using QSFP connectors
- Support for SFP+ cables via a QSA (QSFP-to-SFP+ adapter)
- CORE-Direct® application offload
- GPUDirect application offload
- RDMA over converged Ethernet (RoCE)
- End-to-End QoS and congestion control
- TCP/UDP/IP stateless off-load
- Ethernet over InfiniBand (EoIB) encapsulation
- SR-IOV support: 16 virtual functions supported under KVM and Hyper-V (OS dependent), up to a maximum of 127 virtual functions supported by the adapter
- Enables low-latency RDMA over 40Gb Ethernet (supported on both non-virtualized and SR-IOV-enabled virtualized servers), with latency as low as 1µs
- Traffic steering across multiple cores
- Microsoft VMQ/VMware Net Queue support
- Industry-leading throughput and latency performance
- Legacy and UEFI PXE network boot support
- Software iSCSI initiator support in NIC mode via the NIC driver
- Supported operating systems: FreeBSD, Linux 5.x and above, VMware ESXi, Windows Server 2008/2012/2016/2019, Windows 7/8/8.1/10 (32- or 64-bit)
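Because this is a VPI card, each port's personality can be switched between InfiniBand and Ethernet in firmware. A minimal sketch of how this is typically done on Linux with NVIDIA's MFT tools (the device path below is an example only and will differ per system):

```shell
# Start the Mellanox Software Tools service (part of the MFT package)
mst start

# List detected devices to find this card's device path
mst status

# Set both ports to Ethernet (1 = InfiniBand, 2 = Ethernet, 3 = VPI auto-sense);
# /dev/mst/mt4103_pci_cr0 is an example path for a ConnectX-3 Pro
mlxconfig -d /dev/mst/mt4103_pci_cr0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2

# Reboot, or reload the mlx4 driver, for the new link type to take effect
```

The change is stored in the adapter's firmware configuration, so it persists across reboots until changed again.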
- Form Factor
- Connector Type
- Data Rate
- Storage Temperature
- Bracket Height
- Bus Type
- Bus Width
- Chipset
- Protocol
- Link Rate
- Number of Ports
- RDMA
- Compatible OEM
- Operating Temperature Range
- Safety & Environmental Standards
- Warranty
- Dimensions
- ECCN