Socket Direct®

Maximize Data Center Performance and Increase ROI

An innovative network adapter architecture, NVIDIA® Mellanox® Socket Direct®, enables direct PCIe access to multiple CPU sockets, eliminating the need for network traffic to traverse the inter-processor bus. This optimizes overall system performance and maximizes throughput for the most demanding applications and markets.

Eliminate Traffic Bottlenecks

Based on NVIDIA Mellanox Multi-Host® technology, NVIDIA Mellanox Socket Direct technology enables several CPUs within a multi-socket server to connect directly to the network, each through its own dedicated PCIe interface, either via a connection harness that splits the PCIe lanes between two cards or by bifurcating a PCIe slot for a single card. Network traffic no longer has to traverse the internal bus between the sockets, which significantly reduces overhead and latency while also lowering CPU utilization and increasing network throughput. Mellanox Socket Direct also improves the performance of Artificial Intelligence and Machine Learning applications, as it enables native GPU-Direct® technologies.
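
To make the benefit concrete on the host side, the short sketch below (an illustration, not part of the original page) shows one way an application might discover which network interface sits on which CPU socket and prefer the local one. It assumes a Linux host where each Socket Direct PCIe function appears as its own network device, and it reads only standard sysfs attributes; the node number passed in is an example.

```python
# Minimal sketch: choose the network interface whose PCIe function is local to a
# given NUMA node. Assumes a Linux host where each Socket Direct PCIe function
# shows up as its own netdev (interface names vary by system).
import os
from pathlib import Path

def numa_node_of(iface: str) -> int:
    """Return the NUMA node of the PCIe device backing a network interface."""
    node_file = Path(f"/sys/class/net/{iface}/device/numa_node")
    try:
        return int(node_file.read_text().strip())
    except (FileNotFoundError, ValueError):
        return -1  # virtual device or unknown topology

def local_interface(preferred_node: int) -> str | None:
    """Pick an interface attached to the requested CPU socket's PCIe root."""
    for iface in sorted(os.listdir("/sys/class/net")):
        if numa_node_of(iface) == preferred_node:
            return iface
    return None

if __name__ == "__main__":
    for iface in sorted(os.listdir("/sys/class/net")):
        print(f"{iface}: NUMA node {numa_node_of(iface)}")
    print("Interface local to node 1:", local_interface(1))
```

Keeping each worker on the interface attached to its own socket is what keeps its traffic off the inter-processor bus.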

HIGHLIGHTS

LATENCY IMPROVEMENT: 80% reduction
CPU UTILIZATION: 50% less
THROUGHPUT: 16-28% improvement
NETWORK PROTOCOLS: Ethernet/InfiniBand

BENEFITS

Flexible Form Factors Across Multiple Data Speeds

– ConnectX-6 Dx Multi-Host OCP 3.0 cards can connect a 200GbE port to up to 4 PCIe Gen4 x4 slots
– ConnectX-6 Socket Direct cards provide HDR 200Gb/s or 200GbE ports over two PCIe Gen3 x16 slots
– ConnectX-6 OCP 3.0 cards provide HDR 200Gb/s or 200GbE ports to up to 4 PCIe Gen4 x4 slots
– ConnectX-5 Socket Direct cards provide an EDR 100Gb/s or 100GbE transmission rate over two PCIe Gen3 x8 slots (the sketch below shows how such lane splits appear to the host)
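
For readers who want to verify a lane split from the operating system, here is a minimal sketch (an illustration, not part of the original page). It assumes a Linux host and reads standard sysfs attributes to report the negotiated link width, link speed, and NUMA node of every Mellanox/NVIDIA PCIe function, so an x16 card split into 2x8, or an OCP 3.0 card split into 4x4, shows up as the corresponding set of narrower functions.

```python
# Sketch: report the negotiated PCIe link width and speed of each Mellanox/NVIDIA
# network function on a Linux host (vendor ID 0x15b3 is Mellanox).
from pathlib import Path

PCI_ROOT = Path("/sys/bus/pci/devices")
MELLANOX_VENDOR = "0x15b3"

def mellanox_functions():
    """Yield (bdf, link_width, link_speed, numa_node) for Mellanox PCIe functions."""
    for dev in sorted(PCI_ROOT.iterdir()):
        try:
            if (dev / "vendor").read_text().strip() != MELLANOX_VENDOR:
                continue
            width = (dev / "current_link_width").read_text().strip()
            speed = (dev / "current_link_speed").read_text().strip()
            node = (dev / "numa_node").read_text().strip()
        except FileNotFoundError:
            continue
        yield dev.name, width, speed, node

if __name__ == "__main__":
    for bdf, width, speed, node in mellanox_functions():
        print(f"{bdf}: x{width} @ {speed}, NUMA node {node}")
```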

Enhanced Performance That is Easy to Manage

Socket Direct adapters can be connected to a BMC using MCTP over SMBus or MCTP over PCIe, just like a standard NVIDIA PCIe stand-up adapter. The chosen management interface carries communication between the platform management subsystem and the adapter, so Socket Direct adapters can be configured transparently by the chosen server management solution.

Socket Direct Removes the Load on the Inter-processor Bus

Socket Direct utilizes the same underlying technology that enables Multi-Host, applied to different CPUs within the same server rather than to different hosts. When the server's external throughput is measured under inter-processor load, Socket Direct improves throughput by 16%-28% compared to a standard adapter connected to a single CPU.
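
A rough way to reproduce this kind of comparison on your own hardware is sketched below. It is not from the original page, and the server address, NUMA node numbers, and the choice of numactl plus iperf3 are assumptions; the idea is simply to pin the traffic generator first to the NUMA node local to the adapter's PCIe function and then to the remote node, exposing the cost of crossing the inter-processor bus with a conventional single-socket adapter connection.

```python
# Rough benchmark sketch (assumes Linux with numactl and iperf3 installed, and an
# iperf3 server already running on the remote host). Compares throughput when the
# client is pinned to the NUMA node local to the NIC versus the remote node.
import json
import subprocess

SERVER = "192.0.2.10"   # placeholder address of the iperf3 server

def run_pinned(node: int, seconds: int = 10, streams: int = 4) -> float:
    """Run iperf3 pinned to one NUMA node; return received throughput in Gb/s."""
    cmd = [
        "numactl", f"--cpunodebind={node}", f"--membind={node}",
        "iperf3", "-c", SERVER, "-t", str(seconds), "-P", str(streams), "-J",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    result = json.loads(out.stdout)
    return result["end"]["sum_received"]["bits_per_second"] / 1e9

if __name__ == "__main__":
    local = run_pinned(0)    # assume the tested PCIe function sits on node 0
    remote = run_pinned(1)   # traffic from node 1 must cross the inter-CPU bus
    print(f"NUMA-local: {local:.1f} Gb/s, remote socket: {remote:.1f} Gb/s")
```

With a Socket Direct adapter providing a PCIe function on each socket, both placements can stay NUMA-local, which is the effect behind the 16%-28% throughput improvement quoted above.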

MULTI-HOST AND SOCKET DIRECT COMPARISON

Model | Connectors | 400GbE Ports | 200GbE Ports | 100GbE Ports | 50GbE Ports | 40GbE Ports | 25/10/1GbE Ports | Height | Max Throughput | Total Packets per Second
SN4600C | 64x QSFP28 100GbE | – | – | 64 | 128** | 64 | 128** | 2U | 12.8Tb/s | 8.4Bpps
SN4700 | 32x QSFPDD 400GbE | 32 | 64** | 128** | 128** | 64** | 128** | 1U | 25.6Tb/s | 8.4Bpps
SN4410* | 32x QSFPDD 400GbE | 8 | 16** | 24/48** | 128** | 64** | 128** | 1U | 8Tb/s | 8.4Bpps
SN4600* | 64x QSFP56 200GbE | – | 64 | 128** | 128** | 64 | 128** | 2U | 25.6Tb/s | 8.4Bpps
SN4800* | 128x QSFP28 100GbE | 32 | 64 | 128 | 128** | 64 | 128 | 4U | 25.6Tb/s | 8.4Bpps
SN4800* | 32x QSFPDD 400GbE | 32 | 64 | 128 | 128** | 64 | 128 | 4U | 25.6Tb/s | 8.4Bpps
SN4800* | 64x QSFP56 200GbE | 32 | 64 | 128 | 128** | 64 | 128 | 4U | 25.6Tb/s | 8.4Bpps

CONNECTX COMPARISON

PRODUCT | ORDERING PART NO. | MAX. SPEED | PORTS | CONNECTORS | ASIC & PCI DEV ID | FORM FACTOR & PCI LANES
ConnectX-6 Dx | MCX623435MN-CDAB | 100GbE | 1 | QSFP56 | ConnectX-6 Dx | OCP 3.0, Multi-Host or Socket Direct, PCIe 4.0 x16 (1x16/2x8/4x4)
ConnectX-6 Dx | Contact NVIDIA | 100GbE | 1 | DSFP | ConnectX-6 Dx | OCP 3.0, Multi-Host or Socket Direct, PCIe 4.0 x16 (1x16/2x8/4x4)
ConnectX-6 Dx | Contact NVIDIA | 100GbE | 1 | QSFP56 | ConnectX-6 Dx | Socket Direct, PCIe 4.0 x16 split into two x8 (2x8 in a row)
ConnectX-6 Dx | Contact NVIDIA | 200GbE | 1 | QSFP56 | ConnectX-6 Dx | OCP 3.0, Multi-Host or Socket Direct, PCIe 4.0 x16 (1x16/2x8/4x4)
ConnectX-6 VPI | MCX653105A-EFAT | HDR100, EDR IB (100Gb/s) and 100GbE | 1 | QSFP56 | ConnectX-6 | Socket Direct, PCIe 3.0/4.0 x16 split into two x8 (2x8 in a row)
ConnectX-6 VPI | MCX653106A-EFAT | HDR100, EDR IB (100Gb/s) and 100GbE | 2 | QSFP56 | ConnectX-6 | Socket Direct, PCIe 3.0/4.0 x16 split into two x8 (2x8 in a row)
ConnectX-6 VPI | MCX654105A-HCAT | HDR IB (200Gb/s) and 200GbE | 1 | QSFP56 | ConnectX-6 | Socket Direct, PCIe 3.0 x16 + PCIe 3.0 x16 auxiliary card (2x16)
ConnectX-6 VPI | MCX654106A-HCAT | HDR IB (200Gb/s) and 200GbE | 2 | QSFP56 | ConnectX-6 | Socket Direct, PCIe 3.0 x16 + PCIe 3.0 x16 auxiliary card (2x16)
ConnectX-5 VPI | MCX556M-ECAT-S25 | EDR IB (100Gb/s) and 100GbE | 2 | QSFP28 | ConnectX-5 | Socket Direct, PCIe 3.0 x8 + PCIe 3.0 x8 auxiliary card, 25cm harness (2x8)
ConnectX-5 VPI | MCX556M-ECAT-S35A | EDR IB (100Gb/s) and 100GbE | 2 | QSFP28 | ConnectX-5 | Socket Direct, PCIe 3.0 x8 + PCIe 3.0 x8 auxiliary card, 35cm harness (2x8)

*For Socket Direct Ethernet virtualization or Socket Direct dual-port use cases, please contact NVIDIA Mellanox customer support.

Contact Our Team

If you need more information about our products, do not hesitate to contact our dedicated team.

Contact Us