MFS1S50-H020E InfiniBand HDR 200Gb/s-to-2x100Gb/s one-to-two IB optical splitter cables, wholesale; both original and compatible versions in stock. TEL: 13910098771
Related models: MFS1S50-V030E, MFS1S50-H015E, MFS1S50-H020E, MFS1S50-H005E, MFS1S50-H010E. Wholesale of NVIDIA/Mellanox optical cables, network switches, fiber network adapters (HCA cards), optical modules, cables, switch power supplies, copper cables, InfiniBand/IB switches, and other complete systems and accessories.
MFS1S50-H030E Mellanox active fiber splitter cable, IB HDR, 200Gb/s to 2x100Gb/s, QSFP56 to 2xQSFP56, LSZH, 30m. InfiniBand HDR 200Gb/s one-to-two network splitter cable (single 200Gb/s port on the switch end, two 100Gb/s ports on the host end, 30m length).
200Gb/s HDR to 2x100Gb/s HDR100 QSFP56 to 2xQSFP56 MMF Active Optical Splitter Cable
NVIDIA Networking MFS1S50-H030E is a cost-effective, QSFP56, VCSEL-based (Vertical-Cavity Surface-Emitting Laser) 200Gb/s to 2x100Gb/s active optical breakout cable (AOC) designed for use in 200Gb/s InfiniBand HDR (High Data Rate) systems.
The MFS1S50-HxxxE cable is compliant with SFF-8665 for the QSFP56 pluggable solution. It provides connectivity between system units with a 200Gb/s connector on one side and two separate 100Gb/s connectors on the other side, such as a switch and two servers. The cable routes the data signals from the four MMF (multimode fiber) pairs on the single QSFP56 end to two fiber pairs on each of the breakout QSFP56 ends. Each QSFP56 end of the cable contains an EEPROM providing product and status monitoring information, which can be read by the host system.
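As an illustration only, the following is a minimal sketch of how a Linux host could read that EEPROM information through the standard ethtool utility, which dumps the management pages of a plugged module or cable end. The interface name "ib0" is an assumption for the example and will differ on a real system.

```python
# Hedged sketch: dump the QSFP56 cable end's EEPROM (vendor, part number,
# serial number and, where supported, monitoring data) from a Linux host
# via `ethtool -m`. The interface name "ib0" is a placeholder assumption.
import subprocess

def read_cable_eeprom(interface: str = "ib0") -> str:
    """Return the decoded module EEPROM dump for the given interface."""
    result = subprocess.run(
        ["ethtool", "-m", interface],  # -m / --dump-module-eeprom
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(read_cable_eeprom())
```

On hosts and cables that support it, the dump typically includes the identification and status fields mentioned above; the exact fields shown depend on the driver and firmware.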
Rigorous production testing ensures the best out-of-the-box installation experience, performance and durability.
NVIDIA's unique quality active fiber cable solutions provide power-efficient connectivity for data center interconnects. They enable higher port bandwidth, density and configurability at a low cost, and reduce power requirements in the data center.
Highlights
Supports IBTA InfiniBand HDR
200Gb/s HDR to 2x100Gb/s HDR100 data rate
4x 50Gb/s PAM4 modulation
Programmable Rx output amplitude and pre-emphasis
SFF-8665 compliant QSFP56 port
Single 3.3V power supply
4.35W power dissipation (typ., 200G end)
Bit Error Rate (BER) better than 1E-15
Up to 30m length
Hot pluggable
RoHS compliant
SFF-8636 compliant I2C management interface (see the sketch after this list)
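To make the SFF-8636 management interface item concrete, here is a minimal sketch of reading a couple of identification fields directly over I2C with the Python smbus2 package. The bus number, the conventional module address 0x50, and the SFF-8636 byte offsets used (vendor name at bytes 148-163 and vendor part number at bytes 168-183 of upper page 00h) are assumptions for illustration; in practice these fields are normally read through the host NIC or switch management path (for example ethtool, as shown earlier) rather than raw I2C.

```python
# Hedged sketch: read SFF-8636 identification fields over raw I2C/SMBus.
# Bus number, module address and direct I2C access are illustrative
# assumptions; production tools use the host management path instead.
from smbus2 import SMBus

QSFP_ADDR = 0x50          # conventional 2-wire address for QSFP modules
VENDOR_NAME_OFFSET = 148  # SFF-8636 upper page 00h: vendor name, 16 bytes
VENDOR_PN_OFFSET = 168    # SFF-8636 upper page 00h: vendor part number, 16 bytes

def read_field(bus: SMBus, offset: int, length: int = 16) -> str:
    raw = bus.read_i2c_block_data(QSFP_ADDR, offset, length)
    return bytes(raw).decode("ascii", errors="replace").strip()

# Assumes upper page 00h is selected (the power-on default) and that the
# module sits on I2C bus 1 of the host; both depend on the platform.
with SMBus(1) as bus:
    print("Vendor:", read_field(bus, VENDOR_NAME_OFFSET))
    print("Part number:", read_field(bus, VENDOR_PN_OFFSET))
```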
NVIDIA MCX512A-ACUT ConnectX-5 EN Adapter Card, 10/25GbE Dual-Port SFP28, PCIe 3.0 x8, UEFI Enabled (x86/ARM), Tall Bracket
NVIDIA MCX512A-ADAT ConnectX-5 Ex EN Adapter Card, 25GbE Dual-Port SFP28, PCIe 3.0/4.0 x8, Tall Bracket
NVIDIA MCX512A-ACAT ConnectX-5 EN Adapter Card, 10/25GbE Dual-Port SFP28, PCIe 3.0 x8, Tall Bracket, RoHS R6
NVIDIA MCX512F-ACAT ConnectX-5 EN Adapter Card, 25GbE Dual-Port SFP28, PCIe 3.0 x16, Tall Bracket
NVIDIA/Mellanox fiber network adapters (HCA/IB cards), optical cables, switches, modules, power supplies and other complete systems and accessories available for wholesale.
Features
Tag matching and rendezvous offloads
Adaptive routing on reliable transport
Burst buffer offloads for background checkpointing
NVMe over Fabric offloads
Backend switch elimination by host chaining
Embedded PCIe switch
Enhanced vSwitch/vRouter offloads
Flexible pipeline
RoCE for overlay networks
PCIe Gen 4.0 support
RoHS compliant
ODCC compatible
Various form factors available
Benefits
Up to 100Gb/s connectivity per port
Industry-leading throughput, low latency, low CPU utilization and high message rate
Innovative rack design for storage and Machine Learning based on Host Chaining technology
Smart interconnect for x86, Power, Arm, and GPU-based compute and storage platforms
Advanced storage capabilities including NVMe over Fabric offloads
Intelligent network adapter supporting flexible pipeline programmability
Cutting-edge performance in virtualized networks including Network Function Virtualization (NFV)
Enabler for efficient service chaining capabilities
Efficient I/O consolidation, lowering data center costs and complexity
Mellanox MCX511F-ACAT ConnectX-5 EN Network Interface Card, 25GbE Single-Port SFP28, PCIe 3.0 x16
Mellanox MCX512A-ACAT ConnectX-5 EN Network Interface Card, 10/25GbE Dual-Port SFP28, PCIe 3.0 x8
Mellanox MCX512F-ACAT ConnectX-5 EN Network Interface Card, 25GbE Dual-Port SFP28, PCIe 3.0 x16, Tall Bracket
Mellanox switches, optical adapter cards (HCA cards), switch power supplies and other complete systems and accessories available for wholesale.
Technology: Ethernet
Max Speed: 25GbE
Ports: Single
PCI: PCIe 3.0 x16
Connector Type: SFP28/SFP+/SFP
Condition: New
ECCN: 5A991
Mellanox MCX353A-FCCT ConnectX-3 40Gb FDR IB fiber network adapter; Mellanox MCX353A-FCBT 40Gb Ethernet / 56Gb IB HCA card
Mellanox ConnectX-3 MCX353A-FCBT QSFP FDR IB 56Gb/s & 40/56GbE PCIe3.0 x8
MCX353A-FCBT, MCX354A-FCBT, MCX354A-QCBT, MCX353A-QCBT, MHQH29B-XTR, MHQH19B-XTR and other Mellanox InfiniBand cards; Mellanox switches and IB cards, wholesale. TEL: 13910098771