NVMe has added TCP to its existing family of transports. Chelsio already supports JBOFs and delivers excellent results with the iWARP RDMA transport for NVMe-oF, demonstrating latencies of less than 5 μs between remote and local SSD access. iWARP remains a more robust fabric choice than RoCE for this purpose. With its unique TOE capability, Chelsio's NVMe/TCP provides exceptional performance and isolates host application performance from network jitter. For these reasons, T6 is an ideal JBOF solution: time to revenue today with iSCSI, and with NVMe/TCP, NVMe-oF, and iSER as they gain adoption, with no software or hardware update to T6.
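As a concrete illustration, a Linux host with the standard nvme-cli tools can attach a remote NVMe/TCP namespace in a few commands. This is a minimal sketch; the target address 10.0.0.2, port 4420, and subsystem NQN below are placeholders, not values from any particular JBOF.

```shell
# Load the NVMe/TCP host transport module
modprobe nvme-tcp

# Discover the subsystems exported by the remote target (placeholder address)
nvme discover -t tcp -a 10.0.0.2 -s 4420

# Connect to a discovered subsystem by its NQN (placeholder NQN)
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2014-08.org.example:jbof1

# The remote namespace now appears as a local block device (e.g. /dev/nvme1n1)
nvme list
```

The same discover/connect flow applies to the RDMA transport by substituting `-t rdma`, which is how an iWARP-capable adapter would attach the same JBOF.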
Chelsio T6 Unified Wire adapters, delivering unprecedented performance, are shipping with NetApp AFF A800 and A700 all-flash and FAS9000 storage systems; Dell-EMC SCv3000, SC5020/SC5020F, SC7020/SC7020F and SC9000 storage arrays; and IBM Storage SV1, V5000, and V7000 series high-performance systems.
Chelsio announced that Axellio selected T6 100GbE iWARP adapters for its FabricXpress® Windows Server Software-Defined (FX-WSSD) solutions, an all-flash hyper-converged infrastructure (HCI) appliance based on Microsoft Windows Server 2016 and 2019.
The Chelsio CNA solution offers a comprehensive suite of concurrent storage (iSCSI, iSER, FCoE, NVMe-oF, S2D), compute, and networking protocol offloads. The $299, 2x10/25G cards provide CNA hardware-offload initiator functions on Windows, Linux and ESXi. Multi-boot support and mesh/ring support (the ability to interconnect the adapters in small clusters without requiring an external switch) are also bundled in.
Chelsio Unified Wire and CNA solutions enable incremental, non-disruptive server installs and can operate without any discrete external network switch, delivering a brownfield strategy for high-performance, low-cost, scalable Windows Server Software-Defined (WSSD) deployments.
Microsoft published a blog on Storage Replica (SR) performance with low-latency, high-throughput, low-CPU 100G iWARP RDMA networking on Windows Server 2019.
· 1TB of data was replicated in 95 seconds.
· Windows Server 2019 provides up to 2-3X the replication rate of Server 2016, thanks to new performance improvements in the SR log system.
Chelsio adapters power a range of Windows capabilities, including:
· Storage Spaces Direct (S2D) software-defined storage
· Storage Replica for disaster recovery
· Network Direct for Windows HPC deployments
· Hardware offloaded iSCSI initiator
· Hardware offloaded iSER initiator
· Hardware offloaded FCoE initiator
· Hardware offloaded NVMe-oF initiator
· Client RDMA for Windows Clients
· d.VMMQ support
· NVGRE and VXLAN encapsulation offload
· Guest RDMA support
· SR-IOV for Virtual Environments
· Switchless configuration for WSSD installs
· DPDK for Windows for accelerating user-mode applications
Chelsio Unified Wire for AMD EPYC
Chelsio T6 solutions are now fully qualified with AMD EPYC platforms. The combination of the AMD EPYC server with Chelsio's adapters delivers compelling performance, power and total cost of ownership (TCO) advantages.
· L2 NIC
· TCP/IP offload with 1% CPU usage
· NVMe-oF offload with 2.4M IOPS and minimal CPU usage
· iSCSI offload with 2.7M IOPS and minimal CPU usage
· OVS offload with a 47 Mpps small-packet processing rate
· S2D offload without requiring an external switch
· Chelsio released a complete set of offload initiator drivers (iSCSI, iSER, NVMe-oF and FCoE) for Windows, Linux, and VMware.
· Chelsio released GPUDirect RDMA and Mesh drivers for Linux as part of the latest Unified Wire release.
Chelsio added a number of news reports and white papers to its technical library, including:
· Chelsio published a paper comparing the two RDMA over Ethernet technologies, iWARP and RoCEv2. Across several metrics, iWARP emerges as the best option for a variety of applications.
· Chelsio published a paper on configuring an NVMe-oF JBOF using a 100G iWARP RDMA fabric. With 98 Gbps throughput and 2.5M IOPS, T6 iWARP RDMA provides a cost-effective, low-latency, high-performance network for accessing remote NVMe storage devices.
· Chelsio published performance results for SPDK NVMe-oF over a 100G iWARP RDMA fabric. With only a 4.7 μs delta latency between remote and local storage, T6 iWARP RDMA is the ideal choice for user-mode storage applications.
· Chelsio published preliminary 100G NVMe/TCP results, delivering line-rate throughput, 2.6M IOPS and only a 12 μs delta latency between remote and local storage.
· Axellio demonstrated an outstanding 4.3 million IOPS on a four-node WSSD cluster running Windows Server 2019, using Storage Spaces Direct (S2D) technology and Chelsio 100Gb iWARP RDMA adapters.
· Chelsio published S2D performance results using its 25GbE Converged Network Adapters in a 3-node cluster setup. Bandwidth exceeded 36 GB/s, which is very impressive for a 3-node cluster, and IOPS reached 917K across 60 virtual machines, i.e. more than 14,000 IOPS per virtual machine. The T6 CNA solution enables cost-effective S2D deployments.
· Chelsio published a quick start guide for a switchless Windows Storage Spaces Direct installation in a mesh or ring configuration, drastically reducing costs in a typical 3-5 node install.
· Chelsio published NFS over RDMA performance results using iWARP. The paper compares the results against a regular NIC, highlighting the advantages of RDMA.
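For reference, mounting an NFS export over an RDMA transport on a Linux client requires only standard mount options. This is a minimal sketch; the server name nfs-server and export path /export are placeholders.

```shell
# Load the RDMA transport module for the NFS client
modprobe xprtrdma

# Mount the export over RDMA; 20049 is the IANA-assigned NFS/RDMA port
# (server name and export path are placeholders)
mount -t nfs -o rdma,port=20049,vers=4.1 nfs-server:/export /mnt/nfs
```

Verify with `mount | grep proto=rdma`; without the `rdma` option the same mount falls back to ordinary TCP, which is the regular-NIC baseline the paper compares against.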
· Chelsio published an analysis of the savings in CPU acquisition and operational costs from using an offloaded NIC. For typical fixed applications, the savings can reach thousands of dollars while system performance increases.
· Demartek published a benchmark of iSCSI offload vs. no offload for a class of real-world applications. The results clearly demonstrated the necessity of offload for 100Gb iSCSI.
· TCP/IP & SR-IOV in virtual environments.
In the Wild
NVMe-oF with iWARP and NVMe/TCP - A Chelsio presentation at NVMe Developer Days Conference 2018.
NetApp All-flash filers deliver record breaking performance - Using AFF A800 arrays and Chelsio 100GbE iWARP RDMA adapters.
400 Gbps Disk-to-Disk WAN file transfer - NASA HECN team demonstrates using NVMe-oF and Chelsio iWARP RDMA.
NFS/RDMA over iWARP - A Chelsio presentation at SDC.
NetApp A800 benchmark - Benchmark results using NetApp A800 all-flash storage systems having T6 adapters.
Comparing the RDMA technologies - An SNIA webcast comparing iWARP and RoCE.
RoCE vs. iWARP - An SNIA Q&A blog from the RoCE vs. iWARP webcast.
Comparing the storage technologies - An SNIA webcast on FCoE, iSCSI and iSER.
FCoE vs. iSCSI vs. iSER - An SNIA Q&A blog from the FCoE vs. iSCSI vs. iSER great storage debate.
Benefits of RDMA in Accelerating Ethernet Storage - A Q&A on different RDMA alternatives.
S2D deployment using Chelsio iWARP RDMA.
Creative Advertising Group Deployment of Windows 10 Workstations and Windows Server 2016 Storage Spaces Direct Using Chelsio iWARP Yields Higher Performance and Lower Cost - Microsoft Success Story
Hyper-Converged Infrastructure Improves Health IT ROI - Providing full support for hyper-converged S2D deployments without requiring Top-of-Rack switches to support DCB capabilities.
Chelsio Storage over IP Enable Data Infrastructures - Addressing the various application and workload demands by enabling more robust, productive, effective and efficient data infrastructures.
· See a complete list of recent Chelsio newsletters here.
· See a complete list of recent Chelsio white papers here.
· See a complete list of Chelsio performance reports here.
· See a complete list of Competitive Analysis reports here.
· See a complete list of Case Studies here.
· See a complete list of Videos here.
· See a complete list of Webinars here.
· See a complete list of Blogs here.