Chelsio Update, September 2018
With a comprehensive suite of offloaded storage, compute and networking protocols, including iWARP (RDMA/TCP), TCP/IP, UDP/IP, NVMe over Fabrics (NVMe-oF), iSCSI, iSER, FCoE, T10-DIX, IPsec, TLS/SSL, DTLS, DPDK, OVS, SR-IOV, VXLAN, traffic management and classification, T6 enables network convergence while dramatically increasing host system efficiency and lowering communication overhead. T6 adapters enable customers to concurrently improve performance and reduce system costs: a ~10% CPU reduction yields a ~$1400 BOM reduction in a typical storage head, and Chelsio's T6 generally recovers much more than 10% of the CPU. It is thus generally cheaper to design in Chelsio solutions than to use any other NIC.
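As a rough sketch of the arithmetic behind that claim, the Python model below treats recovered CPU cycles as displaced CPU spend. The $14,000 CPU cost and 40-core count for the storage head are illustrative assumptions, not Chelsio figures.

```python
# Back-of-the-envelope model of BOM savings from protocol offload.
# All inputs are illustrative assumptions, not Chelsio-published data.

def bom_savings(cpu_cost_usd: float, cores: int, fraction_recovered: float):
    """Estimate CPU spend displaced when an offload NIC frees host cycles."""
    cores_freed = cores * fraction_recovered
    savings_usd = cpu_cost_usd * fraction_recovered
    return cores_freed, savings_usd

# Assumed dual-socket storage head: $14,000 of CPU across 40 cores.
cores_freed, savings = bom_savings(14_000, 40, 0.10)
print(f"~{cores_freed:.0f} cores freed, ~${savings:,.0f} of CPU spend recovered")
# With these assumptions, a 10% CPU reduction maps to ~$1,400 of BOM,
# in line with the figure quoted above.
```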
Partners

Chelsio T6 Unified Wire adapters, delivering unprecedented performance in hybrid flash storage environments, are shipped with NetApp's AFF A800 and A700 all-flash and FAS9000 storage systems. Chelsio iWARP RDMA enables ease of deployment and interoperability with 100% of the switch infrastructure in the industry.
Chelsio adapters are shipped with Dell-EMC SCv3000, SC5020/SC5020F, SC7020/SC7020F and SC9000 storage arrays. T6 25Gb and 100Gb adapters, delivering a low-latency, low-CPU-utilization, high-bandwidth, high-IOPS implementation of the iSCSI protocol, enable highly efficient iSCSI storage solutions.
Chelsio T6 Unified Wire adapters are shipped with IBM Storage SV1, V5000, and V7000 series high performance systems to enable cost-effective, flexible heterogeneous storage systems with optimized operating expense. T6 iSER offload leverages iWARP RDMA's performance superiority, zero copy, low latency and ease of use. Combined with iSER initiator drivers on VMware, Windows, and Linux, T6 offers a complete SAN storage fabric solution.
Chelsio announced the industry's first demonstration of Windows Server 2019 Storage Replica (SR) using 100 Gigabit iWARP RDMA at Microsoft Ignite 2018. The demonstration presents the iWARP RDMA advantages using SR operating in synchronous mode over a 50 km fiber loop. Microsoft also published a blog on SR performance with ultrafast flash storage and iWARP RDMA networking on Windows Server 2016. The results show the advantages of SMB Direct and iWARP RDMA offloading:
• 2TB of data was replicated in 12 minutes over a 25Gbps iWARP RDMA network. Only 10% of the CPU was utilized, freeing the server to do its real job.
• With a 100Gbps iWARP RDMA network, the same data was replicated within 4 minutes.
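As a quick back-of-the-envelope check of those figures (a sketch assuming decimal terabytes and ignoring protocol overhead), the 25GbE run was close to line rate while the 100GbE run left headroom:

```python
# Effective throughput for replicating 2 TB in the quoted times.
BYTES = 2e12          # 2 TB, assuming decimal terabytes
BITS = BYTES * 8      # bits on the wire, ignoring protocol overhead

for link_gbps, minutes in [(25, 12), (100, 4)]:
    gbps = BITS / (minutes * 60) / 1e9
    print(f"{link_gbps}G link: {gbps:5.1f} Gbps effective "
          f"({gbps / link_gbps:.0%} of line rate)")

# 25G link:   22.2 Gbps effective (89% of line rate)
# 100G link:  66.7 Gbps effective (67% of line rate)
```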
Microsoft strongly recommends iWARP RDMA as the safer alternative for deploying Storage Spaces Direct (S2D) as it does not require any configuration of DCB on network hosts or network switches and can operate over the same distances as any other TCP connection.
• Microsoft recommends iWARP for S2D
• Microsoft Recommendation on the RDMA alternatives in Windows
• Hyper-converged solution using Storage Spaces Direct in Windows Server 2016
Chelsio Unified Wire and CNA Solutions power a range of Windows capabilities:
• Storage Spaces Direct (S2D) software-defined storage
• Storage Replica for disaster recovery
• SMB Direct for high performance file access
• Network Direct for Windows HPC deployments
• Hardware offloaded iSCSI initiator
• Hardware offloaded iSER initiator
• Hardware offloaded FCoE initiator
• Hardware offloaded NVMe-oF initiator
• Client RDMA for Windows Clients
• d.VMMQ support
• NVGRE and VXLAN encapsulation offload
• Guest RDMA support
• SR-IOV for Virtual Environments
• Switchless configuration for WSSD installs
• DPDK to Windows for accelerating user-mode applications
Chelsio adapters were certified and listed on the Qualcomm AVL. Chelsio Unified Wire's leading-edge performance, security and features, combined with Qualcomm Centriq 2400 processor based ARMv8 servers, enable high-performance, scale-out, containerized, multi-tenant workloads in the datacenter.
New Products

• Chelsio announced the availability of a Converged Network Adapter (CNA) solution. It offers a comprehensive suite of concurrent storage, compute and networking protocol offloads to enable NVMe-oF, iSER, SMB Direct, S2D, iSCSI and FCoE clients. It supports 256 concurrent offloaded connections and multi-boot on the lowest-priced T6 adapters, including OCP cards, with no card memory.
• Chelsio released new quad-port adapters, the T540-LP-CR and T540-SO-CR, available in low-profile form factors. The T540-LP-CR is a Unified Wire adapter with hardware offload support for iSCSI, iSER, NVMe-oF, FCoE and iWARP RDMA. The T540-SO-CR is a Server Offload adapter with a full suite of stateless offloads, including LRO, LSO, RSS, virtualization and traffic management.
• Chelsio released Unified Wire 3.9 drivers for Linux, with support for 248 Virtual Functions on T6 adapters and enhanced NVMe-oF performance. This open-source, single-driver, single-firmware package enables running a secured unified wire at 100Gbps. The updated firmware further enables operation of offload functions in a single-chip, memory-free configuration, allowing the T6225-SO-CR and T6225-OCP cards to operate as CNA products for a very cost-effective CNA solution.

Recent Publications

Chelsio added a number of news reports and white papers to its technical library, including:
• Chelsio published a paper on Secure Data Replication at 100GbE with Windows Server 2019. The T62100-CR adapter delivered a 94 Gbps replication rate with only 5% CPU utilization and 3% memory consumption. T6 iWARP RDMA is a strong fit for enterprises and datacenters seeking a cost-effective, secure data recovery solution.
• Chelsio published preliminary benchmark data for the Windows NVMe-oF initiator. With 98 Gbps line-rate throughput, 1M IOPS and concurrent support for iSCSI, iSER and FCoE initiators, Chelsio Converged Network Adapters are best in class and well suited for Windows environments.
• XIO demonstrated a 4-node WSSD cluster using S2D and Chelsio T520-BT adapters. Cost-effective T5 iWARP RDMA adapters delivered 1.6 million IOPS.
• Chelsio published performance results with d.VMMQ in Windows Server 2019. Chelsio adapters provide seamless support for these new features, delivering line-rate throughput even from virtual machines.
• Chelsio published a paper on configuring an NVMe-oF JBOF using a 100G iWARP RDMA fabric. With 98 Gbps throughput, 2.5M IOPS and only 9 μs of delta latency between remote and local storage, T6 iWARP RDMA provides a cost-effective, low-latency, high-performance network for accessing remote NVMe storage devices.
• Chelsio published a paper highlighting the performance advantages of its OVS offload solution on an AMD EPYC 7551 based server. T6 scores 47 MPPS at challenging small I/O sizes with less than 1% CPU usage, and T6 offload delivered 3x the performance of the non-offload approach even with 10k flows.
• Chelsio published NVMe-oF results on the AMD EPYC server platform. The T6 100G iWARP RDMA offload adapter delivered 98 Gbps line-rate performance and more than 2.4M IOPS.
• Chelsio published iSCSI offload results on the AMD EPYC server platform.
The T6 100G iSCSI offload adapter delivered 98 Gbps line-rate performance and 1.5M IOPS for a cost-effective enterprise-class storage target solution built with volume, off-the-shelf hardware and software components.
• Chelsio published FreeBSD TCP Offload Engine (TOE) results on the AMD EPYC server platform. The T6 TOE with DDP solution provided line-rate 98 Gbps performance with exceptional CPU utilization, only 1% for receive and 4% for transmit. TCP offload is needed to realize the current 100Gbps and forthcoming 200Gbps and 400Gbps bandwidths.
• Chelsio T6 demonstrated 98 Gbps line-rate network throughput, providing an optimal, scalable networking building block for datacenters with AMD EPYC servers.
• Chelsio demonstrated 100G iSCSI offload performance in a white-box JBOF with the T62100-LP-CR adapter. Numbers were collected for offload and non-offload modes. iSCSI offload reached line-rate throughput of up to 98 Gbps and 2M IOPS, and delivered up to 3x the performance of non-offload.
• Chelsio published an analysis of the savings in CPU acquisition and operational costs from using an offloaded NIC. For typical fixed applications, the savings can run to thousands of dollars while increasing system performance.
• Demartek published a benchmark of iSCSI offload vs. no offload for a class of real-world applications. The results clearly demonstrated the necessity of offload for 100Gb iSCSI.
• Chelsio published a quick start guide for a Windows Storage Spaces Direct installation without a switch, in a mesh or ring configuration, drastically reducing costs in a typical 3-5 node install.

In the Wild

NetApp A800 benchmark - Benchmark results using NetApp A800 all-flash storage systems with T6 adapters.
Comparing the RDMA technologies - An SNIA webcast comparing iWARP and RoCE.
RoCE vs. iWARP - An SNIA Q&A blog from the RoCE vs. iWARP webcast.
Comparing the storage technologies - An SNIA webcast on FCoE, iSCSI and iSER.
FCoE vs. iSCSI vs. iSER - An SNIA Q&A blog from the FCoE vs. iSCSI vs. iSER great storage debate.
Chelsio Storage over IP Enable Data Infrastructures - Addressing various application and workload demands by enabling more robust, productive, effective and efficient data infrastructures.
Benefits of RDMA in Accelerating Ethernet Storage - A Q&A on the different RDMA alternatives.

Case Studies

Regional Library System Simplifies Their Data Center with a Windows Server 2016 Solution - S2D deployment using Chelsio iWARP RDMA.
Creative Advertising Group Deployment of Windows 10 Workstations and Windows Server 2016 Storage Spaces Direct Using Chelsio iWARP Yields Higher Performance and Lower Cost - Microsoft success story.
Hyper-Converged Infrastructure Improves Health IT ROI - Providing full support for hyper-converged S2D deployments without requiring Top-of-Rack switches to support DCB capabilities.

Latest Videos

• NVMe-oF performance on an Arm platform

Learn More

• See a complete list of recent Chelsio newsletters here.
• See a complete list of recent Chelsio white papers here.
• See a complete list of Chelsio performance reports here.
• See a complete list of Competitive Analysis reports here.
• See a complete list of Case Studies here.
• See a complete list of Videos here.
• See a complete list of Webinars here.
• See a complete list of Blogs here.