Chelsio has introduced Server Offload firmware to deliver the complete offload value proposition on its memory-free adapters, enabling more cost-effective CNA solutions for servers: NVMeF clients, Azure Stack installations, and iSCSI/FCoE initiators. This firmware supports 256 concurrent offloaded connections, with a roadmap to 4K connections by year-end.
T5 OpenPOWER Ready-compliant 10/40GbE Unified Wire adapters are designed for storage, cloud computing, HPC, virtualization and other data center applications in OpenPOWER compute environments. Chelsio has also joined the Open Compute Project (OCP) and announced the contribution of its 10GbE and 40GbE Unified Wire adapters to the project. These T5 adapter contributions help accelerate the future of open hardware in the networking ecosystem.
Chelsio and NVIDIA recently hosted a webinar to discuss the benefits of GPUDirect RDMA using the T5 40GbE iWARP solution. iWARP (TCP/RDMA) leverages existing Ethernet infrastructure and requires no new protocols, interoperability testing or long maturity period, making it the no-risk path for Ethernet-based, large-scale NVIDIA GPU clustering.
Chelsio launched a new and enhanced worldwide channel program, providing partners with added resources to take advantage of opportunities in high-growth markets such as Microsoft Private Cloud, GPU Computing, Software-Defined Storage and Hyper-Scale Computing.
- Chelsio released a complete driver package compatible with Microsoft Windows Server 2016 Technical Preview 5 (Full OS, Nano Server, Client), including Network, SR-IOV, SMB Direct, VXLAN, NVGRE, PacketDirect, vRSS, DCB, Storage Replica Offload and iSCSI Initiator Offload.
- Chelsio continues to lead in high performance and high efficiency Ethernet networking and delivers groundbreaking bidirectional packet processing results with the recent DPDK
- Chelsio T5 is an ideal fit for high-performance storage networking solutions across the range of storage protocols (iSCSI, iSER, SMB, SMB Direct, Lustre-RDMA, NFS-RDMA and FCoE). Chelsio’s hardware offloaded 40GbE iSCSI solution is
in-boxed in FreeBSD.
- Chelsio has released asynchronous TCP socket support for TCP offload in the FreeBSD kernel. It is now possible to maintain full Direct Data Placement (DDP) when using TOE and to achieve 40Gbps of offloaded TCP traffic on a single connection out of a single port at 1% CPU utilization, with an off-the-shelf FreeBSD kernel.
- Chelsio also released support for offloading the Linux iSCSI LIO target in the Linux kernel. In conjunction with offloaded Open-iSCSI on Server Offload adapters, it is now possible to offload iSCSI on both ends in the most cost-effective fashion while preserving Linux distribution support agreements. The hardware-offloaded iSCSI LIO target delivers all the benefits of offload for block traffic while leveraging the vast installed base of software iSCSI initiators across operating systems, making it the shortest time-to-revenue solution for high-performance SANs while NVMe Fabrics goes through the design and adoption process.
- Chelsio has released support for SCST iSCSI Driver. With this release, Chelsio’s iSCSI support is now comprehensive, with offloaded support for LIO, SCST, FreeBSD, Windows, VMware and Linux.
- Chelsio has released a driver for the latest Mac OS X release, El Capitan, with Thunderbolt support. This makes Chelsio the only high-performance connectivity provider with a full range of 40GbE storage and network solutions supporting Mac OS X.
- Chelsio has released a VMware Certified L2 native NIC driver for ESXi 6.0.
- Chelsio has released an RDMA block device driver for NVMe over Fabrics. This driver demonstrates only a 10% latency increase for remote SSD access relative to local SSD access.
- Chelsio has released support for HW Time Stamping on Linux.
- Chelsio has released iWARP RDMA support for Hadoop.
Chelsio added a number of news reports and white papers to its technical library, including:
In the Wild
See a complete list of recent Chelsio white papers
See a complete list of Chelsio performance reports
See a complete list of Competitive Analysis reports
See a complete list of Videos
See a complete list of Webinars
NVMeF: how this lightning-fast network connection works
– NVMeF over iWARP provides RDMA access to/from external storage media, typically flash, at local, direct-attach speeds.
Azure Stack requires RDMA networking – using Storage Spaces Direct in an Azure Stack fabric will require an RDMA-capable NIC.
Chelsio T5 iWARP RDMA for Disaster Recovery – All-round solution for mission-critical applications with Microsoft’s
Storage Replica and Chelsio T5 iWARP RDMA.
Chelsio RDMA technology - Ease of use with excellent performance
– a hands-on evaluation of SMB Direct using Chelsio T5 iWARP adapters.
Scale the Datacenter with Windows Server SMB Direct
– RDMA over Ethernet goes Mainstream.
Chelsio participated in the GPU Technology Conference, San Jose, April 4th - 7th, presenting high-performance CUDA clustering sessions and showcasing a live demo at the booth.
Why use a dumb NIC when you can have an Offload NIC?
2x10Gb Server Offload cards for $199: 256 offloaded connections of any kind (TOE, iSCSI, iWARP RDMA, FCoE) for the price of a stateless-offload NIC.
Latest Software and Drivers
- T5/T4 Unified Wire v184.108.40.206 for Linux
- T5/T4 Unified Wire v220.127.116.11 for Windows Server 2016 Technical Preview 5
- T5 Memory-free v18.104.22.168 for Linux
- T5 LIO Target Offload v22.214.171.124
- T5 Linux iSCSI Target v126.96.36.199 with SCST support
- T5 iWARP RDMA v188.8.131.52 for Hadoop
- T5 GPUDirect RDMA v184.108.40.206 for Linux
- T5/T4 Network Driver v1.10.11 for Mac OS X
- T5/T4 Network Driver for XenServer 6.5.0
- T5/T4 Native Network Driver v0.0.1.42 for ESXi6.0
Visit Chelsio's Download Center