**************************************** README ****************************************

                        Chelsio Unified Wire for Linux
                             Version : 4.1.0.7
                             Date    : 10/09/2025

Overview
================================================================================

Chelsio Unified Wire software for Linux is an easy to use utility developed to
provide installation of 64-bit Linux based drivers and tools for Chelsio's
Unified Wire adapters. The Chelsio Unified Wire package provides an interactive
installer to install various drivers and utilities.

Chelsio Unified Wire consists of the following components:

- Network (NIC/TOE)
- iWARP RDMA Offload
- NVMe-oF iWARP (Target and Initiator)
- SPDK NVMe-oF iWARP (Target and Initiator)
- NVMe-oF/TCP PDU Offload (Target and Initiator)
- SoftiWARP Initiator
- LIO iSCSI Target Offload/iSCSI PDU Offload Target
- iSCSI PDU Offload Initiator
- SPDK (user space) NVMe/TCP PDU Offload (Target and Host)
- SPDK (user space) iSCSI PDU Offload (Target and Initiator)
- Co-processor
- Inline IPSec Offload

================================================================================
CONTENTS
================================================================================

- 1. Supported Operating Systems
- 2. Supported Hardware
- 3. How To Use
- 4. Support Documentation
- 5. Customer Support

1. Supported Operating Systems
================================================================================

The Chelsio software has been developed to run on 64-bit Linux based platforms.
Following is the list of supported Linux distribution/Kernel(s):

- Kernel.org linux-6.12.X^

^ Kernel compiled on RHEL 9.6/9.5/9.4

2. Supported Hardware
================================================================================

Chelsio Drivers/Software and supported adapters
===============================================

|########################|#####################################################|
| Chelsio Adapter        | Driver/Software                                     |
|########################|#####################################################|
|T72200-FH               |NIC/TOE,iWARP,NVMe-oF iWARP,SPDK NVMe-oF iWARP,      |
|                        |SoftiWARP,LIO iSCSI Target,iSCSI Initiator,NVMe/TCP  |
|                        |PDU Offload Target, NVMe/TCP PDU Offload Initiator,  |
|                        |SPDK NVMe/TCP PDU Offload Target,SPDK NVMe/TCP PDU   |
|                        |Host,SPDK iSCSI PDU Offload Target,SPDK iSCSI PDU    |
|                        |Offload Initiator,Co-processor,Inline IPSec Offload  |
|------------------------|-----------------------------------------------------|

3. How to Use
================================================================================

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
                              Chelsio Unified Wire
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Pre-requisites
==============

- Package Manager yum/apt should be configured using any of the OS recommended
  ways to resolve and install missing packages.
- make, gcc, kernel-devel and kernel-headers packages should be installed for
  the compilation of drivers and utilities.
- python3 should be installed for the Chelsio Unified Wire package scripts to
  run. In case of older distributions which do not support python3, the source
  package CLI (make) should be used for the installation.

Installing Chelsio Unified Wire
===============================

The Chelsio Unified Wire package can be installed using source only. The source
package supports CLI.

|#############################|################################################|
|Configuration Tuning Option  | Driver/Software installed                      |
|#############################|################################################|
|Unified Wire (Default)       |NIC/TOE,iWARP,NVMe-oF iWARP,SPDK NVMe-oF iWARP, |
|                             |SoftiWARP,LIO iSCSI Target,iSCSI Initiator,     |
|                             |NVMe/TCP PDU Offload Target,NVMe/TCP PDU Offload|
|                             |Initiator,SPDK NVMe/TCP PDU Offload Target,     |
|                             |SPDK NVMe/TCP PDU Host,SPDK iSCSI PDU Offload   |
|                             |Target,SPDK iSCSI PDU Offload Initiator,        |
|                             |Co-processor,Inline IPSec Offload               |
|-----------------------------|------------------------------------------------|

Installation of Chelsio Unified Wire
------------------------------------

Follow the steps mentioned below for installation using CLI.

From source
-----------

1. Download the Chelsio Unified Wire driver package.

2. Extract the downloaded package.

   [root@host~]# tar zxvfm ChelsioUwire-x.x.x.x.tar.gz

3. Change your current working directory to the Chelsio Unified Wire package
   directory and build and install the drivers, tools and libraries from
   source.

   [root@host~]# cd ChelsioUwire-x.x.x.x
   [root@host~]# make install

4. Flash the updated init and vpd on the Chelsio T7 adapter.

   INIT/VPD flashing instructions
   ------------------------------
   Run the following command to flash INIT/VPD:

   # t7seeprom -flash -b <pci bus id> write -f:<path to seeprom.bin file>

   Example:
   t7seeprom -flash -b 11:00.0 write -f:/lib/firmware/cxgb4/config/t72200_fh_mem.bin

5. Update the firmware on the Chelsio T7 adapter. Run the following commands
   to update the firmware manually:

   [root@host~]# rmmod iw_cxgb4 cxgb4
   [root@host~]# modprobe cxgb4 fw_attach=0
   [root@host~]# cxgbtool ethX loadfw /lib/firmware/cxgb4/config/t7fwbootstrap-2.0.1.0.bin
   [root@host~]# cxgbtool ethX loadfw /lib/firmware/cxgb4/t7fw.bin

6. Reboot your machine for the changes to take effect.

   [root@host~]# reboot

7. Verify the firmware version after the host boots. The firmware (v2.0.0.80)
   is installed on the system, typically in /lib/firmware/cxgb4. The firmware
   version can be verified using:

   [root@host~]# ethtool -i ethX
   driver: cxgb4
   ...
   firmware-version: 2.0.0.80, TP 0.1.30.0
   ...

Mounting debugfs
----------------

All the driver debug data is stored in debugfs, which will be mounted in most
cases. If not, mount it manually.

[root@host~]# mount -t debugfs none /sys/kernel/debug

Configuring IPv6
----------------

The interfaces should come up with a link-local IPv6 address for a complete
and fully functional IPv6 configuration. Update the interface network-script
with ONBOOT="yes".

Uninstallation of Chelsio Unified Wire
======================================

You can uninstall the Chelsio Unified Wire package using source. Follow the
steps mentioned below for uninstallation using CLI.

From source
-----------

1. Change your current working directory to the Chelsio Unified Wire package
   directory.

   [root@host~]# cd ChelsioUwire-x.x.x.x

2. Uninstall the source.

   [root@host~]# make uninstall

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
                               Network (NIC/TOE)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Driver Loading
==============

IMPORTANT: Ensure that all inbox drivers are unloaded before proceeding with
the unified wire drivers.

[root@host~]# rmmod csiostor cxgb4i cxgbit iw_cxgb4 chcr cxgb4vf cxgb4 libcxgbi libcxgb

The driver must be loaded by the root user. Any attempt to load the driver as
a regular user may fail.

- To load the driver in NIC mode (without offload support),

  [root@host~]# modprobe cxgb4

- To load the driver in TOE mode (with offload support),

  [root@host~]# modprobe t4_tom

NOTE: Offload support needs to be enabled upon each reboot of the system. This
can be done manually as shown above.

In a VMDirect Path environment, it is recommended to load the offload driver
using the following command:

[root@host~]# modprobe t4_tom vmdirectio=1

Enabling TCP Offload
====================

Load the offload drivers and bring up the Chelsio interface.

[root@host~]# modprobe t4_tom
[root@host~]# ifconfig ethX up

All TCP traffic will now be offloaded over the Chelsio interface. To see the
number of connections offloaded,

[root@host~]# cat /sys/kernel/debug/cxgb4/<pci bus id>/tids

Enabling Busy Waiting
=====================

Busy waiting/polling is a technique where a process repeatedly checks to see
if an event has occurred, by spinning in a tight loop. Using a similar
technique, the Linux kernel provides the ability for the socket layer code to
poll directly on an Ethernet device's Rx queue. This eliminates the cost of
interrupts and context switching, and with proper tuning allows latency
performance close to that of the hardware.

Chelsio's NIC and TOE drivers support this feature, and it can be enabled on
Chelsio supported devices to attain improved latency.

To make use of the BUSY_POLL feature, follow the steps mentioned below:

1. Enable BUSY_POLL support in the kernel config file by setting
   "CONFIG_NET_RX_BUSY_POLL=y".

2. Enable BUSY_POLL globally in the system by setting the values of the
   following sysctl parameters depending on the number of connections:

   sysctl -w net.core.busy_read=<value>
   sysctl -w net.core.busy_poll=<value>

   Set the values of the above parameters to 50 for 100 or fewer connections,
   and 100 for more than 100 connections.

NOTE: BUSY_POLL can also be enabled on a per-connection basis by making use of
the SO_BUSY_POLL socket option in the socket application code. Refer to the
socket man-page for further details.

Precision Time Protocol (PTP)
=============================

IMPORTANT: This feature is not supported on RHEL 6.X platforms.

The ptp4l tool (installed during Unified Wire installation) is used to
synchronise clocks.

1. Load the network driver on all master and slave nodes.

   [root@host~]# modprobe cxgb4

2. Assign IP addresses and ensure that the master and slave nodes are
   connected.

3. Start the ptp4l tool on the master using the Chelsio interface.

   [root@host~]# ptp4l -i <interface> -H -m

4. Start the tool on the slave nodes.

   [root@host~]# ptp4l -i <interface> -H -m -s

   NOTE: To view the complete list of available options, refer to the ptp4l
   help manual.

5. Synchronize the system clock to a PTP hardware clock (PHC) on the slave
   nodes.

   [root@host~]# phc2sys -s <interface> -c CLOCK_REALTIME -w -m

Performance Tuning
==================

TOE
---

1. Run the performance tuning script to update a few kernel parameters using
   sysctl and map TOE queues to different CPUs.

   [root@host~]# t4_perftune.sh -n -s -Q ofld

2. Disable Rx Coalesce and DDP using the following steps (see the sample
   sequence after this list):

   a. Create a COP policy.

      [root@host~]# cat <policy_file>
      all => offload !ddp !coalesce

   b. Compile the policy.

      [root@host~]# cop -d -o <output_policy_file> <policy_file>

   c. Apply the policy.

      [root@host~]# cxgbtool ethX policy <output_policy_file>
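
   For example, assuming the Chelsio interface is eth2 and using the
   hypothetical file names toe_policy.txt and toe_policy.cop (both names are
   illustrative only), the complete sequence would look like this:

      [root@host~]# cat > toe_policy.txt << 'EOF'
      all => offload !ddp !coalesce
      EOF
      [root@host~]# cop -d -o toe_policy.cop toe_policy.txt
      [root@host~]# cxgbtool eth2 policy toe_policy.cop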

NIC
---

1. Run the performance tuning script to update a few kernel parameters using
   sysctl and map NIC queues to different CPUs.

   [root@host~]# t4_perftune.sh -n -s -Q nic

2. Enable adaptive-rx.

   [root@host~]# ethtool -C enp2s0f4 adaptive-rx on

Driver Unloading
================

- To unload the driver in NIC mode (without offload support),

  [root@host~]# rmmod cxgb4

- A reboot is required to unload the driver in TOE mode (with offload
  support). To avoid rebooting, follow the steps mentioned below:

  1. Load the t4_tom driver with the unsupported_allow_unload parameter.

     [root@host~]# modprobe t4_tom unsupported_allow_unload=1

  2. Stop all the offloaded traffic, servers and connections. Check the
     reference count.

     [root@host~]# cat /sys/module/t4_tom/refcnt

     If the reference count is 0, the driver can be directly unloaded. Skip to
     step 3.

     If the count is non-zero, load a COP policy which disables offload using
     the following procedure:

     i.  Create a policy file which disables offload.

         [root@host~]# cat policy_file
         all => !offload

     ii. Compile and apply the output policy file.

         [root@host~]# cop -o no-offload.cop policy_file
         [root@host~]# cxgbtool ethX policy no-offload.cop

  3. Unload the driver.

     [root@host~]# rmmod t4_tom
     [root@host~]# rmmod toecore
     [root@host~]# rmmod cxgb4

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
                               iWARP RDMA Offload
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Pre-requisites
==============

Ensure that the following requirements are met before driver installation:

- Uninstall any OFED present in the machine.
- The rdma-core-devel package should be installed on RHEL/Rocky Linux 9.X
  systems.

Driver Loading
==============

IMPORTANT: Ensure that all inbox drivers are unloaded before proceeding with
the unified wire drivers.

[root@host~]# rmmod csiostor cxgb4i cxgbit iw_cxgb4 chcr cxgb4vf cxgb4 libcxgbi libcxgb

The driver must be loaded by the root user. Any attempt to load the driver as
a regular user may fail.

To load the iWARP driver, load the NIC and core RDMA drivers first. Run the
following commands:

[root@host~]# modprobe cxgb4
[root@host~]# modprobe iw_cxgb4
[root@host~]# modprobe rdma_ucm

Optionally, you can start the iWARP Port Mapper daemon to enable port mapping.

[root@host~]# iwpmd

Performance Tuning
==================

Run the performance tuning script to update a few kernel parameters using
sysctl and map iWARP queues to different CPUs.

[root@host~]# t4_perftune.sh -Q rdma -n -s

Driver Unloading
================

To unload the iWARP driver, run the following command:

[root@host~]# rmmod iw_cxgb4

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
                      NVMe-oF iWARP (Target and Initiator)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Pre-requisites
==============

Ensure that the following requirements are met before driver installation:

- Uninstall any OFED present in the machine.
- The rdma-core-devel package should be installed on RHEL/Rocky Linux 9.X
  systems.

Kernel Configuration
====================

Kernel.org linux-6.12.X
-----------------------

1. Ensure that the following NVMe-oF iWARP components are enabled in the
   kernel configuration file:

   CONFIG_BLK_DEV_NVME
   CONFIG_NVME_RDMA
   CONFIG_NVME_TARGET
   CONFIG_NVME_TARGET_RDMA
   CONFIG_BLK_DEV_NULL_BLK
   CONFIG_CONFIGFS_FS

2. If the NVMe-oF iWARP components are not enabled, enable them as follows:

   CONFIG_BLK_DEV_NVME=m
   CONFIG_NVME_RDMA=m
   CONFIG_NVME_TARGET=m
   CONFIG_NVME_TARGET_RDMA=m
   CONFIG_BLK_DEV_NULL_BLK=m
   CONFIG_CONFIGFS_FS=y

3. Compile and install the kernel, then boot into the new kernel and install
   the Chelsio Unified Wire.
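
   Before rebuilding, you can optionally check whether the running kernel
   already provides these options (this assumes the distribution installs its
   kernel config as /boot/config-$(uname -r), which is common but not
   universal):

   [root@host~]# grep -E "CONFIG_NVME_RDMA|CONFIG_NVME_TARGET_RDMA|CONFIG_BLK_DEV_NULL_BLK" /boot/config-$(uname -r)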

Driver Loading
==============

IMPORTANT: Ensure that all inbox drivers are unloaded before proceeding with
the unified wire drivers.

[root@host~]# rmmod csiostor cxgb4i cxgbit iw_cxgb4 chcr cxgb4vf cxgb4 libcxgbi libcxgb

Follow the steps mentioned below on both target and initiator machines:

1. Load the following drivers:

   [root@host~]# modprobe iw_cxgb4
   [root@host~]# modprobe rdma_ucm

2. Bring up the Chelsio interface(s).

   [root@host~]# ifconfig ethX x.x.x.x up

3. Mount configfs.

   [root@host~]# mount -t configfs none /sys/kernel/config

4. On the target, load the following drivers:

   [root@host~]# modprobe null_blk
   [root@host~]# modprobe nvmet
   [root@host~]# modprobe nvmet-rdma

   On the initiator, load the following drivers:

   [root@host~]# modprobe nvme
   [root@host~]# modprobe nvme-rdma

Configuration
=============

Target
------

1. The following commands configure the target using nvmetcli with a LUN:

   [root@host~]# nvmetcli
   /> cd subsystems
   /subsystems> create nvme-ram0
   /subsystems> cd nvme-ram0/namespaces
   /subsystems/n...m0/namespaces> create nsid=1
   /subsystems/n...m0/namespaces> cd 1
   /subsystems/n.../namespaces/1> set device path=/dev/ram1
   /subsystems/n.../namespaces/1> cd ../..
   /subsystems/nvme-ram0> set attr allow_any_host=1
   /subsystems/nvme-ram0> cd namespaces/1
   /subsystems/n.../namespaces/1> enable
   /subsystems/n.../namespaces/1> cd ../../../..
   /> cd ports
   /ports> create 1
   /ports> cd 1/
   /ports/1> set addr adrfam=ipv4
   /ports/1> set addr trtype=rdma
   /ports/1> set addr trsvcid=4420
   /ports/1> set addr traddr=102.1.1.102
   /ports/1> cd subsystems
   /ports/1/subsystems> create nvme-ram0

2. Save the target configuration to a file.

   /ports/1/subsystems> saveconfig /root/nvme-target_setup
   /ports/1/subsystems> exit

3. To clear the targets,

   [root@host~]# nvmetcli clear

Initiator
---------

1. Discover the target.

   [root@host~]# nvme discover -t rdma -a <target IP> -s 4420

2. Connect to the target.

   Connecting to a specific target:

   [root@host~]# nvme connect -t rdma -a <target IP> -s 4420 -n <target nqn>

   Connecting to all targets configured on a portal:

   [root@host~]# nvme connect-all -t rdma -a <target IP> -s 4420

3. List the connected targets.

   [root@host~]# nvme list

4. Format and mount the NVMe disks shown by the above command.

5. Disconnect from the target and unmount the disk.

   [root@host~]# nvme disconnect -d <nvme_disk_name>

   NOTE: nvme_disk_name is the name of the device (Ex: nvme0n1) and not the
   device path.
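
Putting the initiator steps together with the sample target configured above
(traddr 102.1.1.102, subsystem nvme-ram0; both values come from the sample
target configuration and should be replaced with your own), the sequence would
look like this:

[root@host~]# nvme discover -t rdma -a 102.1.1.102 -s 4420
[root@host~]# nvme connect -t rdma -a 102.1.1.102 -s 4420 -n nvme-ram0
[root@host~]# nvme list
[root@host~]# nvme disconnect -d nvme0n1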

Performance Tuning
------------------

1. Ensure that Unified Wire is installed with the NVMe Performance
   configuration tuning.

2. Run the performance tuning script to update a few kernel parameters using
   sysctl and map iWARP queues to different CPUs.

   [root@host~]# t4_perftune.sh -n -Q rdma -s

3. Set the inline data size to 8192 before enabling the NVMe port.

   [root@host~]# mkdir /sys/kernel/config/nvmet/ports/1
   [root@host~]# echo 8192 > /sys/kernel/config/nvmet/ports/1/param_inline_data_size

Driver Unloading
================

Follow the steps mentioned below to unload the drivers:

On the target, run the following commands:

[root@host~]# rmmod nvmet-rdma
[root@host~]# rmmod nvmet
[root@host~]# rmmod iw_cxgb4

On the initiator, run the following commands:

[root@host~]# rmmod nvme-rdma
[root@host~]# rmmod nvme
[root@host~]# rmmod iw_cxgb4

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
                 NVMe-oF/TCP PDU Offload (Target and Initiator)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Kernel Configuration
====================

Kernel.org linux-6.12.X
-----------------------

1. Ensure that the following NVMe-oF/TCP components are enabled in the kernel
   configuration file:

   CONFIG_NVME_CORE
   CONFIG_NVME_FABRICS
   CONFIG_NVME_TCP
   CONFIG_NVME_TARGET
   CONFIG_NVME_TARGET_TCP
   CONFIG_BLK_DEV_NVME
   CONFIG_BLK_DEV_NULL_BLK
   CONFIG_CONFIGFS_FS

2. If the NVMe-oF/TCP components are not enabled, enable them as follows:

   CONFIG_NVME_CORE=m
   CONFIG_NVME_FABRICS=m
   CONFIG_NVME_TCP=m
   CONFIG_NVME_TARGET=m
   CONFIG_NVME_TARGET_TCP=m
   CONFIG_BLK_DEV_NVME=m
   CONFIG_BLK_DEV_NULL_BLK=m
   CONFIG_CONFIGFS_FS=y

3. Compile and install the kernel, then boot into the new kernel and install
   the Chelsio Unified Wire.

Driver Loading
==============

IMPORTANT: Ensure that all inbox drivers are unloaded before proceeding with
the unified wire drivers.

[root@host~]# rmmod csiostor cxgb4i cxgbit iw_cxgb4 chcr cxgb4vf cxgb4 libcxgbi libcxgb

Follow the steps mentioned below on both target and initiator machines:

1. Unload the existing drivers.

   [root@host~]# rmmod csiostor cxgb4i cxgbit iw_cxgb4 chcr cxgb4vf cxgb4 libcxgbi libcxgb

2. To load the NIC driver (cxgb4):

   [root@host~]# modprobe cxgb4

3. To load the NVMe-oF/TCP PDU Offload Target driver (cnvmet-tcp):

   [root@host~]# rmmod nvmet-tcp nvmet nvme
   [root@host~]# modprobe cnvmet-tcp

4. To load the NVMe-oF/TCP PDU Offload Host driver (cnvme-tcp):

   [root@host~]# rmmod nvme-tcp nvme-fabrics nvme-core
   [root@host~]# modprobe cnvme-tcp

Configuration
=============

Target
------

1. The following commands configure the target using nvmetcli with a LUN:

   [root@host~]# nvmetcli
   /> cd subsystems
   /subsystems> create nvme-ram0
   /subsystems> cd nvme-ram0/namespaces
   /subsystems/n...m0/namespaces> create nsid=1
   /subsystems/n...m0/namespaces> cd 1
   /subsystems/n.../namespaces/1> set device path=/dev/ram1
   /subsystems/n.../namespaces/1> cd ../..
   /subsystems/nvme-ram0> set attr allow_any_host=1
   /subsystems/nvme-ram0> cd namespaces/1
   /subsystems/n.../namespaces/1> enable
   /subsystems/n.../namespaces/1> cd ../../../..
   /> cd ports
   /ports> create 1
   /ports> cd 1/
   /ports/1> set addr adrfam=ipv4
   /ports/1> set addr trtype=tcp
   /ports/1> set addr trsvcid=4420
   /ports/1> set addr traddr=102.1.1.102
   /ports/1> cd subsystems
   /ports/1/subsystems> create nvme-ram0

2. Save the target configuration to a file.

   /ports/1/subsystems> saveconfig /root/nvme-target_setup
   /ports/1/subsystems> exit

3. To clear the targets,

   [root@host~]# nvmetcli clear

Initiator
---------

1. Discover the target.

   [root@host~]# nvme discover -t tcp -a <target IP> -s 4420

2. Connect to the target.

   Connecting to a specific target:

   [root@host~]# nvme connect -t tcp -a <target IP> -s 4420 -n <target nqn>

   Connecting to all targets configured on a portal:

   [root@host~]# nvme connect-all -t tcp -a <target IP> -s 4420

3. List the connected targets.

   [root@host~]# nvme list

4. Format and mount the NVMe disks shown by the above command.

5. Disconnect from the target and unmount the disk.

   [root@host~]# nvme disconnect -d <nvme_disk_name>

   NOTE: nvme_disk_name is the name of the device (Ex: nvme0n1) and not the
   device path.

NOTE: The total number of connections depends on the devices used and the I/O
queues. For example, if the initiator connects to 2 target devices with 4 I/O
queues per device (-i 4), a total of 10 NVMe-oF TOE connections are used.
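
For reference, the per-device I/O queue count mentioned in the note above is
controlled by nvme-cli's -i (--nr-io-queues) option at connect time. The
address and subsystem name below are illustrative only:

[root@host~]# nvme connect -t tcp -a 102.1.1.102 -s 4420 -n nvme-ram0 -i 4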

Driver Unloading
================

Follow the steps mentioned below to unload the drivers:

On the target, run the following command:

[root@host~]# rmmod cnvmet-tcp

On the initiator, run the following command:

[root@host~]# rmmod cnvme-tcp

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
                    SPDK NVMe-oF iWARP (Target and Initiator)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Pre-requisites
==============

Ensure that the following requirements are met before driver installation:

- Uninstall any OFED present in the machine.
- The rdma-core-devel package should be installed on RHEL/Rocky Linux 9.X
  systems.

Kernel Configuration
====================

Kernel.org linux-6.12.X
-----------------------

1. Ensure that the following SPDK NVMe-oF iWARP components are enabled in the
   kernel configuration file:

   CONFIG_BLK_DEV_NVME
   CONFIG_NVME_RDMA
   CONFIG_NVME_TARGET
   CONFIG_NVME_TARGET_RDMA
   CONFIG_BLK_DEV_NULL_BLK
   CONFIG_CONFIGFS_FS

2. If the SPDK NVMe-oF iWARP components are not enabled, enable them as
   follows:

   CONFIG_BLK_DEV_NVME=m
   CONFIG_NVME_RDMA=m
   CONFIG_NVME_TARGET=m
   CONFIG_NVME_TARGET_RDMA=m
   CONFIG_BLK_DEV_NULL_BLK=m
   CONFIG_CONFIGFS_FS=y

3. Compile and install the kernel, then boot into the new kernel and install
   the Chelsio Unified Wire.

RHEL/Rocky Linux 9.X
--------------------

No additional kernel configuration is required.

Driver Loading
==============

IMPORTANT: Ensure that all inbox drivers are unloaded before proceeding with
the unified wire drivers.

[root@host~]# rmmod csiostor cxgb4i cxgbit iw_cxgb4 chcr cxgb4vf cxgb4 libcxgbi libcxgb

Follow the steps mentioned below on both target and initiator machines:

1. Load the iWARP RDMA Offload drivers:

   [root@host~]# modprobe iw_cxgb4
   [root@host~]# modprobe rdma_ucm

2. Bring up the Chelsio interface(s).

   [root@host~]# ifconfig ethX x.x.x.x up

Configuration
=============

Target
------

1. Download SPDK v24.05 LTS.

   [root@host~]# git clone https://github.com/spdk/spdk
   [root@host~]# cd spdk
   [root@host~]# git checkout v24.05
   [root@host~]# git submodule update --init

   Change the below in the CONFIG file:

   CONFIG_FIO_PLUGIN=y
   FIO_SOURCE_DIR=<path to fio source>
   CONFIG_RDMA=y
   CONFIG_RDMA_SEND_WITH_INVAL=y

2. Run the below script to check that the minimum SPDK dependencies are
   installed.

   [root@host~]# cd spdk
   [root@host~]# sh scripts/pkgdep.sh

3. Compile SPDK with RDMA and install it.

   [root@host~]# make clean ; ./configure --with-rdma; make; make install

4. Configure Huge Pages.

   [root@host~]# mkdir -p /mnt/huge
   [root@host~]# echo 8192 > /proc/sys/vm/nr_hugepages
   [root@host~]# echo 0 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
   [root@host~]# echo 8192 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
   [root@host~]# vim /etc/fstab
   nodev /dev/hugepages hugetlbfs pagesize=2MB 0 0
   nodev /mnt/huge hugetlbfs pagesize=1GB 0 0
   [root@host~]# mount -a
   [root@host~]# cd spdk
   [root@host~]# NRHUGE=8192 scripts/setup.sh

5. Start the SPDK NVMe-oF iWARP target.

   [root@host~]# spdk/build/bin/nvmf_tgt -m 0xFF --wait-for-rpc &
   [root@host~]# spdk/scripts/rpc.py iobuf_set_options --large-pool-count 8192
   [root@host~]# spdk/scripts/rpc.py framework_start_init

6. Below are the sample configuration steps to create a malloc LUN.

   [root@host~]# spdk/scripts/rpc.py nvmf_create_transport -t RDMA -c 4096 -u 131072 -n 8192 -b 700
   [root@host~]# spdk/scripts/rpc.py bdev_malloc_create -b Malloc0 700 512
   [root@host~]# spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 -d SPDK_Controller0
   [root@host~]# spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
   [root@host~]# spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 10.1.1.163 -s 4420

Initiator
---------

The SPDK NVMe-oF iWARP target works seamlessly with the SPDK NVMe-oF iWARP
initiator or any standard Linux kernel initiator. Refer to the NVMe-oF iWARP
Initiator section for steps to use the Linux kernel initiator.

To use the SPDK NVMe-oF iWARP Initiator,

1. Follow steps 1 through 4 of the SPDK Target section above to configure and
   install SPDK.

2. Connect to the target using the fio plugin.

   [root@host~]# LD_PRELOAD=/root/spdk/build/fio/spdk_nvme fio --rw=randread/randwrite \
     --name=random --norandommap=1 --ioengine=/root/spdk/build/fio/spdk_nvme --thread=1 \
     --size=400m --group_reporting --exitall --invalidate=1 --direct=1 \
     --filename='trtype=RDMA adrfam=IPv4 traddr=10.1.1.163 trsvcid=4420 subnqn=nqn.2016-06.io.spdk\:cnode0 ns=1' \
     --time_based --runtime=20 --iodepth=64 --numjobs=4 --unit_base=1 --bs=<block size> \
     --kb_base=1000 --ramp_time=3

Performance Tuning
------------------

1. Ensure that Unified Wire is installed with the NVMe Performance
   configuration tuning.

2. Run the performance tuning script to update a few kernel parameters using
   sysctl and map iWARP queues to different CPUs.

   [root@host~]# t4_perftune.sh -n -Q rdma -s

Driver Unloading
================

Follow the steps mentioned below to unload the drivers:

[root@host~]# rmmod iw_cxgb4

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
                              SoftiWARP Initiator
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Kernel Configuration
====================

Kernel.org linux-6.12.X
-----------------------

1. Ensure that the following SoftiWARP component is enabled in the kernel
   configuration file:

   CONFIG_RDMA_SIW

2. If the SoftiWARP component is not enabled, enable it as follows:

   CONFIG_RDMA_SIW=m

3. Compile and install the kernel, then boot into the new kernel and install
   the Chelsio Unified Wire.

Driver Loading
==============

IMPORTANT: Ensure that all inbox drivers are unloaded before proceeding with
the unified wire drivers.

[root@host~]# rmmod csiostor cxgb4i cxgbit iw_cxgb4 chcr cxgb4vf cxgb4 libcxgbi libcxgb

Follow the steps mentioned below on the Initiator/Client machine:

1. Load the network driver (cxgb4).

   [root@host~]# modprobe cxgb4

2. Load the SoftiWARP driver (siw).

   [root@host~]# modprobe siw

3. Unload the iWARP RDMA offload driver (iw_cxgb4).

   [root@host~]# rmmod iw_cxgb4

Configuration
=============

Initiator/Client
----------------

IMPORTANT: Disable the iWARP Port Mapper (iwpmd) service on the Target and
Initiator.

[root@host~]# systemctl stop iwpmd

1. The RDMA tool (rdma) is used to configure the siw device. It is installed
   by default on RHEL/Rocky Linux 9.6/9.5/9.4 distributions. If it is missing
   from the machine, install it from the latest iproute2 package
   (https://git.kernel.org/pub/scm/network/iproute2/iproute2.git).

2. Configure the siw device.

   [root@host~]# rdma link add <siw device name> type siw netdev <ethX>
   [root@host~]# ifconfig ethX up

3. Verify the configuration using ibv_devices.

4. The initiator/client can now connect to the target/server machines. Refer
   to the NVMe-oF iWARP initiator and iSER initiator sections for steps to
   connect to the respective targets.
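
For instance, assuming the Chelsio interface is eth1 and the soft-iWARP device
is to be named siw0 (both names are illustrative), the configuration and
verification steps above would look like this:

[root@host~]# rdma link add siw0 type siw netdev eth1
[root@host~]# ifconfig eth1 up
[root@host~]# ibv_devices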

Driver Unloading
================

Follow the below steps to unload the SoftiWARP and network drivers:

[root@host~]# rmmod siw
[root@host~]# rmmod cxgb4

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
               LIO iSCSI Target Offload/iSCSI PDU Offload Target
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Kernel Configuration
====================

Kernel.org linux-6.12.X
-----------------------

Follow the below steps to use a 6.12.X kernel version:

1. Download the kernel from kernel.org.
2. Extract the tar-ball.
3. Change your working directory to the kernel directory and invoke the
   installation menu.

   [root@host~]# make menuconfig

4. Select Device Drivers > Generic Target Core Mod (TCM) and ConfigFS
   Infrastructure.
5. Enable Linux-iSCSI.org iSCSI Target Mode Stack as a Module (if not already
   enabled).
6. Select Save.
7. Exit from the installation menu.
8. Continue with kernel installation as usual.
9. Boot into the new kernel and proceed with driver installation as directed
   in the "Driver Installation" section.

Pre-requisites
==============

Ensure that the following requirements are met before driver installation:

- targetcli is automatically installed by the Chelsio Unified Wire installer
  using the package manager yum/apt, if missing from the system. If you wish
  to use a different version, it is highly recommended to install v2.1.fb44 or
  a higher version.

Driver Loading
==============

IMPORTANT: Ensure that all inbox drivers are unloaded before proceeding with
the unified wire drivers.

[root@host~]# rmmod csiostor cxgb4i cxgbit iw_cxgb4 chcr cxgb4vf cxgb4 libcxgbi libcxgb

The driver must be loaded by the root user. Any attempt to load the driver as
a regular user may fail.

1. Load the network driver (cxgb4).

   [root@host~]# modprobe cxgb4

2. Bring up the interface.

   [root@host~]# ifconfig ethX up

3. Load the LIO iSCSI Target Offload driver (cxgbit).

   [root@host~]# modprobe cxgbit

Driver Configuration
====================

Configuring LIO iSCSI Target
----------------------------

The LIO iSCSI Target needs to be configured before it can become useful. Refer
to the targetcli man page using the "man targetcli" command to do so.
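
As a hedged illustration only (the backstore name, IQN, portal address and
attribute values below are examples, not Chelsio defaults), a minimal
ramdisk-backed target could be created as follows; the targetcli man page
remains the authoritative reference:

[root@host~]# targetcli
/> backstores/ramdisk create name=ram1 size=1GB
/> iscsi/ create iqn.2025-10.com.example:target1
/> iscsi/iqn.2025-10.com.example:target1/tpg1/luns create /backstores/ramdisk/ram1
/> iscsi/iqn.2025-10.com.example:target1/tpg1/portals create 10.1.1.10 3260
/> iscsi/iqn.2025-10.com.example:target1/tpg1 set attribute authentication=0 generate_node_acls=1 demo_mode_write_protect=0
/> saveconfig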

Offloading LIO iSCSI Connection
-------------------------------

To offload the LIO iSCSI Target, use the following command:

[root@host~]# targetcli /iscsi/<target iqn>/tpg1/portals/<portal IP>\:3260 enable_offload boolean=True

Execute the above command for every portal address listening on the Chelsio
interface.

Running LIO iSCSI and Network Traffic Concurrently
--------------------------------------------------

If you wish to run network traffic with offload support (TOE) and LIO iSCSI
traffic together, follow the steps mentioned below:

1. If not done already, load the network driver with offload support (TOE).

   [root@host~]# modprobe t4_tom

2. Create a new policy file.

   [root@host~]# cat <policy_file>

3. Add the following lines to offload all traffic except LIO iSCSI.

   listen && src port <iSCSI portal port> && src host <portal IP> => !offload
   all => offload

4. Compile the policy.

   [root@host~]# cop -d -o <output_policy_file> <policy_file>

5. Apply the policy.

   [root@host~]# cxgbtool ethX policy <output_policy_file>

NOTE: The policy applied using cxgbtool is not persistent and should be applied
every time the drivers are reloaded or the machine is rebooted. The applied cop
policies can be read using,

[root@host~]# cat /proc/net/offload/toeX/read-cop

Performance Tuning
==================

1. Run the performance tuning script to update a few kernel parameters using
   sysctl and map LIO Target queues to different CPUs.

   [root@host~]# t4_perftune.sh -Q iSCSIT -n -s

2. For maximum performance, it is recommended to use the iSCSI PDU Offload
   Initiator.

   - For MTU 9000, no additional configuration is needed.
   - For MTU 1500, set InitialR2T to No using:

     [root@host~]# targetcli /iscsi/<target iqn>/tpg1/ set parameter InitialR2T=No

Driver Unloading
================

Unloading LIO iSCSI Target Offload driver
-----------------------------------------

To unload the LIO iSCSI Target Offload driver, follow the steps mentioned
below:

1. Log out from the initiator.

2. Run the following command:

   [root@host~]# targetcli /iscsi/<target iqn>/tpg1/portals/<portal IP>\:3260 enable_offload boolean=False

   Execute the above command for every portal address listening on the Chelsio
   interface.

3. Unload the driver.

   [root@host~]# rmmod cxgbit

Unloading Network driver
------------------------

- To unload the driver in NIC mode (without offload support),

  [root@host~]# rmmod cxgb4

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
                          iSCSI PDU Offload Initiator
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Pre-requisites
==============

Ensure that the following requirements are met before driver installation:

- The iSCSI PDU Offload Initiator driver (cxgb4i) runs on top of the NIC
  driver (cxgb4) and open-iscsi version greater than 2.0-872 on a Chelsio
  card.
- The openssl-devel package should be installed.

Driver Loading
==============

IMPORTANT: Ensure that all inbox drivers are unloaded before proceeding with
the unified wire drivers.

[root@host~]# rmmod csiostor cxgb4i cxgbit iw_cxgb4 chcr cxgb4vf cxgb4 libcxgbi libcxgb

The driver must be loaded by the root user. Any attempt to load the driver as
a regular user may fail.

Load the cxgb4i driver using the following command:

[root@host~]# modprobe cxgb4i

The cxgb4i module registers a new transport class "cxgb4i".

If loading cxgb4i displays an "unknown symbol" error in dmesg, follow the
steps mentioned below:

1. View all the loaded iSCSI modules.

   [root@host~]# lsmod | grep iscsi

2. Now, unload them using the following command:

   [root@host~]# rmmod <module name>

3. Finally, reload the cxgb4i driver.

Accelerating open-iSCSI Initiator
=================================

The following steps need to be taken to accelerate the open-iSCSI initiator:

I. Configuring interface (iface) file
-------------------------------------

Create the file automatically by loading the cxgb4i driver and then executing
the following command:

[root@host~]# iscsiadm -m iface

Alternatively, you can create an interface file located under the iface
directory for the new transport class cxgb4i in the following format:

iface.iscsi_ifacename = <iface file name>
iface.hwaddress = <MAC address>
iface.transport_name = cxgb4i
iface.net_ifacename = <ethX>
iface.ipaddress = <IP address>

Here,
iface.iscsi_ifacename : Interface file in /etc/iscsi/ifaces/
iface.hwaddress       : MAC address of the Chelsio interface via which iSCSI
                        traffic will be running.
iface.transport_name  : Transport name, which is cxgb4i.
iface.net_ifacename   : Chelsio interface via which iSCSI traffic will be
                        running.
iface.ipaddress       : IP address which is assigned to the interface.
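
For example, a hypothetical iface file /etc/iscsi/ifaces/cxgb4i.eth1 for a
Chelsio interface eth1 (the MAC and IP addresses below are illustrative) could
look like this:

[root@host~]# cat /etc/iscsi/ifaces/cxgb4i.eth1
iface.iscsi_ifacename = cxgb4i.eth1
iface.hwaddress = 00:07:43:aa:bb:cc
iface.transport_name = cxgb4i
iface.net_ifacename = eth1
iface.ipaddress = 10.1.1.20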

II. Discovery and Login
-----------------------

1. Start the daemon from /sbin.

   [root@host~]# iscsid

   NOTE: If iscsid is already running, kill the service and start it as shown
   above after installing the Chelsio Unified Wire package.

2. Discover the iSCSI target.

   [root@host~]# iscsiadm -m discovery -t st -p <target IP>:<port> -I <iface file name>

3. Log into an iSCSI target.

   [root@host~]# iscsiadm -m node -T <target iqn> -p <target IP>:<port> -I <iface file name> -l

   If the login fails with an error message in the format of
   "ERR! MaxRecvDataSegmentLength too big. Need to be <= <value>." in dmesg,
   edit the /etc/iscsi/iscsid.conf file and change the setting for
   MaxRecvDataSegmentLength:

   node.conn[0].iscsi.MaxRecvDataSegmentLength = 8192

   IMPORTANT: Always take a backup of the iscsid.conf file before installing
   the Chelsio Unified Wire package. Although the file is saved to
   iscsid.rpmsave after uninstalling the package using RPM, you are still
   advised to take a backup.

4. Log out from an iSCSI target.

   [root@host~]# iscsiadm -m node -T <target iqn> -p <target IP>:<port> -I <iface file name> -u

NOTE: Other options can be found by typing iscsiadm --help.

Performance Tuning
==================

If iSCSI Initiator IRQs pose a bottleneck for multiple connections, you can
improve IOPS performance using the steps mentioned below.

1. Enable iSCSI multi-queue. In 3.18+ kernels, add the below entry to the grub
   configuration file and reboot the machine:

   scsi_mod.use_blk_mq=1

2. Run the performance tuning script to update a few kernel parameters using
   sysctl and map iSCSI Initiator queues to different CPUs.

   [root@host~]# t4_perftune.sh -Q iSCSI -n -s

3. Load the initiator driver.

   [root@host~]# modprobe cxgb4i

Driver Unloading
================

[root@host~]# rmmod cxgb4i
[root@host~]# rmmod libcxgbi

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
           SPDK iSCSI, NVMe/TCP PDU Offload (Target and Initiator)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Driver Loading
==============

1. Unload the existing drivers.

   [root@host~]# rmmod csiostor cxgb4i cxgbit iw_cxgb4 chcr cxgb4vf cxgb4 libcxgbi libcxgb

2. To load the NIC driver (cxgb4),

   [root@host~]# modprobe cxgb4

3. To load the User Space NVMe/TCP PDU Offload driver (cstor),

   [root@host~]# modprobe cstor

   The library (libcstor.so) will be installed by the package at /lib64/.

Configuration
=============

Configuring SPDK iSCSI PDU Offload target/initiator
---------------------------------------------------

Follow the steps mentioned below on the SPDK iSCSI PDU Offload target/
initiator machines:

1. SPDK v24.05, customized to support iSCSI PDU Offload and kernel bypass for
   the SPDK iSCSI Target and Initiator, is part of the Chelsio Unified Wire
   package. Change your current working directory to the Chelsio SPDK
   directory.

   [root@host~]# cd ChelsioUwire-4.1.0.7/build/src/chspdk/spdk-24.05

2. Configure Huge Pages.

   [root@host~]# mkdir -p /mnt/huge
   [root@host~]# echo 8192 > /proc/sys/vm/nr_hugepages
   [root@host~]# echo 0 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
   [root@host~]# echo 8192 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
   [root@host~]# vim /etc/fstab
   nodev /dev/hugepages hugetlbfs pagesize=2MB 0 0
   nodev /mnt/huge hugetlbfs pagesize=1GB 0 0
   [root@host~]# mount -a
   [root@host~]# NRHUGE=8192 scripts/setup.sh

3. Start the reactor.

   [root@host spdk]# ./build/bin/iscsi_tgt -m <core mask>

SPDK iSCSI PDU Offload Target
-----------------------------

Below are the sample configuration steps to create a LUN with a Malloc bdev of
256MB.

SPDK_PATH=$'ChelsioUwire-4.1.0.7/build/src/chspdk/spdk-24.05'
$SPDK_PATH/scripts/rpc.py bdev_malloc_create -b Malloc1 256 512
$SPDK_PATH/scripts/rpc.py iscsi_create_portal_group 1 102.11.11.45:3260
$SPDK_PATH/scripts/rpc.py iscsi_create_initiator_group 2 ANY
$SPDK_PATH/scripts/rpc.py iscsi_create_target_node disk1 "Data Disk1" "Malloc1:0" 1:2 64 -d

The SPDK iSCSI PDU Offload Initiator, Kernel iSCSI PDU Offload or Open-iSCSI
Initiators can connect to the configured target.
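
For example, a standard Open-iSCSI initiator could discover and log in to the
portal configured above (102.11.11.45:3260, an illustrative address) as
follows:

[root@host~]# iscsiadm -m discovery -t st -p 102.11.11.45:3260
[root@host~]# iscsiadm -m node -p 102.11.11.45:3260 -l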

SPDK iSCSI PDU Offload Initiator
--------------------------------

The SPDK iSCSI PDU Offload Initiator can connect to an iSCSI Target using the
below steps:

SPDK_PATH=$'ChelsioUwire-4.1.0.7/build/src/chspdk/spdk-24.05'
$SPDK_PATH/scripts/rpc.py bdev_iscsi_create -b <bdev name> -i <initiator iqn> --url <iSCSI URL>

Ex:
$SPDK_PATH/scripts/rpc.py bdev_iscsi_create -b iSCSI1 -i iqn.2016-06.io.spdk:init1 --url 'iscsi://102.11.11.45/iqn.2000-01.com.chelsio.host:'"1"'/0?header_digest=crc32c&data_digest=crc32c'

Configuring SPDK NVMe/TCP PDU Offload target/initiator
------------------------------------------------------

Follow the steps mentioned below on the SPDK NVMe/TCP PDU Offload target/
initiator machines:

1. SPDK v24.05, customized to support NVMe/TCP PDU Offload and kernel bypass
   for the SPDK NVMe/TCP Target and Initiator, is part of the Chelsio Unified
   Wire package. Change your current working directory to the Chelsio SPDK
   directory.

   [root@host~]# cd ChelsioUwire-4.1.0.7/build/src/chspdk/spdk-24.05

2. Configure Huge Pages.

   [root@host~]# mkdir -p /mnt/huge
   [root@host~]# echo 8192 > /proc/sys/vm/nr_hugepages
   [root@host~]# echo 0 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
   [root@host~]# echo 8192 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
   [root@host~]# vim /etc/fstab
   nodev /dev/hugepages hugetlbfs pagesize=2MB 0 0
   nodev /mnt/huge hugetlbfs pagesize=1GB 0 0
   [root@host~]# mount -a
   [root@host~]# NRHUGE=8192 scripts/setup.sh

3. Start the reactor.

   [root@host spdk]# ./build/bin/nvmf_tgt -m <core mask>

SPDK NVMe/TCP PDU Offload Target
--------------------------------

Below are the sample configuration steps to create a LUN with a Malloc bdev of
256MB.

SPDK_PATH=$'ChelsioUwire-4.1.0.7/build/src/chspdk/spdk-24.05'
$SPDK_PATH/scripts/rpc.py nvmf_create_transport -t TCP
$SPDK_PATH/scripts/rpc.py bdev_malloc_create -b Malloc1 256 512
$SPDK_PATH/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -d SPDK_Controller1
$SPDK_PATH/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$SPDK_PATH/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 102.11.11.45 -s 4420 -f ipv4

The SPDK NVMe/TCP PDU Offload Initiator or Kernel NVMe/TCP Initiators can
connect to the configured target.

SPDK NVMe/TCP PDU Offload Initiator
-----------------------------------

The SPDK NVMe/TCP PDU Offload Initiator can connect to an NVMe/TCP Target
using the below steps:

SPDK_PATH=$'ChelsioUwire-4.1.0.7/build/src/chspdk/spdk-24.05'
$SPDK_PATH/scripts/rpc.py bdev_nvme_attach_controller -b <bdev name> -t TCP -a <Target IP> -f IPv4 -s 4420 -n <target NQN>

Ex:
$SPDK_PATH/scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t TCP -a 102.11.11.86 -f IPv4 -s 4420 -n nvme-ram1
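
As an optional check (assuming the controller was attached as in the example
above), the attached controllers and resulting bdevs can be listed with the
standard SPDK RPCs:

$SPDK_PATH/scripts/rpc.py bdev_nvme_get_controllers
$SPDK_PATH/scripts/rpc.py bdev_get_bdevs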

Driver Unloading
================

[root@host~]# rmmod cstor
[root@host~]# rmmod cxgb4

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
                                 Co-processor
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Driver Loading
==============

1. To load the Crypto Offload driver in Co-processor mode (chcr),

   [root@host~]# modprobe cxgb4
   [root@host~]# modprobe chcr

2. Bring up the Chelsio network interface.

   [root@host~]# ifconfig ethX up

   Where ethX is the Chelsio interface.

Co-processor TLS Offload
========================

Configuration
-------------

NOTE: Use openssl version 3.6.0 onwards.

1. Download the latest stable version from the OpenSSL website, then build and
   install it.

   [root@host ~]# tar xf openssl-3.6.0.tar.gz
   [root@host ~]# cd openssl-3.6.0/
   [root@host openssl-3.6.0]# ./config shared enable-ktls --prefix=/root/openssl-v360-ktls --openssldir=/root/openssl-v360-ktls -Wl,-R/root/openssl-v360-ktls/lib64
   [root@host openssl-3.6.0]# make && make install
   [root@host openssl-3.6.0]# cd /root/openssl-v360-ktls
   [root@host openssl-v360-ktls]# vim openssl.cnf   <======== Edit as follows

   [openssl_init]
   providers = provider_sect
   ssl_conf = ssl_section

   [ssl_section]
   system_default = system_default_section

   [system_default_section]
   options = ktls

2. Compile and install nginx.

   [root@host ~]# tar xf nginx-1.28.0.tar.gz
   [root@host ~]# cd nginx-1.28.0/
   [root@host nginx-1.28.0]# ./configure --prefix=/usr/local/nginx-openssl-v360-ktls \
     --with-http_ssl_module --with-http_v2_module --with-http_dav_module \
     --with-cc-opt="-DNGX_SSL_SENDFILE -DOPENSSL_API_COMPAT=10101 -I /root/openssl-3.6.0/include/" \
     --with-ld-opt="-L/root/openssl-v360-ktls/lib64 -Wl,-R/root/openssl-v360-ktls/lib64"
   [root@host nginx-1.28.0]# make && make install

3. Configure the nginx server by updating the required settings in the
   /usr/local/nginx/nginx.conf file.

4. Start the nginx server.

   [root@host~]# /usr/local/nginx-openssl-v360-ktls/nginx

   The nginx server will now be offloaded by the Chelsio Co-processor.

5. The client can now connect to the server and download the files.

Co-processor IPsec Offload
==========================

Configuration
-------------

NOTE:
- DUT is the machine equipped with a T7 card.
- PEER is the corresponding companion machine.

I) Using `ip xfrm`
------------------

1. Establish IPSec SAs and Policies using the below commands.

   a) On DUT
   ---------

   [root@DUT~] ip xfrm state add src $DUT_IP dst $PEER_IP proto esp spi 0x0a1b2c3d reqid 0x5f7c1a6b mode transport aead "rfc4106(gcm(aes))" 0x0f83e4cb43bd443a839bef61378d294577386f72 128 sel src 0.0.0.0/0 dst 0.0.0.0/0 replay-window 128 flag esn
   [root@DUT~] ip xfrm state add src $PEER_IP dst $DUT_IP proto esp spi 0x4e5f6a7b reqid 0x9abcdef0 mode transport aead "rfc4106(gcm(aes))" 0x0f83e4cb43bd443a839bef61378d294577386f72 128 sel src 0.0.0.0/0 dst 0.0.0.0/0 replay-window 128 flag esn
   [root@DUT~] ip xfrm policy add src $DUT_IP dst $PEER_IP dir out tmpl src $DUT_IP dst $PEER_IP proto esp reqid 0x5f7c1a6b mode transport
   [root@DUT~] ip xfrm policy add src $PEER_IP dst $DUT_IP dir fwd tmpl src $PEER_IP dst $DUT_IP proto esp reqid 0x9abcdef0 mode transport
   [root@DUT~] ip xfrm policy add src $PEER_IP dst $DUT_IP dir in tmpl src $PEER_IP dst $DUT_IP proto esp reqid 0x9abcdef0 mode transport

   NOTE: ethX refers to the interface of the DUT adapter.

   b) On PEER
   ----------

   [root@PEER~] ip xfrm state add src $DUT_IP dst $PEER_IP proto esp spi 0x0a1b2c3d reqid 0x5f7c1a6b mode transport aead 'rfc4106(gcm(aes))' 0x0f83e4cb43bd443a839bef61378d294577386f72 128 sel src 0.0.0.0/0 dst 0.0.0.0/0 replay-window 128 flag esn
   [root@PEER~] ip xfrm state add src $PEER_IP dst $DUT_IP proto esp spi 0x4e5f6a7b reqid 0x9abcdef0 mode transport aead 'rfc4106(gcm(aes))' 0x0f83e4cb43bd443a839bef61378d294577386f72 128 sel src 0.0.0.0/0 dst 0.0.0.0/0 replay-window 128 flag esn
   [root@PEER~] ip xfrm policy add src $PEER_IP dst $DUT_IP dir out tmpl src $PEER_IP dst $DUT_IP proto esp reqid 0x9abcdef0 mode transport
   [root@PEER~] ip xfrm policy add src $DUT_IP dst $PEER_IP dir fwd tmpl src $DUT_IP dst $PEER_IP proto esp reqid 0x5f7c1a6b mode transport
   [root@PEER~] ip xfrm policy add src $DUT_IP dst $PEER_IP dir in tmpl src $DUT_IP dst $PEER_IP proto esp reqid 0x5f7c1a6b mode transport
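
   The commands above assume that the shell variables $DUT_IP and $PEER_IP have
   already been set to the addresses assigned to the DUT and PEER interfaces,
   for example (the addresses are illustrative):

   [root@DUT~] export DUT_IP=10.1.1.1
   [root@DUT~] export PEER_IP=10.1.1.2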

2. Verify SA and policy status:

   [root@DUT~] ip xfrm state list

II) Using StrongSwan
--------------------

1. Deploy certificates to DUT and PEER.

   /usr/local/etc/swanctl/private/  # Private Keys of respective hosts
   /usr/local/etc/swanctl/x509/     # Host Certificates of respective hosts
   /usr/local/etc/swanctl/x509ca/   # CA Certificate

2. Configure "/usr/local/etc/strongswan.conf" (enable `make_before_break = yes`).

   [root@DUT~] cat /usr/local/etc/strongswan.conf
   charon {
       load_modular = yes
       plugins {
           include strongswan.d/charon/*.conf
       }
       make_before_break = yes   # <------- Append "make_before_break" parameter
   }

   NOTE: "make_before_break" ensures seamless tunnel rekeying by creating a new
   SA before tearing down the old one, preventing traffic disruption.

3. Update "/usr/local/etc/swanctl/swanctl.conf" on both the DUT and PEER
   according to their IP configurations.

   DUT Machine
   -----------

   [root@DUT~] cat /usr/local/etc/swanctl/swanctl.conf
   connections {
       gw-port0 {
           local_addrs  = <DUT IP>
           remote_addrs = <PEER IP>
           local {
               auth = pubkey
               certs = dut-Cert.der
               id = C=CH, O=test.ipsec.tunnel, CN=dut-common-name
           }
           remote {
               auth = pubkey
               id = C=CH, O=test.ipsec.tunnel, CN=peer-common-name
           }
           children {
               net-net-0 {
                   rekey_time = 4000000
                   esp_proposals = aes128gcm128-esn-noesn
                   hw_offload = no
                   mode = tunnel
               }
           }
           version = 2
           mobike = no
           rekey_time = 8500000
           reauth_time = 9000000
           proposals = aes256-sha1-x25519
       }
   }

   PEER Machine
   ------------

   [root@PEER~] cat /usr/local/etc/swanctl/swanctl.conf
   connections {
       gw-port0 {
           local_addrs  = <PEER IP>
           remote_addrs = <DUT IP>
           local {
               auth = pubkey
               certs = peer-Cert.der
               id = C=CH, O=test.ipsec.tunnel, CN=peer-common-name
           }
           remote {
               auth = pubkey
               id = C=CH, O=test.ipsec.tunnel, CN=dut-common-name
           }
           children {
               net-net-0 {
                   rekey_time = 5000000
                   esp_proposals = aes128gcm128-esn-noesn
                   hw_offload = no
                   mode = tunnel
               }
           }
           version = 2
           mobike = no
           rekey_time = 11000000
           reauth_time = 12000000
           proposals = aes256-sha1-x25519
       }
   }

4. Start and load the configurations on both DUT and PEER machines.

   [root@DUT~] /usr/local/libexec/ipsec/charon
   [root@DUT~] swanctl --load-creds
   [root@DUT~] swanctl --load-conns

5. Initiate the connection from the DUT.

   [root@DUT~] swanctl -i -c net-net-0

6. Verify the connection.

   [root@DUT~] swanctl --list-sas -P

Driver Unloading
================

[root@host~]# rmmod chcr

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
                              Inline IPSec Offload
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The following applications can be used to configure IPSec tunnels:

a. ip xfrm
b. strongSwan

Kernel Configuration
====================

Kernel.org linux-6.12.X
-----------------------

- If the IPSec components are not enabled, enable them as follows:

  CONFIG_XFRM=y
  CONFIG_XFRM_OFFLOAD=y
  CONFIG_INET_ESP=y
  CONFIG_INET_ESP_OFFLOAD=m
  CONFIG_INET6_ESP=m
  CONFIG_INET6_ESP_OFFLOAD=m
  CONFIG_INET6_ESPINTCP=y
  CONFIG_CRYPTO=y
  CONFIG_CRYPTO_AEAD=y
  CONFIG_CRYPTO_GCM=m

Driver Loading
==============

1. Load the Inline IPSec crypto offload driver. To run IPSec Offload in NIC
   mode (without TCP offload support):

   [root@DUT~] modprobe cxgb4
   [root@DUT~] modprobe ch_ipsec

2. Bring up the Chelsio interface(s).

   [root@DUT~] ifconfig ethX up

   NOTE: ethX refers to the interface of the T7 adapter.

Configuration
=============

NOTE:
- DUT is the machine equipped with a T7 card.
- PEER is the corresponding companion machine.
- The PEER must run in non-offload mode.
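
Optionally, before creating any SAs, you can check whether the interface
advertises ESP hardware offload. The feature name shown below is the generic
kernel netdev feature flag (an assumption about how the capability is
reported, not a Chelsio-specific counter):

[root@DUT~] ethtool -k ethX | grep esp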

I) Using `ip xfrm`
------------------

1. Establish IPSec SAs and Policies using the below commands.

   a) On DUT
   ---------

   [root@DUT~] ip xfrm state add src $DUT_IP dst $PEER_IP proto esp spi 0x0a1b2c3d reqid 0x5f7c1a6b mode transport aead "rfc4106(gcm(aes))" 0x0f83e4cb43bd443a839bef61378d294577386f72 128 sel src 0.0.0.0/0 dst 0.0.0.0/0 offload dev ethX dir out replay-window 128 flag esn
   [root@DUT~] ip xfrm state add src $PEER_IP dst $DUT_IP proto esp spi 0x4e5f6a7b reqid 0x9abcdef0 mode transport aead "rfc4106(gcm(aes))" 0x0f83e4cb43bd443a839bef61378d294577386f72 128 sel src 0.0.0.0/0 dst 0.0.0.0/0 offload dev ethX dir in replay-window 128 flag esn
   [root@DUT~] ip xfrm policy add src $DUT_IP dst $PEER_IP dir out tmpl src $DUT_IP dst $PEER_IP proto esp reqid 0x5f7c1a6b mode transport
   [root@DUT~] ip xfrm policy add src $PEER_IP dst $DUT_IP dir fwd tmpl src $PEER_IP dst $DUT_IP proto esp reqid 0x9abcdef0 mode transport
   [root@DUT~] ip xfrm policy add src $PEER_IP dst $DUT_IP dir in tmpl src $PEER_IP dst $DUT_IP proto esp reqid 0x9abcdef0 mode transport

   NOTE: ethX refers to the interface of the DUT adapter.

   b) On PEER
   ----------

   [root@PEER~] ip xfrm state add src $DUT_IP dst $PEER_IP proto esp spi 0x0a1b2c3d reqid 0x5f7c1a6b mode transport aead 'rfc4106(gcm(aes))' 0x0f83e4cb43bd443a839bef61378d294577386f72 128 sel src 0.0.0.0/0 dst 0.0.0.0/0 replay-window 128 flag esn
   [root@PEER~] ip xfrm state add src $PEER_IP dst $DUT_IP proto esp spi 0x4e5f6a7b reqid 0x9abcdef0 mode transport aead 'rfc4106(gcm(aes))' 0x0f83e4cb43bd443a839bef61378d294577386f72 128 sel src 0.0.0.0/0 dst 0.0.0.0/0 replay-window 128 flag esn
   [root@PEER~] ip xfrm policy add src $PEER_IP dst $DUT_IP dir out tmpl src $PEER_IP dst $DUT_IP proto esp reqid 0x9abcdef0 mode transport
   [root@PEER~] ip xfrm policy add src $DUT_IP dst $PEER_IP dir fwd tmpl src $DUT_IP dst $PEER_IP proto esp reqid 0x5f7c1a6b mode transport
   [root@PEER~] ip xfrm policy add src $DUT_IP dst $PEER_IP dir in tmpl src $DUT_IP dst $PEER_IP proto esp reqid 0x5f7c1a6b mode transport

2. Verify SA and policy status:

   [root@DUT~] ip xfrm state list
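
   To confirm that traffic is actually flowing through the offloaded SAs, one
   simple check (an illustration, not a Chelsio-mandated procedure) is to ping
   the PEER from the DUT and watch the per-SA packet counters:

   [root@DUT~] ping -c 5 $PEER_IP
   [root@DUT~] ip -s xfrm state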

II) Using StrongSwan
--------------------

1. Deploy certificates to DUT and PEER.

   /usr/local/etc/swanctl/private/  # Private Keys of respective hosts
   /usr/local/etc/swanctl/x509/     # Host Certificates of respective hosts
   /usr/local/etc/swanctl/x509ca/   # CA Certificate

2. Configure "/usr/local/etc/strongswan.conf" (enable `make_before_break = yes`).

   [root@DUT~] cat /usr/local/etc/strongswan.conf
   charon {
       load_modular = yes
       plugins {
           include strongswan.d/charon/*.conf
       }
       make_before_break = yes   # <------- Append "make_before_break" parameter
   }

   NOTE: "make_before_break" ensures seamless tunnel rekeying by creating a new
   SA before tearing down the old one, preventing traffic disruption.

3. Update "/usr/local/etc/swanctl/swanctl.conf" on both the DUT and PEER
   according to their IP configurations.

   DUT Machine
   -----------

   NOTE: Ensure `hw_offload = yes` on the DUT.

   [root@DUT~] cat /usr/local/etc/swanctl/swanctl.conf
   connections {
       gw-port0 {
           local_addrs  = <DUT IP>
           remote_addrs = <PEER IP>
           local {
               auth = pubkey
               certs = dut-Cert.der
               id = C=CH, O=test.ipsec.tunnel, CN=dut-common-name
           }
           remote {
               auth = pubkey
               id = C=CH, O=test.ipsec.tunnel, CN=peer-common-name
           }
           children {
               net-net-0 {
                   rekey_time = 4000000
                   esp_proposals = aes128gcm128-esn-noesn
                   hw_offload = yes
                   mode = tunnel
               }
           }
           version = 2
           mobike = no
           rekey_time = 8500000
           reauth_time = 9000000
           proposals = aes256-sha1-x25519
       }
   }

   PEER Machine
   ------------

   NOTE: Ensure `hw_offload = no` on the PEER.

   [root@PEER~] cat /usr/local/etc/swanctl/swanctl.conf
   connections {
       gw-port0 {
           local_addrs  = <PEER IP>
           remote_addrs = <DUT IP>
           local {
               auth = pubkey
               certs = peer-Cert.der
               id = C=CH, O=test.ipsec.tunnel, CN=peer-common-name
           }
           remote {
               auth = pubkey
               id = C=CH, O=test.ipsec.tunnel, CN=dut-common-name
           }
           children {
               net-net-0 {
                   rekey_time = 5000000
                   esp_proposals = aes128gcm128-esn-noesn
                   hw_offload = no
                   mode = tunnel
               }
           }
           version = 2
           mobike = no
           rekey_time = 11000000
           reauth_time = 12000000
           proposals = aes256-sha1-x25519
       }
   }

4. Start and load the configurations on both DUT and PEER machines.

   [root@DUT~] /usr/local/libexec/ipsec/charon
   [root@DUT~] swanctl --load-creds
   [root@DUT~] swanctl --load-conns

5. Initiate the connection from the DUT.

   [root@DUT~] swanctl -i -c net-net-0

6. Verify the connection.

   [root@DUT~] swanctl --list-sas -P

Driver Unloading
================

NOTE: Ensure that all IPSec tunnels are deleted before attempting to unload
the driver.

[root@DUT~] rmmod ch_ipsec

4. Support Documentation
================================================================================

The documentation for this release can be found inside the
ChelsioUwire-x.x.x.x/docs folder. It contains:

- README (this document)

5. Customer Support
================================================================================

Installer issues
----------------

In case of any failures while running the Chelsio Unified Wire Installer,
collect the below:

- Entire make command output, if installed using the makefile

Logs collection
---------------

For any other issues, run the below command to collect all the necessary log
files:

[root@host~]# chdebug

A compressed tar ball, chelsio_debug_logs_with_cudbg.tar.bz2, will be created
with all the logs.

For kernel panics, the following files need to be provided for analysis:

- vmcore
- vmcore-dmesg.txt
- vmlinux
- System.map-$(uname -r)
- Chelsio modules .ko files

NOTE: Please power cycle (not reboot) the host after log collection.

Contact Chelsio support at support@chelsio.com with the relevant logs for any
issues.

********************************************************************************

Copyright (C) 2025 Chelsio Communications. All Rights Reserved.

The information in this document is furnished for informational use only, is
subject to change without notice, and should not be construed as a commitment
by Chelsio Communications. Chelsio Communications assumes no responsibility or
liability for any errors or inaccuracies that may appear in this document or
any software that may be provided in association with this document. Except as
permitted by such license, no part of this document may be reproduced, stored
in a retrieval system, or transmitted in any form or by any means without the
express written consent of Chelsio Communications.