********************************************************************************
                                    README
********************************************************************************

                   Chelsio Unified Wire for VMware ESXi 7.0

Version : 5.3.0.20
Date    : 06/24/2020

Overview
================================================================================

The Chelsio Unified Wire package installs various drivers and utilities and
consists of the following software:

- Native Network (NIC) Driver with SR-IOV support
- iSCSI Offload Initiator Driver
- iSER Offload Initiator Driver
- NVMe-oF Offload Initiator Driver

NOTE: Drivers are not VMware certified.

================================================================================
                                   CONTENTS
================================================================================

- 1. Supported Operating Systems
- 2. Supported Cards
- 3. How to Use
- 4. Support Documentation
- 5. Customer Support


1. Supported Operating Systems
================================================================================

Host
====

- ESXi 7.0

Virtual Machine (with VFs)
==========================

- RHEL 8.2, 4.18.0-193.el8.x86_64
- RHEL 8.1, 4.18.0-147.el8.x86_64
- RHEL 8.0, 4.18.0-80.el8.x86_64
- RHEL 7.8, 3.10.0-1127.el7.x86_64
- RHEL 7.7, 3.10.0-1062.el7.x86_64
- RHEL 6.10, 2.6.32-754.el6.x86_64
- Ubuntu 20.04, 5.4.0-26-generic
- Ubuntu 18.04.4, 4.15.0-76-generic
- Kernel.org linux-5.4.45
- Kernel.org linux-4.19.98
- Windows Server 2019

NOTE: T5 adapters are not supported in Windows guests with SR-IOV.


2. Supported Cards
================================================================================

|########################|#####################################################|
|    Chelsio Adapter     |                   Driver/Software                   |
|########################|#####################################################|
|T62100-CR               |NIC,iSCSI,iSER,NVMe-oF                               |
|------------------------|-----------------------------------------------------|
|T62100-LP-CR            |NIC,iSCSI,iSER,NVMe-oF                               |
|------------------------|-----------------------------------------------------|
|T62100-SO-CR            |NIC                                                  |
|------------------------|-----------------------------------------------------|
|T6225-CR                |NIC,iSCSI,iSER,NVMe-oF                               |
|------------------------|-----------------------------------------------------|
|T6225-LL-CR             |NIC,iSCSI,iSER,NVMe-oF                               |
|------------------------|-----------------------------------------------------|
|T6425-CR                |NIC,iSCSI,iSER,NVMe-oF                               |
|------------------------|-----------------------------------------------------|
|T580-CR                 |NIC,iSCSI,iSER,NVMe-oF                               |
|------------------------|-----------------------------------------------------|
|T580-LP-CR              |NIC,iSCSI,iSER,NVMe-oF                               |
|------------------------|-----------------------------------------------------|
|T580-SO-CR              |NIC                                                  |
|------------------------|-----------------------------------------------------|
|T540-CR                 |NIC,iSCSI,iSER,NVMe-oF                               |
|------------------------|-----------------------------------------------------|
|T540-LP-CR              |NIC,iSCSI,iSER,NVMe-oF                               |
|------------------------|-----------------------------------------------------|
|T540-SO-CR              |NIC                                                  |
|------------------------|-----------------------------------------------------|
|T540-BT                 |NIC,iSCSI,iSER,NVMe-oF                               |
|------------------------|-----------------------------------------------------|
|T520-CR                 |NIC,iSCSI,iSER,NVMe-oF                               |
|------------------------|-----------------------------------------------------|
|T520-LL-CR              |NIC,iSCSI,iSER,NVMe-oF                               |
|------------------------|-----------------------------------------------------|
|T520-SO-CR              |NIC                                                  |
|------------------------|-----------------------------------------------------|
|T520-BT                 |NIC,iSCSI,iSER,NVMe-oF                               |
|------------------------------------------------------------------------------|

Memory-free Adapters
====================

|########################|#####################################################|
|    Chelsio Adapter     |                   Driver/Software                   |
|########################|#####################################################|
|T6225-SO-CR             |NIC,iSCSI*,iSER*,NVMe-oF*                            |
|------------------------------------------------------------------------------|

* 256 IPv4/128 IPv6 offload connections supported.
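To check which of the adapters listed above is present in a host, the PCI
device listing from the ESXi shell can be used. The following is a minimal
sketch; the grep pattern is illustrative and the exact output depends on the
ESXi build (Chelsio adapters use PCI vendor ID 1425):

[root@host:~] lspci | grep -i chelsio

The model reported should match one of the entries in the tables above; the
same information is also available via "esxcli hardware pci list".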
3. How to Use
================================================================================

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Unified Wire
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Driver Installation
===================

a) Download the driver package from https://service.chelsio.com.

b) Put the host in maintenance mode using the vSphere Client
   (Inventory->Host->Enter Maintenance Mode).

c) Install the drivers.

[root@host:~] cp *.zip /productLocker/
[root@host:~] cp *.zip /var/log/vmware/
[root@host:~] esxcli software component apply --depot=/productLocker/VMW-esx-7.0.0-Chelsio-Drivers-x.x.x.x-1OEM.700.1.0.15843807.zip --no-sig-check
Installation Result
   Components Installed: Chelsio-Drivers_x.x.x.x-1OEM.700.1.0.15843807
   Components Removed:
   Components Skipped:
   Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
   Reboot Required: true

d) After installation/update completes successfully, exit from maintenance mode
   and reboot the system.

e) Verify that the drivers are installed successfully.

[root@host:~] esxcli software component vib list --component=Chelsio-Drivers
Name      Version                        Vendor   Acceptance Level  Install Date
--------  -----------------------------  -------  ----------------  ------------
cxl       x.x.x.x-1OEM.700.1.0.15843807  Chelsio  VMwareCertified   yyyy-mm-dd
cheiscsi  x.x.x.x-1OEM.700.1.0.15843807  Chelsio  VMwareCertified   yyyy-mm-dd
cheiwarp  x.x.x.x-1OEM.700.1.0.15843807  Chelsio  VMwareCertified   yyyy-mm-dd

Driver Uninstallation
=====================

NOTE: Before proceeding, please ensure that no iSCSI, iSER or NVMe-oF session
      or connection is active and running.

a) Put the host in maintenance mode using the vSphere Client
   (Inventory->Host->Enter Maintenance Mode).

b) Uninstall the drivers.

[root@localhost:~] esxcli software component remove --component=Chelsio-Drivers
Removal Result
   Components Installed:
   Components Removed: Chelsio-Drivers_x.x.x.x-1OEM.700.1.0.15843807
   Components Skipped:
   Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
   Reboot Required: true

c) Reboot the host.

[root@host:~] reboot
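Once the host is back up after an install or uninstall, a couple of quick
checks from the ESXi shell can confirm the result. This is a minimal sketch
using standard esxcli namespaces; the grep filter is illustrative. It also
shows that maintenance mode can be toggled from the shell instead of the
vSphere Client, if preferred.

List the installed Chelsio component:

[root@host:~] esxcli software component list | grep -i chelsio

List the network uplinks and confirm that the Chelsio ports are claimed by the
cxl driver:

[root@host:~] esxcli network nic list

Enter or exit maintenance mode from the shell:

[root@host:~] esxcli system maintenanceMode set --enable true
[root@host:~] esxcli system maintenanceMode set --enable false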
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Native Network Driver
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Pre-requisites
==============

To use the SR-IOV feature:

- SR-IOV should be enabled in the BIOS.
- Intel Virtualization Technology for Directed I/O (VT-d) should be enabled in
  the BIOS.
- The PCI Express slot should be ARI capable.

Driver Loading
==============

After rebooting the ESXi host, the driver will load automatically. However, it
is possible to manually load the driver by using the command below:

[root@host:~] vmkload_mod cxl

Execute the below command so that the device manager performs a rescan:

[root@host:~] kill -SIGHUP $(cat /var/run/vmware/vmkdevmgr.pid)

Multiple Adapters
=================

By default, the cxl driver initializes up to 8 Chelsio ports. When using
multiple adapters with more ports than that, set the max_ports module parameter
and reboot the machine.

[root@host:~] esxcfg-module -s max_ports=N cxl
[root@host:~] reboot

E.g. To use three T540-CR (4-port) adapters, with a total of 12 Chelsio ports:

[root@host:~] esxcfg-module -s max_ports=12 cxl
[root@host:~] reboot

cxgbtool
========

cxgbtool is installed by default along with the driver component.

Usage:
------

To use cxgbtool, use the syntax:

[root@host:~] /opt/chelsio/bin/cxgbtool

NOTE: For information on available parameters and their usage, refer to the
      cxgbtool help by running the "/opt/chelsio/bin/cxgbtool -h" command.

Adapter Configuration
=====================

The T5 adapter's configuration should be updated for optimal performance in the
ESXi environment. Run the following cxgbtool command and reboot the machine.

[root@host:~] /opt/chelsio/bin/cxgbtool -c esxcfg -set
[root@host:~] reboot

NOTE: This is not required for T6 adapters.

Firmware Update
===============

The driver will auto-load the firmware (v1.24.17.0) if an update is required.
The version can be verified using:

[root@host:~] /opt/chelsio/bin/cxgbtool -c version

Connecting a virtual machine
============================

To connect a Chelsio adapter to a virtual machine:

a) Create a new virtual switch:

[root@host:~] esxcfg-vswitch -a vSwitchN

b) Link a Chelsio NIC to the newly created virtual switch:

[root@host:~] esxcfg-vswitch -L vmnicN vSwitchN

c) Create a new port group on the vSwitch:

[root@host:~] esxcfg-vswitch -A <portgroup_name> vSwitchN

d) From the vSphere Client, right-click on the virtual machine, select the
   virtual network adapter to be used, and attach the newly created port group.

Virtual Functions (SR-IOV)
==========================

Instantiate Virtual Functions
-----------------------------

Follow the steps mentioned below to instantiate virtual functions (a worked
example with illustrative values follows at the end of this section):

a) Load the native network driver (cxl) with the "max_vfs" parameter set to a
   comma-separated list specifying the number of VFs per port. In case of
   multiple adapters, use ',,' to separate the numbers of VFs per adapter:

[root@host:~] esxcfg-module cxl -s max_vfs=W,X,,Y,Z

   Where,
   W: Number of VFs per port 0 of adapter 0.
   X: Number of VFs per port 1 of adapter 0.
   Y: Number of VFs per port 0 of adapter 1.
   Z: Number of VFs per port 1 of adapter 1.

   NOTE: A maximum of 16 VFs can be instantiated per port.

b) Verify the max_vfs setting using the -g option:

[root@host:~] esxcfg-module -g cxl

c) Reboot the ESXi host for the changes to take effect.

d) Check whether the VFs were instantiated successfully on the PCI bus, either
   from the shell prompt (using lspci) or from the GUI.

NOTE:
- Unloading the host network driver (cxl) while VFs are attached to VMs is not
  supported by VMware.
- VMs with an SR-IOV interface might not power on, with an "out of MSI-X
  vectors" message in vmkernel.log. To resolve this issue, please refer to the
  VMware documentation:
  https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.networking.doc/GUID-880A9270-807F-4F2A-B443-71FF01DCC61D.html
- T5 adapters are not supported in Windows VMs with SR-IOV.

Assigning VFs to a VM
---------------------

For detailed instructions on assigning and configuring VFs in VMs, please refer
to the User Guide.
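Worked example: the following sketch assumes a single dual-port adapter on
which 4 VFs are wanted per port; the values are illustrative only and follow
the max_vfs syntax described above.

[root@host:~] esxcfg-module cxl -s max_vfs=4,4
[root@host:~] esxcfg-module -g cxl
[root@host:~] reboot

After the reboot, the VFs should be visible on the PCI bus:

[root@host:~] lspci | grep -i chelsio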
Driver Unloading
================

Execute the command below to unload the Native Network driver:

[root@host:~] vmkload_mod -u cxl

NOTE: If the iSCSI, iSER or NVMe-oF Offload Initiator Driver is loaded, it
      needs to be unloaded before unloading the Native Network driver.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
iSCSI Offload Initiator Driver
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The Chelsio Unified Wire series of adapters are independent hardware iSCSI
adapters.

Driver Loading
==============

After rebooting the ESXi host, the driver will load automatically. However, it
is possible to manually load the driver by using the command below:

[root@host:~] vmkload_mod cheiscsi

Execute the below command so that the device manager performs a rescan:

[root@host:~] kill -SIGHUP $(cat /var/run/vmware/vmkdevmgr.pid)

NOTE: Execute the below command to restore the Advanced Options of the storage
      adapter after a cheiscsi reload.

[root@host:~] esxcfg-rescan -A

Driver Configuration
====================

For information on how to configure the iSCSI Offload Initiator, please refer
to the User's Guide.

Driver Unloading
================

Log out of all the existing iSCSI sessions. Execute the command below to unload
the iSCSI Offload Initiator driver:

[root@host:~] vmkload_mod -u cheiscsi

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
iSER Offload Initiator Driver
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The Chelsio Unified Wire series of adapters support the iSER protocol, a
translation layer for operating iSCSI over RDMA transports such as iWARP RDMA.

Driver Loading
==============

After rebooting the ESXi host, the driver will load automatically. However, it
is possible to manually load the driver by using the command below:

[root@host:~] vmkload_mod cheiwarp

Execute the below command so that the device manager performs a rescan:

[root@host:~] kill -SIGHUP $(cat /var/run/vmware/vmkdevmgr.pid)

Driver Configuration
====================

For information on how to configure the iSER Offload Initiator, please refer to
the User's Guide.

Driver Unloading
================

Log out of all the existing iSER sessions. Execute the command below to unload
the iSER Offload Initiator driver:

[root@host:~] vmkload_mod -u cheiwarp

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
NVMe-oF Offload Initiator Driver
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Driver Loading
==============

After rebooting the ESXi host, the driver will load automatically. However, it
is possible to manually load the driver by using the command below:

[root@host:~] vmkload_mod cheiscsi

Execute the below command so that the device manager performs a rescan:

[root@host:~] kill -SIGHUP $(cat /var/run/vmware/vmkdevmgr.pid)

Driver Configuration
====================

Configure the NVMe target machine with the IP address, target name, disks, etc.
For information on how to configure the NVMe target, please refer to the
Chelsio Unified Wire for Linux User's Guide.

IMPORTANT: Disable the iwpmd service on the target machine. On RHEL 7.X
machines, use the below command.

[root@host~]# systemctl stop iwpmd
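The target-side configuration itself is covered in the Chelsio Unified Wire for
Linux User's Guide; purely as an illustration, the sketch below brings up a
minimal software NVMe-oF target on a Linux machine using the generic upstream
nvmet configfs interface (not the Chelsio-specific offload target). The NQN
(nqn.2020-06.com.example:nvme1), the address 102.1.1.1 and the backing device
/dev/sdb are hypothetical examples only.

[root@target~]# systemctl stop iwpmd
[root@target~]# modprobe nvmet
[root@target~]# modprobe nvmet-rdma
[root@target~]# cd /sys/kernel/config/nvmet
[root@target~]# mkdir subsystems/nqn.2020-06.com.example:nvme1
[root@target~]# echo 1 > subsystems/nqn.2020-06.com.example:nvme1/attr_allow_any_host
[root@target~]# mkdir subsystems/nqn.2020-06.com.example:nvme1/namespaces/1
[root@target~]# echo /dev/sdb > subsystems/nqn.2020-06.com.example:nvme1/namespaces/1/device_path
[root@target~]# echo 1 > subsystems/nqn.2020-06.com.example:nvme1/namespaces/1/enable
[root@target~]# mkdir ports/1
[root@target~]# echo 102.1.1.1 > ports/1/addr_traddr
[root@target~]# echo rdma > ports/1/addr_trtype
[root@target~]# echo 4420 > ports/1/addr_trsvcid
[root@target~]# echo ipv4 > ports/1/addr_adrfam
[root@target~]# ln -s /sys/kernel/config/nvmet/subsystems/nqn.2020-06.com.example:nvme1 ports/1/subsystems/nqn.2020-06.com.example:nvme1

The subsystem NQN created here is the value the initiator passes to the -subnqn
option in the login step below.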
Connecting to NVMe target
-------------------------

Follow the below procedure on the NVMe Initiator machine to connect to the
target.

a) Log in to vCenter Server through the vSphere Web Client using a web browser.

b) If you have already created and configured the host intended to be used as
   the initiator, skip to step (c). Otherwise:

   i.  Under Hosts and Clusters, right-click and click New Datacenter...
       Provide a name and click OK.
   ii. Right-click on the newly created datacenter and click Add Host...
       Follow the onscreen instructions and provide the information required to
       add the host. Click Finish.

c) Select the host and, under the Configure tab, select Storage Adapters. This
   will display the list of available Chelsio storage adapters.

d) In the Adapter Details section, click the Network Settings tab and then
   Edit.

e) Configure an IPv4 address for the adapter and click OK.

f) To use an IPv6 address, use the below command.

[root@host:~] /opt/chelsio/bin/cxgbtool -c chnet -set -6 -ipaddr <ipv6_address> -gw <gateway> -plen <prefix_length> -p <port_num>

g) Discover the target.

[root@host:~] /opt/chelsio/bin/cxgbtool -c nvmf -tport -ipaddr <target_ip> -p <port_num> -D

   If -p is not specified, port 0 will be used by default. While using IPv6,
   specify the target IPv6 address within [].

   Log in to the target by specifying the target name.

[root@host:~] /opt/chelsio/bin/cxgbtool -c nvmf -tport -ipaddr <target_ip> -p <port_num> -L -subnqn <target_nqn>

h) Rescan the storage adapter and the target LUNs will be visible.

[root@host:~] esxcfg-rescan -A

i) List the logged-in targets.

[root@host:~] /opt/chelsio/bin/cxgbtool -c nvmf -tlist -p <port_num>

   If -p is not specified, port 0 will be used by default.

j) All the available LUNs will be displayed in the Devices tab. These LUNs can
   now be attached to VMs or can be used to store VMs.

Disconnecting from NVMe target
------------------------------

To log out or disconnect from all the targets:

[root@host:~] /opt/chelsio/bin/cxgbtool -c nvmf -tport -ipaddr <target_ip> -p <port_num> -LT -all

   If -p is not specified, port 0 will be used by default.

Driver Unloading
================

Log out of all the existing NVMe-oF sessions. Execute the command below to
unload the driver:

[root@host:~] vmkload_mod -u cheiscsi


4. Support Documentation
================================================================================

The documentation for this release can be found inside the /doc folder.
It contains:

- README
- Release Notes
- User's Guide


5. Customer Support
================================================================================

Logs collection
---------------

In case of any issues, please collect the below logs:

a) /var/log/vmkernel.log

b) Adapter logs, using the command:

[root@host:~] /opt/chelsio/bin/cxgbtool -c cudbg -d all -f <file_path> -a <adapter_idx>

   Ex:
   [root@localhost:~] /opt/chelsio/bin/cxgbtool -c cudbg -d all -f /productLocker/cudbg.dmp -a 0
   Writing 51347516 bytes to /productLocker/cudbg.dmp

In case of a kernel panic, please provide the vmkernel zdump from the
/var/core/ directory.

Please contact Chelsio support at support@chelsio.com with all relevant logs
for any issues regarding the product.

********************************************************************************
Copyright (C) 2020 Chelsio Communications. All Rights Reserved.

The information in this document is furnished for informational use only, is
subject to change without notice, and should not be construed as a commitment
by Chelsio Communications. Chelsio Communications assumes no responsibility or
liability for any errors or inaccuracies that may appear in this document or
any software that may be provided in association with this document. Except as
permitted by such license, no part of this document may be reproduced, stored
in a retrieval system, or transmitted in any form or by any means without the
express written consent of Chelsio Communications.