**************************************** README ****************************************

                    Chelsio Unified Wire for VMware ESXi 7.0

                         Version : 5.3.0.11 (Beta)
                         Date    : 04/20/2020

Overview
================================================================================

The Chelsio Unified Wire package installs various drivers and utilities and
consists of the following software:

- Native Network (NIC) Driver with SR-IOV support
- iSCSI Offload Initiator Driver
- iSER Offload Initiator Driver
- NVMe-oF Offload Initiator Driver

NOTE: Drivers are not VMware certified.

================================================================================
CONTENTS
================================================================================

- 1. Supported Operating Systems
- 2. Supported Cards
- 3. How to Use
- 4. Support Documentation
- 5. Customer Support


1. Supported Operating Systems
================================================================================

Host
====
- ESXi 7.0

Virtual Machine (with VFs):
===========================
- RHEL 8.0,        4.18.0-80.el8
- RHEL 7.7,        3.10.0-1062.el7
- RHEL 7.6,        3.10.0-957.el7
- RHEL 7.5,        3.10.0-862.el7
- RHEL 6.10,       2.6.32-754.el6
- SLES 15 SP1,     4.12.14-195-default
- SLES 12 SP4,     4.12.14-94.41-default
- Ubuntu 18.04.3,  4.15.0-55-generic
- Ubuntu 16.04.6,  4.4.0-142-generic
- Kernel.org linux-4.19.98
- Kernel.org linux-4.14.167

NOTE: Windows Guest is not supported with SR-IOV.


2. Supported Cards
================================================================================

|########################|#####################################################|
|    Chelsio Adapter     |                  Driver/Software                    |
|########################|#####################################################|
|T62100-CR               |NIC,iSCSI,iSER,NVMe-oF                               |
|------------------------|-----------------------------------------------------|
|T62100-LP-CR            |NIC,iSCSI,iSER,NVMe-oF                               |
|------------------------|-----------------------------------------------------|
|T6225-CR                |NIC,iSCSI,iSER,NVMe-oF                               |
|------------------------|-----------------------------------------------------|
|T6225-LL-CR             |NIC,iSCSI,iSER,NVMe-oF                               |
|------------------------|-----------------------------------------------------|
|T520-CR                 |NIC,iSCSI,iSER                                       |
|------------------------|-----------------------------------------------------|

Memory-free Adapters
====================

|########################|#####################################################|
|    Chelsio Adapter     |                  Driver/Software                    |
|########################|#####################################################|
|T6225-SO-CR             |NIC,iSCSI*,iSER*,NVMe-oF*                            |
|------------------------------------------------------------------------------|

* 256 IPv4/128 IPv6 offload connections supported.


3. How to Use
================================================================================

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Unified Wire
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Driver Installation
===================

a) Download the driver package from https://service.chelsio.com.

b) Put the host in maintenance mode using the vSphere Client
   (Inventory->Host->Enter Maintenance Mode).
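   Alternatively, maintenance mode can be entered from the ESXi shell using
   esxcli (a standard ESXi command, shown here as an optional equivalent of
   the vSphere Client step above):

   [root@host:~] esxcli system maintenanceMode set --enable true
   [root@host:~] esxcli system maintenanceMode get
   Enabled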
c) Install the drivers:

   [root@host:~] cp *.zip /productLocker/
   [root@host:~] cp *.zip /var/log/vmware/
   [root@host:~] esxcli software component apply --depot=/productLocker/VMW-esx-7.0.0-Chelsio-Drivers-x.x.x.x-15525992.zip --no-sig-check
   Installation Result
      Components Installed: Chelsio-Drivers_x.x.x.x-15525992
      Components Removed:
      Components Skipped:
      Message: The update completed successfully, but the system needs to be
               rebooted for the changes to be effective.
      Reboot Required: true

d) After the installation/update completes successfully, exit maintenance mode
   and reboot the system.

e) Verify that the drivers were installed successfully:

   [root@host:~] esxcli software component vib list --component=Chelsio-Drivers
   Name      Version                         Vendor   Acceptance Level  Install Date
   --------  ------------------------------  -------  ----------------  ------------
   cxl       5.3.0.11-1OEM.700.1.0.15525992  Chelsio  VMwareCertified   2020-04-15
   cheiscsi  5.3.0.11-1OEM.700.1.0.15525992  Chelsio  VMwareCertified   2020-04-15
   cheiwarp  5.3.0.11-1OEM.700.1.0.15525992  Chelsio  VMwareCertified   2020-04-15

Driver Uninstallation
=====================

NOTE: Before proceeding, please ensure that no iSCSI, iSER or NVMe-oF session
or connection is active.
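For example, offloaded iSCSI sessions can be checked from the ESXi shell
before uninstalling (standard esxcli commands; the Chelsio adapter should
appear as an independent hardware iSCSI adapter, while iSER and NVMe-oF
connections can be checked via the cxgbtool queries described later in this
README):

   [root@host:~] esxcli iscsi adapter list
   [root@host:~] esxcli iscsi session list

If the session list is empty for the Chelsio adapters, it is safe to proceed.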
a) Put the host in maintenance mode using the vSphere Client
   (Inventory->Host->Enter Maintenance Mode).

b) Uninstall the drivers:

   [root@host:~] esxcli software component remove --component=Chelsio-Drivers
   Removal Result
      Components Installed:
      Components Removed: Chelsio-Drivers_x.x.x.x-15525992
      Components Skipped:
      Message: The update completed successfully, but the system needs to be
               rebooted for the changes to be effective.
      Reboot Required: true

c) Reboot the host:

   [root@host:~] reboot

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Native Network Driver
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Pre-requisites
==============

To use the SR-IOV feature:

- SR-IOV should be enabled in the BIOS.
- Intel Virtualization Technology for Directed I/O (VT-d) should be enabled in
  the BIOS.
- The PCI Express slot should be ARI capable.

Driver Loading
==============

After rebooting the ESXi host, the driver will load automatically. However, it
is possible to load the driver manually using the command below:

[root@host:~] vmkload_mod cxl

Execute the below command so that the device manager performs a rescan:

[root@host:~] kill -SIGHUP $(cat /var/run/vmware/vmkdevmgr.pid)

Multiple Adapters
=================

By default, the cxl driver will initialize 8 Chelsio ports. When using
multiple adapters, set the max_ports module parameter and reboot the machine:

[root@host:~] esxcfg-module -s max_ports=N cxl
[root@host:~] reboot

E.g., to use three T540-CR (4-port) adapters, with a total of 12 Chelsio
ports:

[root@host:~] esxcfg-module -s max_ports=12 cxl
[root@host:~] reboot

cxgbtool
========

cxgbtool is installed by default along with the driver component.

Usage:
------

To use cxgbtool, use the following syntax:

[root@host:~] /opt/chelsio/bin/cxgbtool

NOTE: For information on available parameters and their usage, refer to the
cxgbtool help by running the "/opt/chelsio/bin/cxgbtool -h" command.

Adapter Configuration
=====================

The T5 adapter's configuration should be updated for optimal performance in
the ESXi environment. Run the following cxgbtool command and reboot the
machine:

[root@host:~] /opt/chelsio/bin/cxgbtool -c esxcfg -set
[root@host:~] reboot

NOTE: This is not required for T6 adapters.

Firmware Update
===============

The driver will auto-load the firmware (v1.24.14.0) if an update is required.
The version can be verified using:

[root@host:~] /opt/chelsio/bin/cxgbtool -c version

Connecting a Virtual Machine
============================

To connect a Chelsio adapter to a virtual machine:

a) Create a new virtual switch:

   [root@host:~] esxcfg-vswitch -a vSwitchN

b) Link a Chelsio NIC to the newly created virtual switch:

   [root@host:~] esxcfg-vswitch -L vmnicN vSwitchN

c) Create a new port group on the vSwitch:

   [root@host:~] esxcfg-vswitch -A <portgroup_name> vSwitchN

d) From the vSphere Client, right-click on the virtual machine, select the
   virtual network adapter to be used, and attach the newly created port
   group.

Virtual Functions (SR-IOV)
==========================

Instantiate Virtual Functions
-----------------------------

Follow the steps below to instantiate virtual functions (a worked example
follows step (d)):

a) Load the native network driver (cxl) with the "max_vfs" parameter set to a
   comma-separated list specifying the number of VFs per port. In the case of
   multiple adapters, use ',,' to separate the per-adapter values:

   [root@host:~] esxcfg-module -s max_vfs=W,X,,Y,Z cxl

   Where,
   W: Number of VFs per port 0 of adapter 0.
   X: Number of VFs per port 1 of adapter 0.
   Y: Number of VFs per port 0 of adapter 1.
   Z: Number of VFs per port 1 of adapter 1.

   NOTE: A maximum of 16 VFs can be instantiated per port.

b) Verify the max_vfs setting using the -g option:

   [root@host:~] esxcfg-module -g cxl

c) Reboot the ESXi host for the changes to take effect.

d) Check whether the VFs were instantiated successfully on the PCI bus,
   either from the shell prompt (using lspci) or from the vSphere GUI (under
   Host > Configuration > Advanced setting).
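As a worked example (illustrative values, assuming a single 2-port adapter
with four VFs wanted on each port):

   [root@host:~] esxcfg-module -s max_vfs=4,4 cxl
   [root@host:~] esxcfg-module -g cxl
   [root@host:~] reboot

After the reboot, the VFs should appear as additional Chelsio functions in
the lspci output.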
NOTE:
- Unloading the driver while VFs are attached to VMs is not supported by
  VMware.
- VMs with an SR-IOV interface might fail to power on with an "out of MSI-X
  vectors" message in vmkernel.log. To resolve this issue, add the
  "pciPassthru.maxMSIXvectors" parameter to the VM's configuration file. The
  maximum value allowed for this parameter is 31. It is recommended to set
  the value according to the following equation:

  pciPassthru.maxMSIXvectors = <number of queues> + 2

  For more information refer to:
  http://pubs.vmware.com/vsphere-60/index.jsp#com.vmware.vsphere.networking.doc/GUID-880A9270-807F-4F2A-B443-71FF01DCC61D.html
- Windows Guest is not supported with SR-IOV.

Assigning VFs to VM
-------------------

For instructions on how to assign virtual functions to a virtual machine,
please refer to VMware's official documentation at
http://pubs.vmware.com/vsphere-60/index.jsp?topic=%2Fcom.vmware.vsphere.networking.doc%2FGUID-CC021803-30EA-444D-BCBE-618E0D836B9F.html

Using VFs in Linux VM
---------------------

For instructions on how to use the VFs in a Linux VM, please refer to the
User's Guide.

Driver Unloading
================

Execute the command below to unload the Native Network driver:

[root@host:~] vmkload_mod -u cxl

NOTE: If the iSCSI, iSER or NVMe-oF Offload Initiator Driver is loaded, it
needs to be unloaded before unloading the native network driver.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
iSCSI Offload Initiator Driver
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The Chelsio Unified Wire series of adapters are Independent Hardware iSCSI
adapters.

Driver Loading
==============

After rebooting the ESXi host, the driver will load automatically. However,
it is possible to load the driver manually using the command below:

[root@host:~] vmkload_mod cheiscsi

Execute the below command so that the device manager performs a rescan:

[root@host:~] kill -SIGHUP $(cat /var/run/vmware/vmkdevmgr.pid)

NOTE: Execute the below command to restore the Advanced Options of the
storage adapter after a cheiscsi reload:

[root@host:~] esxcfg-rescan -A

Driver Configuration
====================

For information on how to configure the iSCSI Offload Initiator, please refer
to the User's Guide.

Driver Unloading
================

Log out of all the existing iSCSI sessions, then execute the command below to
unload the iSCSI Offload Initiator driver:

[root@host:~] vmkload_mod -u cheiscsi

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
iSER Offload Initiator Driver
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The Chelsio Unified Wire series of adapters support the iSER protocol, a
translation layer for operating iSCSI over RDMA transports, such as iWARP
RDMA.

Driver Loading
==============

After rebooting the ESXi host, the driver will load automatically. However,
it is possible to load the driver manually using the command below:

[root@host:~] vmkload_mod cheiwarp

Execute the below command so that the device manager performs a rescan:

[root@host:~] kill -SIGHUP $(cat /var/run/vmware/vmkdevmgr.pid)

Driver Configuration
====================

For information on how to configure the iSER Offload Initiator, please refer
to the User's Guide.

Driver Unloading
================

Log out of all the existing iSER sessions, then execute the command below to
unload the iSER Offload Initiator driver:

[root@host:~] vmkload_mod -u cheiwarp
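NOTE: To confirm which Chelsio modules remain loaded after an unload (or
before loading a driver), the loaded-module list can be filtered (a minimal
check; cxl, cheiscsi and cheiwarp are the module names installed by this
package):

[root@host:~] vmkload_mod -l | grep -E 'cxl|cheiscsi|cheiwarp'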
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
NVMe-oF Offload Initiator Driver
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Driver Loading
==============

After rebooting the ESXi host, the driver will load automatically. However,
it is possible to load the driver manually using the command below:

[root@host:~] vmkload_mod cheiscsi

Execute the below command so that the device manager performs a rescan:

[root@host:~] kill -SIGHUP $(cat /var/run/vmware/vmkdevmgr.pid)

Driver Configuration
====================

Configure the NVMe target machine with the IP address, target name, disks,
etc. For information on how to configure the NVMe target, please refer to the
Chelsio Unified Wire for Linux User's Guide.

IMPORTANT: Disable the iwpmd service on the target machine. On RHEL 7.X
machines, use the below command:

[root@host~]# systemctl stop iwpmd

Connecting to NVMe target
-------------------------

Follow the below procedure on the NVMe Initiator machine to connect to the
target:

a) Log in to vCenter Server through the vSphere Web Client using a web
   browser.

b) If you have already created and configured the host intended to be used as
   the initiator, skip to step (c). Otherwise:

   i.  Under Hosts and Clusters, right-click and click New Datacenter...
       Provide a name and click OK.
   ii. Right-click on the newly created datacenter and click Add Host...
       Follow the onscreen instructions and provide information to add the
       host. Click Finish.

c) Select the host and, under the Configure tab, select Storage Adapters.
   This will display the list of available Chelsio storage adapters.

d) In the Adapter Details section, click the Network Settings tab and then
   Edit.

e) Configure an IP for the adapter and click OK. You can configure either
   IPv4 or IPv6 or both.

f) Discover the target:

   [root@host:~] /opt/chelsio/bin/cxgbtool -c nvmf -tport <target port> -ipaddr <target IP> -p <adapter port> -D

   If -p is not specified, port 0 will be used by default. When using IPv6,
   specify the target IPv6 address within [].

   Log in to the target by specifying the target name:

   [root@host:~] /opt/chelsio/bin/cxgbtool -c nvmf -tport <target port> -ipaddr <target IP> -p <adapter port> -L -subnqn <target name>

g) Rescan the storage adapter and the target LUNs will be visible:

   [root@host:~] esxcfg-rescan -A

h) List the logged-in targets:

   [root@host:~] /opt/chelsio/bin/cxgbtool -c nvmf -tlist -p <adapter port>

   If -p is not specified, port 0 will be used by default.

i) All the available LUNs will be displayed in the Devices tab. These LUNs
   can now be attached to VMs or used to store VMs.

Disconnecting from NVMe target
------------------------------

To log out or disconnect from all the targets:

[root@host:~] /opt/chelsio/bin/cxgbtool -c nvmf -tport <target port> -ipaddr <target IP> -p <adapter port> -LT -all

If -p is not specified, port 0 will be used by default.

Driver Unloading
================

Log out of all the existing NVMe-oF sessions, then execute the command below
to unload the driver:

[root@host:~] vmkload_mod -u cheiscsi


4. Support Documentation
================================================================================

The documentation for this release can be found inside the /doc folder. It
contains:

- README
- Release Notes
- User's Guide


5. Customer Support
================================================================================

Please contact Chelsio support at support@chelsio.com for any issues
regarding the product.


********************************************************************************
Copyright (C) 2020 Chelsio Communications. All Rights Reserved.

The information in this document is furnished for informational use only, is
subject to change without notice, and should not be construed as a commitment
by Chelsio Communications. Chelsio Communications assumes no responsibility or
liability for any errors or inaccuracies that may appear in this document or
any software that may be provided in association with this document. Except as
permitted by such license, no part of this document may be reproduced, stored
in a retrieval system, or transmitted in any form or by any means without the
express written consent of Chelsio Communications.