--------------------------------------
        NIC UTILITIES README
--------------------------------------

1)  Use the help of each command to get the supported combinations of options/arguments and the command syntax.
2)  The 'install' command's -online option is not supported for the BCM957504-P425D card.
    The install command's behavior matches that of bnxtnvm with the BCM957504-P425D card and
    was not tested further with other cards.
3)  Live patch requires a dedicated image.
4)  Output formatting of the new syntax and the legacy syntax has been made to match the Scrutiny conventions.
5)  A cold boot is needed after recovery FW update for any functionality to work. It's expected
    that bnxt_en functionality is broken for the device on which recovery is performed.
6)  A) The set of PFs used by the kernel-mode sliff driver / NICCLI, and
    B) the set of PFs bound to the vfio-pci driver / used by DPDK
    should be mutually exclusive, i.e. no PF should belong to both sets A) and B) at any point
    in time. NICCLI does not discover/list the PFs used by DPDK. A PF should not be bound to the
    vfio-pci driver or used by DPDK while it is being operated on by niccli.
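The mutual-exclusion rule above can be checked from a script before invoking niccli. This is only an illustrative sketch; the helper names are hypothetical, and the sysfs root is parameterized so the logic can be exercised against a fake tree (on a real system it is /sys/bus/pci/devices):

```shell
# Report which driver a PF is bound to by reading its sysfs 'driver' symlink.
pf_driver() {
    local sysfs_root="$1" bdf="$2"
    local link="${sysfs_root}/${bdf}/driver"
    if [ -L "$link" ]; then
        basename "$(readlink "$link")"
    else
        echo "none"          # PF not bound to any driver
    fi
}

# Refuse to proceed when the PF is owned by vfio-pci (i.e. set B above).
assert_pf_free_for_niccli() {
    local drv
    drv="$(pf_driver "$1" "$2")"
    if [ "$drv" = "vfio-pci" ]; then
        echo "PF $2 is bound to vfio-pci (DPDK); do not run niccli on it" >&2
        return 1
    fi
    return 0
}
```

A caller would run the check against /sys/bus/pci/devices and the PF's B:D:F before launching niccli.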
7)  bash-style pattern matching and globbing features are not supported in interactive mode.
8)  Support for running multiple parallel instances of niccli commands is command specific, even though
    scrutiny_cli/niccli supports multiple parallel instances.
9)  Live Firmware Upgrade feature is deprecated.
10) For the manufacturing commands, the bnxt_en/bnxt_re driver should be unloaded,
    as there should be no active driver instances running. The manufacturing commands require a reset of the
    CPU/core after the operations are performed, which leaves the firmware in a stale state if an
    active driver instance is running on the OS.
11) From release 2.31 onwards,
    - NICCLI, which is the end-customer tool, will not use the Sliff driver; instead, it uses the LFC-L2 interface.
    - The LFC interface (/dev/bnxt_lfc) is created when out-of-box bnxt_en is loaded.
    - NICCLI uses the USHI interface when the inbox bnxt_en driver is loaded.
    - NICCLIF & NICCLID will continue to use Sliff Driver.

NOTES: A) Unless stated otherwise, statements that refer to niccli are applicable to nicclid, nicclif and nicclilom as well.
          For brevity, and because they share the same code base, the utilities referred to in the above statement are referred to as niccli, in general, in this document.
       B) niccli may refer to an OS/platform-specific executable binary,
          e.g. niccli.exe (for Windows), niccli.freebsd (for FreeBSD) and so on.
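As a convenience, a wrapper can map the host OS (as reported by uname) to the expected binary name. This is only an illustrative sketch based on the naming examples above; the function name is hypothetical:

```shell
# Map an OS name to the niccli binary name used in this document's examples.
niccli_binary_name() {
    case "$1" in
        Linux)   echo "niccli" ;;           # wrapper script on Linux
        FreeBSD) echo "niccli.freebsd" ;;
        *)       echo "niccli" ;;           # default; Windows uses niccli.exe
    esac
}
```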

SUPPORTED PLATFORMS:
--------------------
1. Linux :
    - x86_64
    - aarch64
2. Windows :
    - x86_64
3. ESXi (7 onwards) :
    - x86_64
4. FreeBSD :
    - x86_64
5. UEFI :
    - x86_64
    - aarch64


INSTALLATION OF NICCLI:
-----------------------

The NICCLI executable is currently delivered for Linux, Windows, VMware and FreeBSD.

Linux OS (For x86_64 and aarch64):
----------------------------------

1) Installation of the niccli :

   RPM:
   ----
   a) Install the rpm using the below command (use -i for a first-time install, -U for an upgrade) :
       rpm -[i|U]vh niccli<type>-<version>.x86_64.rpm
       rpm -[i|U]vh niccli<type>-<version>.aarch64.rpm

   b) The above step installs the packaged files to the following locations :
        niccli    : /opt/niccli/
        nicclif   : /opt/nicclif/
        nicclid   : /opt/nicclid/
        nicclilom : /opt/nicclilom/

      The contents of the folder are as mentioned below :
      NOTE : the contents are subject to modification based on the requirements.

            niccli               :  the niccli.x86_64 executable is replaced with a "niccli" script that automatically loads (modprobe)
                                    the sliff driver prior to executing the application
            cmb_a.bin
            niccli<type>.<arch>  : <type> refers to the type of binary (debug (d), factory (f), lom); if absent, it is the end-customer utility
            Readme.txt
            sr_a.bin
            th2_a_0x20000040_sbl.bin.fastboot
            th2_a_0x20000180_sbl.signed.crid0000.bin.fastboot
            th2_a_0x20000180_sbl.signed.crid0000.promoter.bin.fastboot
            th2_a_0x20000180_sbl.signed.crid0001.bin.fastboot
            th2_a_0x20000180_sbl.signed.crid8001.bin.fastboot
            th2_fb_fw.bin
            th_a_0x20000040_sbl.bin.fastboot
            th_a_0x20000360_sbl.signed.crid0000.bin.fastboot
            th_a_0x20000360_sbl.signed.crid0000.promoter.bin.fastboot
            th_a_0x20000360_sbl.signed.crid0001.bin.fastboot
            th_a_0x20000360_sbl.signed.crid8001.bin.fastboot

   TARBALL :
   ---------
   a) Untar the tarball
        - tar -xvf niccli<type>-<version>-linux_<arch>.tar.gz

   b) The above will create a folder niccli<type>-<version>-linux_<arch> with the following contents :
        niccli               :  the niccli.x86_64 executable is replaced with a "niccli" script that automatically loads (modprobe)
                                the sliff driver prior to executing the application
        th_a_0x20000360_sbl.signed.crid8001.bin.fastboot
        th_a_0x20000360_sbl.signed.crid0001.bin.fastboot
        th_a_0x20000360_sbl.signed.crid0000.promoter.bin.fastboot
        th_a_0x20000360_sbl.signed.crid0000.bin.fastboot
        th_a_0x20000040_sbl.bin.fastboot
        th2_fb_fw.bin
        th2_a_0x20000180_sbl.signed.crid8001.bin.fastboot
        th2_a_0x20000180_sbl.signed.crid0001.bin.fastboot
        th2_a_0x20000180_sbl.signed.crid0000.promoter.bin.fastboot
        th2_a_0x20000180_sbl.signed.crid0000.bin.fastboot
        th2_a_0x20000040_sbl.bin.fastboot
        sr_a.bin
        securefastboot.bin
        Readme.txt
        cmb_a.bin
        niccli<type>.<arch>  : <type> refers to the type of binary (debug (d), factory (f), lom); if absent, it is the end-customer utility
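The extract step above can be sketched as a small helper. The archive name below is a dummy stand-in; substitute the real niccli<type>-<version>-linux_<arch>.tar.gz from the release:

```shell
# Extract the release tarball; it unpacks into a folder of the same base name.
extract_niccli_tarball() {
    local tarball="$1"
    local dir="${tarball%.tar.gz}"      # niccli<type>-<version>-linux_<arch>
    tar -xzf "$tarball"
    [ -d "$dir" ] && echo "$dir"        # print the created folder on success
}
```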


2) Installation of the Sliff Driver rpm (from Release 2.31 onwards this step is required only for NICCLIF and NICCLID):
   a) Download the appropriate rpm based on your Linux distribution and install it using the below command :
      rpm -[i|U]vh kmod-sliff-<version>.<linux_distribution>.x86_64.rpm
      rpm -[i|U]vh kmod-sliff-<version>.<linux_distribution>.aarch64.rpm
      NOTE :
       "rpm -ivh" works only for a first-time installation.
       Run "rpm -qa | grep kmod" to check whether sliff is already installed.
       If the package is already installed, the user should run "rpm -Uvh" to upgrade the package.
       If the user wants to erase the sliff package, run "rpm -e <package name>".
       If the sliff driver was previously installed via make; make install, the user has to make sure to remove the sliff driver at /lib/modules/`uname -r`/updates/sliff.ko so that the sliff driver is loaded from the rpm install path /lib/modules/`uname -r`/extra/sliff/sliff.ko


   b) Above step installs the packaged files to the following locations based on the platform (x86_64 or aarch64).
        /etc/depmod.d/sdrv.conf
        /lib/modules/4.18.0-372.9.1.el8.x86_64
        /lib/modules/4.18.0-372.9.1.el8.x86_64/extra
        /lib/modules/4.18.0-372.9.1.el8.x86_64/extra/sdrv
        /lib/modules/4.18.0-372.9.1.el8.x86_64/extra/sdrv/sliff.ko
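The stale-module situation described in the NOTE above (a make-installed sliff.ko under updates/ shadowing the rpm copy under extra/) can be detected with a small check. The function name is hypothetical, and the module root is parameterized only so the check can be tested against a fake tree; on a real system it is /lib/modules/$(uname -r):

```shell
# Return non-zero (and explain) when a make-installed sliff.ko would shadow
# the rpm-installed copy.
stale_sliff_check() {
    local modroot="$1"
    if [ -e "$modroot/updates/sliff.ko" ]; then
        echo "stale: remove $modroot/updates/sliff.ko so the rpm copy is used"
        return 1
    fi
    echo "ok"
}
```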

3) Installation of the Sliff Driver source rpm/tar (from Release 2.31 onwards this step is required only for NICCLIF and NICCLID):
   a) The sliff driver source rpm can be found in the below two paths in SIT builds:
      i) /niccli/sliff/sliff-<version>.tar.gz
     ii) /niccli/sliff/KMP/<Linux_OS_distribution>/<Linux_OS_version>/sliff-<version>-<LinuxOSVersion>.src.rpm

   b) Verify that the source rpm contains the following files.
      $rpm -qpl sliff-225.0.100.0-1.rhel8u7.src.rpm
      sliff-225.0.100.0.tar.gz
      sliff.files
      sliff.spec

   c) Install the yum-utils and rpm-build packages which are required to build the source rpms.
      $sudo yum install yum-utils rpm-build

   d) Check whether there are any dependencies for building the source package.
      $yum-builddep sliff-225.0.100.0-1.rhel8u7.src.rpm

   e) Build the source rpm using the rpmbuild command with the below option. It compiles the sliff driver and
      generates a binary kmod rpm (which step f) installs under /lib/modules/).
      $rpmbuild --rebuild <SOURCE_RPM_FILE>

      At the end of the rpmbuild, you should see :
      Wrote: /root/rpmbuild/RPMS/x86_64/kmod-sliff-<version>.rhel8u7.x86_64.rpm

   f) To install the sliff driver, run the below command :
      rpm -ivh /root/rpmbuild/RPMS/x86_64/kmod-sliff-<version>.rhel8u7.x86_64.rpm

4) Following are the two ways to execute the NICCLI commands:
    a) Run the "niccli" script; it automatically loads the appropriate sliff driver prior to executing the application and runs the "niccli.x86_64" binary.
       If the sliff driver is not installed, the script throws an error.

    b) If the user is directly running the "niccli.x86_64" binary, make sure to load the appropriate .ko file for the Linux distribution through the insmod command.

5) For non-standard Linux distributions/customized kernels, use the sliff driver source rpm (install the sliff driver source rpm to get the source files) and follow the below steps :
    a) Refer to steps #4 and #5.

6) Installation of the niccli .deb package:
     a) Install the .deb using the below command :
         sudo dpkg -i <.deb package>

------------
Windows OS :
------------

The sliff driver for Windows (.sys) is located in Windows/sdrv/win/sliff.sys

1) The sliff driver is automatically loaded into the system by the Windows niccli executable.

------------
VMware OS :
------------

1) NICCLI does not depend on the sliff driver on VMware. NICCLI interacts with the L2 (bnxtnet) driver in order to communicate with the firmware.

2) NICCLI VIB installation with --no-sig-check option fails if the system is in secure boot mode.

3) In the ESXi 8.0 plugin model, all files generated by command outputs will be saved at the /var/log/vmware location.

4) Plugin commands and niccli commands have different syntaxes and parameter counts. Hence, a "help" command is not added; the esxcli plugin framework offers help based on the available namespaces, and the corresponding commands/command options can be queried accordingly.

Steps to Install niccli VIB and ZIP package:
--------------------------------------------

1) To install the niccli VIB package, run the below command
    esxcli software vib install -v <VIB_PACKAGE> --no-sig-check

2) To install the niccli ZIP package, run the below command
    esxcli software vib install -d <ZIP_PACKAGE> --no-sig-check

Steps to Uninstall niccli VIB/ZIP package:
------------------------------------------

1) To uninstall the niccli VIB/ZIP package, run the below command
    esxcli software vib remove -n niccli

niccli Plugin format command syntax and example:
------------------------------------------------

esxcli niccli <command> -c <connection_type> -v <connection_type_value> [command options]

 -c                    : Indicates the connection type
 connection_type       : Value for the connection type. Supported values are [dev|i|pci]
 -v                    : Indicates the connection type value
 connection_type_value : Supported values are index number, PF MAC address and PCI address

Examples:

1) esxcli niccli install -c i -v 1 -p=/BCM957414A4141DDLP.pkg
2) esxcli niccli install -c dev -v 1 -p=/BCM957414A4141DDLP.pkg
3) esxcli niccli install -c dev -v BC:97:E1:70:14:10 -p=/BCM957414A4141DDLP.pkg
4) esxcli niccli install -c pci -v 0000:86:00.00 -p=/BCM957414A4141DDLP.pkg

------------
FreeBSD OS :
------------

1. Download the niccli rpm from the SIT release under the niccli/FreeBSD folder.
2. Before running the niccli executable or the script, make sure that the bnxt_en/L2 driver and the bnxt_mgmt driver are installed in the system.
3. Only niccli supports FreeBSD; niccli-f does not support FreeBSD.


NICCLI EXECUTION AND USAGE OF THE COMMANDS:
-------------------------------------------

1) Download the package from the Release Order and unzip it.

2) Run the niccli --list (Linux) or "niccli.exe --list" (Windows) command to list the devices in the system.
   Method of executing/running binary is specific to respective OS/platform.

3) There are three modes of operation in the NICCLI

    a) Interactive Mode:
        To launch in interactive mode :
         <Scrutiny NIC CLI executable> [-i <index of the target>] |[-pci <NIC pcie address>]

        After launching in interactive mode, execute 'help' command to
        display the list of available commands.

    b) Oneline Mode:
        To launch in Oneline mode :
         <Scrutiny NIC CLI executable> [-i <index of the target>] | [-pci <NIC pcie address> <command>]
        To list available commands in Oneline mode :
         <Scrutiny NIC CLI executable> [-i <index of the target>] | [-pci <NIC pcie address>] help
        Legacy Nic command syntax :
        To launch in Oneline mode :
         <Scrutiny NIC CLI executable> [-dev [<index of the target> | <mac address> | <NIC pcie address>]] <command>
        To list available commands in Oneline mode :
         <Scrutiny NIC CLI executable> [-dev [<index of the target> | <mac address> | <NIC pcie address>]] help


    c) Batch Mode:
        To launch in batch mode :
         <Scrutiny NIC CLI executable> [-i <index of the target>] | [-pci <NIC pcie address>] --batch <batch file>

        NOTE: Batch mode requires a flat text file with utility-supported commands.
        Commands have to be provided in ASCII format with valid parameters.
        Supported commands can be listed using One-Line mode or Interactive mode.
        Upon failure of any command, the utility will exit without continuing with the remaining commands.
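The batch-mode contract described in the NOTE (one command per line, stop at the first failure) can be emulated to sanity-check a batch file's structure. The runner here is a stub parameter for illustration; the real tool is invoked as shown above with --batch <batch file>:

```shell
# Run each non-empty line of a batch file through RUNNER, aborting on the
# first failure, mirroring how the utility processes --batch files.
run_batch() {
    local runner="$1" file="$2" line
    while IFS= read -r line; do
        [ -z "$line" ] && continue
        "$runner" "$line" || { echo "batch aborted at: $line" >&2; return 1; }
    done < "$file"
}
```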


    NOTE:
    ------

    1) The '-eth' interface mode of communication will be supported in future releases.
    2) '-dev' currently supports the index, mac_addr & pci-bdf of the target. Eg : niccli -dev [<index> | <mac addr> | <b:d:f>] command.
    3) The pci address will be displayed as Domain:Bus:Device:Function.
    4) Niccli gets the part number from the register space.
    5) Only the applicable commands are listed for non-operational cards. Run help to see the supported commands.
    6) When the card is unidentified (PCI config space corrupt), the user has to perform the 'install -rescue' twice, using the proper package files:
               - install -rescue <nvm-package.pkg>
               - install -rescue <firmware-package.pkg>
               - make sure that you use the exactly matching/appropriate package in rescue mode of install.
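The two-pass rescue in note 6 above can be wrapped so the second pass only runs when the first succeeds. This is a hedged sketch: the tool is a stub parameter here, and the package names are placeholders for the real nvm and firmware packages:

```shell
# Run 'install -rescue' twice, once per package, stopping if the first fails.
rescue_sequence() {
    local niccli="$1" nvm_pkg="$2" fw_pkg="$3"
    "$niccli" install -rescue "$nvm_pkg" && \
    "$niccli" install -rescue "$fw_pkg"
}
```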

4) To see the list of supported commands, run the below command :
   niccli help


EXAMPLE:
--------

[root@<hostname> niccli]# ./niccli.x86_64 -i 1 help nvm

-------------------------------------------------------------------------------
Scrutiny NIC CLI v230.0.100.0 - Broadcom Inc. (c) 2024 (Bld-85.52.34.106.16.0)
-------------------------------------------------------------------------------

NVM configuration option of a device
This command provides :
     - Display the current settings of the NVM configuration.
     - Configure the current settings of the NVM configuration.
     - Save the current settings of the NVM configuration.
     nvm -getoption [<option name> | <option name> -h]  |
          -setoption <option name> -value <value> [-scope <scope index>] |
          -saveoptions -file <filename>
          -getoption   : Get NVM configuration option of a device.
          -setoption   : Set NVM configuration option of a device.
          -saveoptions : Save NVM configuration options on the device to a file.
          -scope      : The scope can be either of 'function' or 'port' index.
          -value      : The value for the specified option.
          -h          : Detailed help for each NVM configuration option.
          -file       : Input file name.

Example :
     nvm -getoption
     nvm -getoption an_protocol -h
     nvm -getoption an_protocol -scope 0
     nvm -getoption afm_rm_resc_strategy
     nvm -setoption an_protocol -value 1 -scope 0
     nvm -saveoptions -file output.txt


KNOWN LIMITATIONS/ISSUES & USAGE GUIDELINES:
--------------------------------------------

    OTHERS:
    -------
    1)  Command classification needs to be done based on generic, debug and factory, which is not in the current scope of the regular release.
    2)  'saveoptions' command output file includes all the legacy debug and generic commands.
    3)  Currently, the nic-cli framework does not support running multiple instances in parallel; doing so may sometimes lead to a system crash and should be avoided.
    4)  Internal logs :
        1) The 'scrutiny.ini' file captures the logs for the Scrutiny library. To capture the logs for the CLI layer
           alone, the user has to specify any of the following options in the cli command,

          A) niccli --verbose NULL <command>
             -  Prints the verbose logs on the console.

          B) niccli --debug NULL <command>
             - Prints the debug logs on the console.

          C) niccli --verbose file.txt <command>
             - Redirects the verbose logs into the file specified.

          D) niccli --debug file.txt <command>
             - Redirects the debug logs into the file specified.

        2) Running any niccli command with the 'scrutiny.ini' file present in the same directory as the niccli executable will affect the overall completion time
           of the command, because debug logging gets enabled when the 'scrutiny.ini' file is present. It is recommended to use this only for debugging
           purposes.
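The four logging variants in sub-item 1) above differ only in the option name and the destination ("NULL" prints to the console, anything else is a file). A tiny helper (hypothetical name, for illustration) can build that argument prefix:

```shell
# Build the niccli logging argument prefix for a level (verbose|debug) and
# destination; destination defaults to NULL (console output).
log_args() {
    local level="$1" dest="${2:-NULL}"
    echo "--${level} ${dest}"
}
```

For example, `niccli $(log_args debug out.txt) <command>` corresponds to variant D).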

    5)  With a combination of THOR and other cards, the communication has been seen going to a different
        PF; in that scenario, please remove the device and perform the command operation. This is under investigation.
    6)  A few commands are not supported on Windows due to dependencies on Ethtool, the Curl library and the Sysfs path.
        These commands will be enabled in subsequent releases.
    7)  A) niccli does not support the -save option currently. Also, setting some tunnel nvm options like ipv4ovxlan_udp_port fails
           as firmware does not allow it. So, it is not recommended to test the -save option with tunnel commands as of now.

        B) The current add_tunnel_redirect command has a bug. When a different VF was already configured and
           we try to configure again with a different tunnel type and VF, it always says VF0 (fixed) was already
           configured, even though what was previously configured is not always VF0.
    8)  The command 'devid=<package file name>' is planned for future releases and will be implemented through an enhancement request.
        Hence, currently '-i <index number> devid' is supported. Once the ER is implemented, this statement will be removed from the Readme file.
    9)  In interactive mode, if the card is not in an operational state or is corrupted, re-launch the application after the fastboot to get the supported list of commands.
    10) From the application side, NVM simplification is enabled in the NICCLI build, as FW has already delivered the changes.
        The application will now support only the NVM options that are defined in the FW 'nvm_option_table.h' header, as the application
        will directly consume the same from firmware. When testing with prior FW releases, some NVM options may not be supported,
        as those options are not explicitly added in the auto-generated NVM option header file.
    11) As part of NVM simplification, a few observations are evident:

        A) The 'getoption' (without any nvm option name) command output will be chip based. It will not display all the nvm options specified in
           the firmware-generated nvm_option_table.h file.
        B) The 'setoption', 'getoption' and 'saveoptions' command outputs will be specific to the selected NIC controller.
        C) The NVM options may be supported for the latest-release firmware build, which consumes the latest nvm_option_table header file. For older
           firmware, the application will still get an unsupported response from the lower layer. Hence those options will not be listed in the
           supported command output of the respective commands.

        Once the debug build is validated by all the stakeholders, NICCLI will remove these limitations.

    12) The provided NICCLIF binary is not tested, and discussions are ongoing regarding BNXT-MT commands for Windows.
    13) White-space characters other than plain space (tab, etc.) are not supported as argument separators in interactive mode.
    14) As of now, the Linux sliff driver is not signed. When kernel lockdown is enabled (Eg. by secure boot), sliff.ko cannot be loaded.
        Please disable kernel lockdown to load the driver module.
        Eg. mokutil --disable-validation
                  [OR]
            echo 1 > /proc/sys/kernel/sysrq
            echo x > /proc/sysrq-trigger

    15) Interrupting/killing niccli during the middle of some operations may result in unknown/undefined/unexpected behavior.
    16) Performing some management plane operations in parallel while data path/plane and/or bnxt_en/bnxt_re module load/unload is in progress may result in
        HW errors and/or unknown/undefined/unexpected behavior.
    17) Some NIC adapters do not support install -online, as indicated by the printed error message.
    18) Fastboot firmware loaded by repaving/rescue/recovery path may cause HWRM errors(as seen in dmesg) from data path
        drivers like bnxt_en unless they are unloaded.
    19) Running niccli while Traffic Control flow operations are executed simultaneously in a different terminal may cause some HWRM requests to get
        starved or black-holed. During this time, the bnxt_en driver displays a message similar to "bnxt_en 0000:b1:00.0 ens45np0: Error (timeout: 2000015) msg {0x2da 0x5703} len:0",
        but the bnxt_en driver recovers after some retries. As it is not a harmful message, it should be ignored.
    20) 'fwcli' command's help is generated by firmware and hence, tools have no control over it.
    21) For the DCB commands, i.e. pfc, apptlv, up2tc, getqos, ets, listmap, dscp2prio and tcrlmt, to work, the user has to disable the following nvm options: "lldp_nearest_bridge",
        "lldp_nearest_non_tpmr_bridge" and "dcbx_mode".
    22) On FreeBSD, for the DCB commands to work, the user has to make sure the logical link of the interface is UP. This is a BSD layer-2 driver (if_bnxt.ko) limitation.
    23) When the niccli tool is performing any operation and the user removes the device in parallel, the user can encounter abnormal behavior in the tool output, and the
        system may sometimes crash due to the non-availability of the PCIe BARs of the removed device.
    24) After each individual command, the bnxtmt framework executes the following command sequence:
        "reset all;delay ms 1000;pci init"
        This sequence of operations should be performed either manually or by the higher-level bc.sh counterpart script for nicclif.
        A nicclif command like prbs_test may not behave exactly like its bnxtmt counterpart. This is not a limitation/bug
        of the individual nicclif command(s).
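When the bc.sh script is not used, the manual re-init in item 24 can be wrapped around each command. This is only a sketch: the tool is a stub parameter here (in real use it would be the nicclif binary), and only the sequencing is being illustrated:

```shell
# Run one command through TOOL, then issue the bnxtmt-style re-init sequence.
run_with_reinit() {
    local tool="$1"; shift
    "$tool" "$@" || return 1
    "$tool" "reset all;delay ms 1000;pci init"
}
```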
    25) Some commands may need fastboot and other initialization (like fwutil) before execution.
        This initialization sequence should be performed either manually or by the higher-level bc.sh counterpart script for nicclif.
    26) The Scrutiny-compatible command 'mem' is the counterpart of the legacy bnxtmt 'reg' command by default.
    27) It is recommended to fastboot the FW before testing niccli commands on Thor2 FPGA, as nvm boot may not be reliable on all FPGAs yet.
    28) niccli is based on the Scrutiny framework, so while displaying structure fields, the output format may differ in some scenarios.
        Eg. bnxtmt/lcdiag may display fields using under_score(_) while niccli may use CamelCase instead.
    29) A) It is better to erase the nvm (Eg. using the bnxtmt command: nvm clear) before trying the install command on FPGA, as HWRM_INSTALL_UPDATE may fail
           due to fragmentation or no-space errors.
        B) Fastboot the FPGA even before any variant of the install command.
    30) "PHY Temperature" field displayed in the device_temperature command is not supported on Thor2 adapters.
    31) To boot with the installed package from the NVM flash after the fastboot operation, the user needs to follow one of the below steps:
        A) Perform a cold boot or issue the "reset -all" command for the controller to boot from the NVM flash.
        B) For a LOM controller, a power off/on may be required for the controller to boot from the NVM flash.
    32) For "pf_for_host_mem_chdmp" as per nvm option's table allowed range, user can pass any PF index from range 0 to 15 irrespective of the number
        of PF available for that device. This nvram setting was impacting the availability of the product's service/functionality to end-user.
        Since firmware cannot afford parameter validation, application has to add it in its layer. Hence this will result in difference of behavior
        when compared with bnxtnvm for the same nvram option.
    33) Multi-host mode: Before configuring the device to multihost or singlehost, the OS drivers (bnxt_en and bnxt_re) should be disabled/removed.
        After the change from multihost to singlehost and vice versa, a mandatory cold boot is required.
    34) Error codes are supported only in online mode; interactive mode and batch mode will not have error codes displayed.
    35) If the user encounters the below error while running the niccli executable on Linux systems, please follow the below steps to avoid it:
        "/opt/niccli/niccli.x86_64: /lib/x86_64-linux-gnu/libnl-3.so.200: no version information available (required by /opt/niccli/niccli.x86_64)"
        A). Identify the libnl version installed on your build systems.
        B). Install version 3.2.28 of libnl3 (libnl3 and libnl-devel)
           a.    Source code can be found at below any one of the links
                    - https://www.infradead.org/~tgr/libnl/files/
                    - https://snapshot.debian.org/package/libnl3/3.2.21-1/
                    - https://snapshot.debian.org/package/libnl3/
                    - https://www.linuxfromscratch.org/blfs/view/7.10/basicnet/libnl.html
           b.    Untar the source code and run the below commands
                    - ./configure --prefix=/usr --sysconfdir=/etc --disable-static
                    - make
                    - make install.
           c.    Run export LD_LIBRARY_PATH=<installed library path>
                    - Check the make install logs for the location where the library is installed.
           d.    Then run the niccli
        C). If the user is fine with ignoring the "no version information available" warning, they can skip #B.
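Step c. of item 35 B) can be scripted. A minimal sketch (the helper name is hypothetical; the directory comes from the make install logs, as noted above) that prepends the installed library path without clobbering an existing LD_LIBRARY_PATH:

```shell
# Prepend a directory to LD_LIBRARY_PATH, preserving any existing value,
# then export it and echo the result.
prepend_ld_library_path() {
    local dir="$1"
    if [ -n "${LD_LIBRARY_PATH:-}" ]; then
        LD_LIBRARY_PATH="$dir:$LD_LIBRARY_PATH"
    else
        LD_LIBRARY_PATH="$dir"
    fi
    export LD_LIBRARY_PATH
    echo "$LD_LIBRARY_PATH"
}
```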
    36) During recovery/fastboot, the bnxt_re driver should be disabled/unloaded when the sliff driver is present. A user message is already provided during
        the execution of the fastboot step, as niccli assumes that any active driver instance has been removed.
    37) MAC programming (nvmmacprg command) by default programs the factory cfg region. If this needs to be overridden, the '-factory' sub-option needs to
        be used. For WH+, since there is no factory region, by default the programming will be done in the syscfg region.
    38) Some commands like add_ntuple_filter may fail, but they match the behavior of bnxtnvm.
        Such failures are due to firmware limitations.
    39) Unless the -all option is explicitly specified, the resmgmt command works based on its default mode
        (for a selected single PF or for all PFs/the entire NIC device), which is printed when it is
        executed or its help is displayed. The -all option performs the given command/operation
        for all the PFs of the given NIC device.
    40) Since the niccli tool communicates with the firmware through the L2 driver, PCIe device hot-plug events are taken care of. Also, the coexistence
        of Sliff along with the L2 (bnxt_en) driver can lead to undesirable behavior, as both claim the device.
    41) PCIe device hot-plug (add/remove) events happen at a lower level in the Linux kernel and are typically handled by PCIe subsystem
        drivers, not directly by device-specific drivers like simple character drivers; hence it is not advisable to handle them in a simple
        character driver such as the sliff driver. One of the resulting challenges is that the coexistence of Sliff along with bnxt_en can
        lead to undesirable behavior, as both claim the device. If any issues are seen due to PCIe device hot-plug events, the workaround is
        to rmmod the sliff driver and then insmod it again.
    42) The ASPM (Active State Power Management) setting for a PCIe device is generally disabled in most cases to conserve power when the PCIe link is not in use (L3 state, a lower power state). Enable ASPM in the BIOS before launching the nicclif tool to detect the NIC cards.

    VMWARE LIMITATIONS:
    -------------------
    1) In interactive mode, editing the command by moving the cursor using the left/right arrow keys will not work. The user has to re-issue the command.

    2) Logging is kept separate because it makes debugging easier for the developers. Since the framework supports storage, switch, expander and NIC
       products through different communication interfaces, the existing logging mechanism design/functionality cannot be changed for NIC alone.
       This design is well approved by all architects.

    3) Interactive mode will not be supported in niccli Plugin model.

    4) The user has to provide the "-s" (silent mode) option for the "restorefactorydefaults" command, which requires a 'Yes/No' user confirmation.

    UEFI LIMITATIONS :
    ------------------
    1) Since HWRM_NVM_WRITE does not support batch mode (breaking the FW package into 64KB chunks), the "nvmpkgprg" command cannot be supported in UEFI, as contiguous allocation of memory larger than 64KB has some limitations.
    2) Alternate command to install the package on blank card is "install -rescue <FW.pkg> -force"

    USHI LIMITATIONS :
    ------------------
    1. In Linux OS, when secure boot is enabled, the adapter configuration/query commands using niccli will not work, as mapping the PCI BAR is not allowed by the OS.
    2. Running multiple instances of niccli at the same time can result in unexpected outputs and command timeouts.
    3. In a multihost environment, race conditions can occur if more than one host attempts to utilize the USHI channel at the same time; this can result in corruption
       of the control and data registers, timeouts, etc.
    4. When the kernel configuration parameter CONFIG_IO_STRICT_DEVMEM=y is enabled and the inbox bnxt_en driver is loaded, niccli adapter configuration/query commands will not work.
       This is because niccli cannot map the PCI BAR from user space to access the hardware. Below are two workarounds for this issue:
       A) Unbind the L2 driver from the PF.
       B) Enable iomem=relaxed in grub and reboot the server.
    5. niccli adapter configuration/query commands in a guest OS or VM can cause unexpected outputs and command timeouts when the guest OS is loaded with the inbox bnxt_en
       driver and the PF is bound to the vfio-pci driver in the hypervisor and attached to the guest OS or VM. In this case, the guest OS or VM should be loaded with the out-of-box
       bnxt_en driver.
