61. VF RSS - Configuring Hash Function Tests

This document provides the test plan for verifying the Fortville feature of configuring RSS hash functions on a VF.

61.1. Prerequisites

  • 2x Intel® 82599 (Niantic) NICs (2x 10GbE full duplex optical ports per NIC)
  • 1x Fortville_eagle NIC (4x 10G)
  • 1x Fortville_spirit NIC (2x 40G)
  • 2x Fortville_spirit_single NIC (1x 40G)

One port of the 82599 is connected to the Fortville_eagle; one port of the Fortville_spirit is connected to the Fortville_spirit_single. The three kinds of Fortville NICs are the target NICs; the NICs connected to them can send packets to these three targets using scapy.

61.2. Network Traffic

The RSS feature is designed to improve networking performance by load balancing the packets received from a NIC port to multiple NIC RX queues, with each queue handled by a different logical core.

  1. The received packet is parsed into the header fields used by the hash operation (such as IP addresses, TCP port, etc.).
  2. A hash calculation is performed. The Fortville supports three hash functions: Toeplitz, simple XOR and their symmetric RSS variants.
  3. The hash result is used as an index into a 128/512-entry 'redirection table'.
  4. The Niantic VF supports only the default (simple) hash algorithm. The Fortville VF supports all hash algorithms only when the DPDK driver is used on the host; when the kernel driver is used on the host, the Fortville VF supports only the default (simple) hash algorithm.

The RSS RETA update feature is designed to make RSS more flexible by allowing users to define the correspondence between the seven LSBs of the hash result and the queue id (RSS output index) by themselves.
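
To make the mapping concrete, the following is a minimal Python sketch (not DPDK code) of how a RETA lookup selects the RX queue; the 128-entry table size, the four queues, the round-robin default layout and the example hash value are assumptions chosen to match the 1P/4Q configuration used later in this plan:

# Minimal sketch (not DPDK code) of RETA-based queue selection.
RETA_SIZE = 128                      # Niantic RETA size; Fortville uses 512
NB_QUEUES = 4                        # matches the 1P/4Q configuration below

# Assumed default layout: entries cycle round-robin over the queues.
reta = [i % NB_QUEUES for i in range(RETA_SIZE)]

def queue_for_hash(rss_hash):
    # Only the seven LSBs of the 32-bit hash index a 128-entry table.
    return reta[rss_hash & (RETA_SIZE - 1)]

print(queue_for_hash(0x1a2b3c4d))    # example hash value -> queue id 0..3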

61.2.1. Test Case: test_rss_hash

The following RX Ports/Queues configurations have to be benchmarked:

  • 1 RX port / 4 RX queues (1P/4Q)

61.3. Testpmd configuration - 4 RX/TX queues per port

testpmd -c 1f -n 3  -- -i --rxq=4 --txq=4 --tx-offloads=0x8fff

61.4. Testpmd Configuration Options

By default, a single logical core runs the test. The CPU IDs and the number of logical cores running the test in parallel can be manually set with the set corelist X,Y and the set nbcore N interactive commands of the testpmd application.
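
For example, to run the forwarding task on four logical cores (the core IDs below are illustrative and must be covered by the coremask used to launch testpmd):

testpmd command: set corelist 1,2,3,4
testpmd command: set nbcore 4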

  1. Get the PCI device IDs of the DUT ports, for example:

    ./dpdk_nic_bind.py --st
    
    0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
    0000:81:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f1 drv=i40e unused=
    
  2. Create 2 VFs from 2 PFs:

    echo 1 > /sys/bus/pci/devices/0000\:81\:00.0/sriov_numvfs
    echo 1 > /sys/bus/pci/devices/0000\:81\:00.1/sriov_numvfs
    ./dpdk_nic_bind.py --st
    
    0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
    0000:81:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f1 drv=i40e unused=
    0000:81:02.0 'XL710/X710 Virtual Function' unused=
    0000:81:0a.0 'XL710/X710 Virtual Function' unused=
    
  3. Detach VFs from the host, bind them to pci-stub driver:

    /sbin/modprobe pci-stub
    

    Using lspci -nn | grep -i ethernet, get the VF vendor and device ID, for example "8086 154c":

    echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
    echo 0000:81:02.0 > /sys/bus/pci/devices/0000:81:02.0/driver/unbind
    echo 0000:81:02.0 > /sys/bus/pci/drivers/pci-stub/bind
    
    echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
    echo 0000:81:0a.0 > /sys/bus/pci/devices/0000:81:0a.0/driver/unbind
    echo 0000:81:0a.0 > /sys/bus/pci/drivers/pci-stub/bind
    

Or use the following easier way:

virsh nodedev-detach pci_0000_81_02_0;
virsh nodedev-detach pci_0000_81_0a_0;

./dpdk_nic_bind.py --st

0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
0000:81:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f1 drv=i40e unused=
0000:81:02.0 'XL710/X710 Virtual Function' if= drv=pci-stub unused=
0000:81:0a.0 'XL710/X710 Virtual Function' if= drv=pci-stub unused=

It can be seen that the drv of VFs 81:02.0 & 81:0a.0 is now pci-stub.

  4. Passthrough VFs 81:02.0 & 81:0a.0 to vm0, and start vm0:

    /usr/bin/qemu-system-x86_64  -name vm0 -enable-kvm \
    -cpu host -smp 4 -m 2048 -drive file=/home/image/sriov-fc20-1.img -vnc :1 \
    -device pci-assign,host=81:02.0,id=pt_0 \
    -device pci-assign,host=81:0a.0,id=pt_1
    
  5. Log in to vm0, get the VF PCI device IDs in vm0 (assume they are 00:06.0 & 00:07.0), bind them to the igb_uio driver, and then start testpmd, setting it in mac forward mode:

    ./tools/dpdk_nic_bind.py --bind=igb_uio 00:06.0 00:07.0
    
  6. RETA configuration. Configure the 128 RETA entries:

    testpmd command: port config 0 rss reta (hash_index,queue_id)
    
  7. PMD forwarding only receives the packets:

    testpmd command: set fwd rxonly
    
  8. RSS received packet type configuration. Configure the received packet type (ip, udp or tcp):

    testpmd command: port config 0 rss ip/udp/tcp
    
  9. Verbose configuration:

    testpmd command: set verbose 8
    
  10. Start packet receive:

    testpmd command: start
    
  11. Send packets and check that the RX port receives them on different queues. Each hash type requires different packets; for example, when the hash type is ip, send packets whose src and dst IP addresses are swapped (a sketch for other hash types follows this list):

    sendp([Ether(dst="90:e2:ba:36:99:3c")/IP(src="192.168.0.4", dst="192.168.0.5")], iface="eth3")
    sendp([Ether(dst="90:e2:ba:36:99:3c")/IP(src="192.168.0.5", dst="192.168.0.4")], iface="eth3")
    

61.4.1. Test Case: test_reta

This case tests the hash RETA table. The test steps are the same as in test_rss_hash, except for configuring the hash RETA table.

Before sending packets, configure the hash RETA. Configure the 512 RETA entries (the Niantic NIC has 128 RETA entries):

testpmd command: port config 0 rss reta (hash_index,queue_id)
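
As an illustration of filling the whole table, the short Python snippet below builds such a port config command that spreads the 512 RETA entries round-robin over the four RX queues; the entry count, the queue count and the layout are assumptions for this sketch, and the accepted syntax should be checked against testpmd's help:

# Sketch: generate a testpmd "port config 0 rss reta" command mapping 512
# RETA entries round-robin onto 4 queues (illustrative values only).
RETA_SIZE = 512
NB_QUEUES = 4

pairs = ",".join("({},{})".format(i, i % NB_QUEUES) for i in range(RETA_SIZE))
print("port config 0 rss reta " + pairs)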