NSX-T: Part 5 – KVM Fabric integration


 

Lab environment (nested)

Perform a CentOS 7.6 installation with 4 vNICs and expose hardware-assisted virtualization to the guest OS.
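Since this lab is nested, it is worth confirming that hardware-assisted virtualization is actually exposed to the guest before going any further; a non-zero count means the vmx (Intel) or svm (AMD) CPU flag is visible:

egrep -c '(vmx|svm)' /proc/cpuinfo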


 

Host preparation

#Disable SELinux
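A minimal way to do this on CentOS 7, both at runtime and persistently across reboots:

setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config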

#Modify YUM configuration to avoid any unsupported package upgrades

echo 'exclude=kernel* redhat-release* kubelet-* kubeadm-* kubectl-* docker-*' >> /etc/yum.conf

#Install additional packages

yum groupinstall "Virtualization Hypervisor"
yum groupinstall "Virtualization Client"
yum groupinstall "Virtualization Platform"
yum groupinstall "Virtualization Tools"
yum install guestfish
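Depending on the group contents, the libvirt daemon may not be enabled automatically; a quick way to make sure it is up (these service names are standard on CentOS 7):

systemctl enable libvirtd
systemctl start libvirtd
virsh list --all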

#Verify KVM module (kvm_intel or kvm_amd)

[root@cen-s1-20 ~]# lsmod | grep kvm
kvm_amd 2177212 0
kvm 586948 1 kvm_amd
irqbypass 13503 1 kvm

 

Network environment

The ens161 & ens192 NICs will be configured in a bond on VLAN 110, while ens224 & ens256 will be used by the N-VDS/Overlay (on VLAN 113).
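As a reference, a minimal active-backup bond on CentOS 7 can be built with ifcfg files like the sketch below. The bond name, IP addressing and gateway are assumptions for this lab (VLAN 110 tagging is assumed to be handled by the underlying port group); ens224 and ens256 are deliberately left without any IP configuration so NSX-T can claim them later.

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup miimon=100"
BOOTPROTO=none
IPADDR=192.168.110.20   # hypothetical management IP on VLAN 110
PREFIX=24
GATEWAY=192.168.110.1   # hypothetical gateway
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-ens161 (same for ens192)
DEVICE=ens161
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes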

KVM Fabric Node

 

Create a new Transport Zone for Overlay Network

Only needed for a fresh installation, see NSX-T: Part 4.
Skip this step if you need to integrate an existing vSphere Fabric.

 

Create a new Uplink Profile for KVM Hosts

Under System | Fabric | Profiles | Uplink Profiles, click +ADD


 

I need an Uplink Profile with VLAN support (113 – my underlay network).

  • Type a name, in my case nsx-kvm-single-vtep-uplink-profile
  • Select [Default Teaming] and enter "uplink-2" in the Standby Uplinks field
  • Under Transport VLAN type 113 (in my case)
  • Modify the MTU value as you prefer or leave it blank (the default is 1600)
  • Click ADD


The new Uplink Profile will be created
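For reference, the same profile can also be created through the NSX-T Manager REST API. This is only a sketch assuming the NSX-T 2.x API (POST /api/v1/host-switch-profiles); adjust the manager address and credentials to your environment:

curl -k -u admin -X POST https://<nsx-manager>/api/v1/host-switch-profiles \
  -H 'Content-Type: application/json' -d '{
  "resource_type": "UplinkHostSwitchProfile",
  "display_name": "nsx-kvm-single-vtep-uplink-profile",
  "transport_vlan": 113,
  "mtu": 1600,
  "teaming": {
    "policy": "FAILOVER_ORDER",
    "active_list":  [ { "uplink_name": "uplink-1", "uplink_type": "PNIC" } ],
    "standby_list": [ { "uplink_name": "uplink-2", "uplink_type": "PNIC" } ]
  }
}'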


 

Create a VTEP IP Pool for KVM Hosts

Under Networking | IP Address Management | IP ADDRESS POOLS, click ADD IP ADDRESS POOLS


 

Type a name and click the "Set" button on the right


 

  • Type an IP range in the IP Ranges field (I only need one IP, but I will configure two for future use)
  • Type the network/netmask
  • Type a default GW IP
  • Click ADD, APPLY and SAVE
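Again purely as a sketch, the equivalent NSX-T 2.x REST call would target POST /api/v1/pools/ip-pools; the range and gateway below are placeholders for this lab, not values taken from the UI:

curl -k -u admin -X POST https://<nsx-manager>/api/v1/pools/ip-pools \
  -H 'Content-Type: application/json' -d '{
  "display_name": "KVM_VTEP_IP_POOL",
  "subnets": [ {
    "cidr": "192.168.113.0/24",
    "gateway_ip": "192.168.113.1",
    "allocation_ranges": [ { "start": "192.168.113.20", "end": "192.168.113.21" } ]
  } ]
}'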


 

VTEP IP Pool ready!


 

NSX-T Preparation

Under System | Fabric | Nodes | Host Transport Nodes, select Standalone Hosts from the "Managed by" drop-down menu, then click +ADD


 

Insert the KVM node information and accept the thumbprint by clicking the NEXT button


  • Select N-VDS –> NVDS-Overlay (TZ-Overlay will be automatically selected)
  • Uplink Profile –> nsx-kvm-single-vtep-uplink-profile
  • LLDP Profile –> LLDP [Send Packet Disabled] (or select as you need)
  • IP Assignment –> Use IP Pool
  • IP Pool –> KVM_VTEP_IP_POOL
  • Physical NICs –> type ens224 for uplink-1 and ens256 for uplink-2
  • Click ADD


After a few minutes the KVM node will be ready
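Before moving on, you can also verify control-plane connectivity directly from the KVM host. This assumes host preparation installed the nsxcli utility, as it normally does on NSX-T fabric nodes; launch nsxcli and run:

get managers
get controllers

Both commands should list the Manager and Controller IPs together with their connection state.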


 

KVM Infrastructure checks

After the NSX deployment, two new network "objects" will be created on the KVM node; launch ifconfig or ip a and you will see that the VTEP is configured.

hyperbus: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::f81e:aaff:fe1c:370  prefixlen 64  scopeid 0x20<link>
        ether fa:1e:aa:1c:03:70  txqueuelen 1000  (Ethernet)
        RX packets 8  bytes 656 (656.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8  bytes 656 (656.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


nsx-vtep0.0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1600
        inet 192.168.113.20  netmask 255.255.255.0  broadcast 192.168.113.255
        inet6 fe80::180c:b2ff:fe58:14c9  prefixlen 64  scopeid 0x20<link>
        ether 1a:0c:b2:58:14:c9  txqueuelen 1000  (Ethernet)
        RX packets 45  bytes 2816 (2.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 30  bytes 2220 (2.1 KiB)
        TX errors 70  dropped 0 overruns 0  carrier 0  collisions 0

 

Try to ping the ESXi VTEP

[root@cen-s1-20 network-scripts]# ping 192.168.113.10
PING 192.168.113.10 (192.168.113.10) 56(84) bytes of data.
64 bytes from 192.168.113.10: icmp_seq=1 ttl=64 time=1.47 ms
64 bytes from 192.168.113.10: icmp_seq=2 ttl=64 time=1.00 ms
^C
--- 192.168.113.10 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 1.003/1.236/1.470/0.236 ms
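A plain ping only proves reachability; since the overlay needs at least a 1600-byte MTU end to end, it is also worth sending a non-fragmentable packet that fills the whole frame (1572 bytes of ICMP payload + 28 bytes of IP/ICMP headers = 1600):

ping -M do -s 1572 192.168.113.10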

 

To check the N-VDS configuration, launch the following command; you can see the uplink details, the VLAN (tag: 113), the failover mode…

[root@cen-s1-20 network-scripts]# ovs-vsctl show
59c7c4c4-40d5-4014-9bb5-853a20fbb6af
    Manager "unix:/var/run/vmware/nsx-agent/nsxagent_ovsdb.sock"
        is_connected: true
    Bridge "nsx-switch.0"
        Controller "unix:/var/run/vmware/nsx-agent/nsxagent_vswitchd.sock"
            is_connected: true
        fail_mode: standalone
        Port "nsx-switch.0"
            Interface "nsx-switch.0"
                type: internal
        Port "nsx-uplink.0"
            Interface "ens224"
            Interface "ens256"
        Port "nsx-vtep0.0"
            tag: 113
            Interface "nsx-vtep0.0"
                type: internal
    Bridge nsx-managed
        Controller "unix:/var/run/vmware/nsx-agent/nsxagent_vswitchd.sock"
            is_connected: true
        Controller "unix:/run/nsx-vdpi/vdpi.sock"
            is_connected: true
        fail_mode: secure
        Port nsx-managed
            Interface nsx-managed
                type: internal
        Port hyperbus
            Interface hyperbus
    ovs_version: "2.10.2.rhel76.12344490"

With the following commands you can list the bridges and the OVS ports…

[root@cen-s1-20 network-scripts]# ovs-vsctl list-br
nsx-managed
nsx-switch.0

[root@cen-s1-20 network-scripts]# ovs-vsctl list-ports nsx-switch.0
nsx-uplink.0
nsx-vtep0.0
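Since the nsx-uplink.0 port bundles ens224 and ens256, OVS is treating them as a bond; you can inspect its state (active slave, failover details) with a standard OVS command, whose output will vary with your environment:

ovs-appctl bond/show nsx-uplink.0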

 
