Minimum vSphere version for NSX-T 2.4
- vCenter 6.7U1b
- ESXi 6.7U1 with patch ESXi670-201901001
Add Compute Manager (vCenter)
Log in to NSX Manager and, under System | Fabric | Compute Managers, click +ADD
Insert the vCenter information and credentials, then click ADD
Accept the vCenter thumbprint
After a few seconds the vCenter will be registered
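The same registration can also be scripted against the NSX-T Manager REST API. This is only a minimal sketch: the manager address, credentials, vCenter FQDN and thumbprint below are placeholder lab values, and it assumes basic authentication against the manager.

```python
import requests

NSX = "https://nsx-manager.lab.local"        # placeholder NSX Manager address
AUTH = ("admin", "VMware1!VMware1!")         # placeholder credentials

payload = {
    "display_name": "vcenter",
    "server": "vcenter.lab.local",           # placeholder vCenter FQDN
    "origin_type": "vCenter",
    "credential": {
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "administrator@vsphere.local",
        "password": "VMware1!",
        # SHA-256 thumbprint of the vCenter certificate (the value the UI asks you to accept)
        "thumbprint": "AA:BB:CC:...",
    },
}

# Register the vCenter as a Compute Manager
resp = requests.post(f"{NSX}/api/v1/fabric/compute-managers",
                     auth=AUTH, verify=False, json=payload)   # verify=False: lab with self-signed cert
resp.raise_for_status()
print("Compute Manager id:", resp.json()["id"])
```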
Create a new Transport Zone for Overlay Network
Under System | Fabric | Transport Zones click +ADD
Insert/Select
- Transport Zone name
- N-VDS name
- Leave default for Host Membership Criteria (Standard)
- Leave default for Traffic Type (Overlay)
The Enhanced Datapath Host Membership Criteria can be used only with vSphere 6.7 hosts and is based on DPDK libraries. Avoid DPDK-based features if you run a nested lab on Ryzen processors: I had some problems in my AMD-based lab during the Edge VM deployment, which uses these libraries (see the next posts).
The new TZ is deployed successfully
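For reference, the equivalent Manager API call is sketched below; the manager address and credentials are placeholder lab values, and host_switch_mode STANDARD corresponds to the Standard Host Membership Criteria chosen above.

```python
import requests

NSX = "https://nsx-manager.lab.local"        # placeholder NSX Manager address
AUTH = ("admin", "VMware1!VMware1!")         # placeholder credentials

payload = {
    "display_name": "TZ-Overlay",
    "host_switch_name": "NVDS-Overlay",
    "transport_type": "OVERLAY",             # traffic type: Overlay
    "host_switch_mode": "STANDARD",          # Standard host membership (not Enhanced Datapath)
}

resp = requests.post(f"{NSX}/api/v1/transport-zones",
                     auth=AUTH, verify=False, json=payload)
resp.raise_for_status()
print("Transport Zone id:", resp.json()["id"])
```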
Create a new Uplink Profile for a multi-VTEP setup
Under System | Profiles | Uplink Profiles click +ADD
I need an Uplink Profile that supports a multi-VTEP deployment and uses VLAN 113 (my underlay network).
- Type a name, in my case nsx-esxi-multi-vteps-uplink-profile (note: multi-VTEP is supported only in vSphere environments).
- Select [Default Teaming] and change the Teaming Policy from Failover to LBS, then click the Active Uplinks field and add “uplink-2”
- Under Transport VLAN type 113 (in my case)
- Modify the MTU value as you prefer or leave it blank (the default is 1600)
- Click ADD
The new Uplink Profile is created.
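In API terms this is an UplinkHostSwitchProfile; a minimal sketch is below. The manager address and credentials are placeholders, and the LBS teaming policy from the UI maps to LOADBALANCE_SRCID.

```python
import requests

NSX = "https://nsx-manager.lab.local"        # placeholder NSX Manager address
AUTH = ("admin", "VMware1!VMware1!")         # placeholder credentials

payload = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "nsx-esxi-multi-vteps-uplink-profile",
    "teaming": {
        "policy": "LOADBALANCE_SRCID",       # "LBS" in the UI: load balance by source port ID
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},   # two active uplinks -> two VTEPs
        ],
    },
    "transport_vlan": 113,                   # my underlay VLAN
    "mtu": 1600,
}

resp = requests.post(f"{NSX}/api/v1/host-switch-profiles",
                     auth=AUTH, verify=False, json=payload)
resp.raise_for_status()
print("Uplink Profile id:", resp.json()["id"])
```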
Create a VTEP IP Pool for ESXi Hosts
Under Networking | IP Address Management | IP Address Pools click the ADD IP ADDRESS POOL button
- Type an IP range in the IP Ranges field (in my case I need 6 IPs: a vSphere cluster with 3 hosts and 2 VTEPs per host)
- Type the network/netmask
- Type a default GW IP
- Click ADD and then APPLY
VTEP IP POOL done!
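If you prefer the Manager API, a pool with one subnet and one allocation range can be created as sketched below. The subnet, range and gateway are example values standing in for my VLAN 113 underlay; replace them with your own addressing, and the manager address/credentials are placeholders as before.

```python
import requests

NSX = "https://nsx-manager.lab.local"        # placeholder NSX Manager address
AUTH = ("admin", "VMware1!VMware1!")         # placeholder credentials

payload = {
    "display_name": "ESXI_VTEP_IP_POOL",
    "subnets": [
        {
            "cidr": "192.168.113.0/24",      # example underlay network on VLAN 113
            "gateway_ip": "192.168.113.1",   # example default gateway
            "allocation_ranges": [
                # 6 addresses: 3 hosts x 2 VTEPs each
                {"start": "192.168.113.11", "end": "192.168.113.16"}
            ],
        }
    ],
}

resp = requests.post(f"{NSX}/api/v1/pools/ip-pools",
                     auth=AUTH, verify=False, json=payload)
resp.raise_for_status()
print("IP Pool id:", resp.json()["id"])
```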
Create a Transport Node Profile
Under System | Fabric | Profiles | Transport Node Profile click +ADD
- Type a Transport Node Profile name –> TNP-esxi
- Select TZ-Overlay and move it to the “Selected” box
- Click the N-VDS button to insert all the NSX objects created in the previous steps
In the New Node Switch box select:
- N-VDS –> NVDS-Overlay (TZ-Overlay will be automatically selected)
- NIOC Profile –> select the default profile
- Uplink Profile –> nsx-esxi-multi-vteps-uplink-profile
- LLDP Profile –> LLDP [Send Packet Disabled] (or select as you need)
- IP Assignment –> Use IP Pool
- IP Pool –> ESXI_VTEP_IP_POOL
- Physical NICs –> type vmnic2 for uplink-1 and vmnic3 for uplink-2
- ADD
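For completeness, here is a minimal API sketch of the same Transport Node Profile. The three object IDs are placeholders you would take from the Transport Zone, Uplink Profile and IP Pool created above, and the manager address/credentials are lab placeholders.

```python
import requests

NSX = "https://nsx-manager.lab.local"        # placeholder NSX Manager address
AUTH = ("admin", "VMware1!VMware1!")         # placeholder credentials

payload = {
    "display_name": "TNP-esxi",
    "transport_zone_endpoints": [
        {"transport_zone_id": "<TZ-Overlay-id>"}              # placeholder: id of TZ-Overlay
    ],
    "host_switch_spec": {
        "resource_type": "StandardHostSwitchSpec",
        "host_switches": [
            {
                "host_switch_name": "NVDS-Overlay",
                "host_switch_profile_ids": [
                    # placeholder: id of nsx-esxi-multi-vteps-uplink-profile
                    {"key": "UplinkHostSwitchProfile", "value": "<uplink-profile-id>"}
                ],
                "pnics": [
                    {"device_name": "vmnic2", "uplink_name": "uplink-1"},
                    {"device_name": "vmnic3", "uplink_name": "uplink-2"},
                ],
                "ip_assignment_spec": {
                    "resource_type": "StaticIpPoolSpec",
                    "ip_pool_id": "<ESXI_VTEP_IP_POOL-id>"    # placeholder: id of the VTEP pool
                },
            }
        ],
    },
}

resp = requests.post(f"{NSX}/api/v1/transport-node-profiles",
                     auth=AUTH, verify=False, json=payload)
resp.raise_for_status()
print("Transport Node Profile id:", resp.json()["id"])
```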
OK, now we are ready to prepare the vSphere infrastructure
NSX-T preparation
Under System | Fabric | Nodes | Host Transport Nodes, open the Managed By drop-down menu and select the right vCenter; all vSphere clusters managed by the Compute Manager will be shown. Select Compute-Cluster and click the CONFIGURE NSX button
Select the TNP-esxi profile just created and click SAVE
After a few minutes all the ESXi hosts in the vSphere cluster will be ready
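Behind the scenes, CONFIGURE NSX attaches the profile to the cluster’s compute collection through a Transport Node Collection. A hedged sketch of the same call is below; the manager address, credentials and TNP id are placeholders, and the cluster is looked up by its display name.

```python
import requests

NSX = "https://nsx-manager.lab.local"        # placeholder NSX Manager address
AUTH = ("admin", "VMware1!VMware1!")         # placeholder credentials

# Find the compute collection that represents the Compute-Cluster vSphere cluster
collections = requests.get(f"{NSX}/api/v1/fabric/compute-collections",
                           auth=AUTH, verify=False).json()["results"]
cluster = next(c for c in collections if c["display_name"] == "Compute-Cluster")

payload = {
    "display_name": "Compute-Cluster-NSX",
    "compute_collection_id": cluster["external_id"],
    "transport_node_profile_id": "<TNP-esxi-id>",   # placeholder: id of TNP-esxi
}

resp = requests.post(f"{NSX}/api/v1/transport-node-collections",
                     auth=AUTH, verify=False, json=payload)
resp.raise_for_status()
```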
vSphere Infrastructure checks
Selecting an ESXi host in vCenter, we can verify the N-VDS deployment
vmnic2 and vmnic3 are in use, as defined in the Transport Node Profile (TNP-esxi)
A new custom TCP/IP stack is created on the ESXi host (it is named vxlan, although the overlay actually uses Geneve)
The VTEP interfaces are not displayed in the vSphere Client, but you can verify the configuration from esxcli.
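For example, with SSH enabled on the host you can list the VTEP vmkernel interfaces (typically vmk10 and vmk11 on the vxlan netstack) with a couple of esxcli commands; the sketch below just wraps them in Python/paramiko, with placeholder host and credentials.

```python
import paramiko

ESXI_HOST = "esxi01.lab.local"               # placeholder ESXi host; SSH must be enabled

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(ESXI_HOST, username="root", password="VMware1!")   # placeholder credentials

for cmd in (
    "esxcli network ip interface list --netstack vxlan",   # VTEP vmk interfaces on the vxlan netstack
    "esxcli network ip interface ipv4 get",                 # IPv4 config, including the VTEP addresses
):
    _, stdout, _ = client.exec_command(cmd)
    print(f"### {cmd}\n{stdout.read().decode()}")

client.close()
```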