NSX-T: Part 2 – Deploy and Clusterize NSX-T Manager


Deploy First NSX-T Manager

 

manager1-1

Set the VM name and select the datacenter

manager2-1

Select the compute resource that will host the NSX Manager appliance

manager3-1

manager4-1

Set NSX Manager size.
Select the ExtraSmall deployment if you want to set up the NSX Manager Cluster in manual mode, or the Small (or larger) form factor to use the configuration wizard: the wizard only allows the Small size and above.

Tip: in a lab environment, select the ExtraSmall size and deploy the additional NSX Managers in manual mode to save resources (you need 3x8 GB of RAM for the entire cluster).

manager5-1

Select storage

manager6-1

Select the appliance management network

manager7-1

Set the following parameters (the same values can also be supplied unattended; see the ovftool sketch after this list):

  • System Root User Password
  • CLI “admin” User Password
  • CLI “audit” User Password
  • CLI “admin” username (default: admin)
  • CLI “audit” username (default: audit)
  • Hostname
  • Rolename (nsx-manager nsx-controller)
  • Default IPv4 Gateway
  • Management Network IPv4 Address
  • Management Network Netmask
  • DNS Server list
  • Domain Search List
  • NTP Server List
  • Enable SSH (Optional)
  • Allow root SSH logins (Optional)

Leave the other fields blank.
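
If you prefer an unattended deployment, the same parameters can be passed to ovftool on the command line. This is only a hedged sketch: the OVF property keys (nsx_passwd_0, nsx_hostname, and so on), the deploymentOption value and the network name are assumptions based on the unified appliance OVA, so list them first by probing the OVA with ovftool and adjust before running. Angle-bracket values are placeholders for your environment.

# Hedged sketch: unattended deployment of an NSX Manager appliance with ovftool.
# Property keys and deployment option names are assumptions; verify them with
# "ovftool nsx-unified-appliance-<version>.ova" before running.
ovftool --name=nsx-s1-01 --deploymentOption=extra_small \
  --datastore=<datastore> --network=<mgmt-portgroup> --diskMode=thin \
  --acceptAllEulas --allowExtraConfig --powerOn \
  --prop:nsx_hostname=nsx-s1-01.<domain> \
  --prop:nsx_ip_0=192.168.10.21 --prop:nsx_netmask_0=<netmask> \
  --prop:nsx_gateway_0=<gateway> \
  --prop:nsx_dns1_0=<dns-server> --prop:nsx_domain_0=<domain> \
  --prop:nsx_ntp_0=<ntp-server> \
  --prop:nsx_passwd_0='<root-password>' \
  --prop:nsx_cli_passwd_0='<admin-password>' \
  --prop:nsx_cli_audit_passwd_0='<audit-password>' \
  --prop:nsx_isSSHEnabled=True --prop:nsx_allowSSHRootLogin=False \
  --prop:nsx_role="nsx-manager nsx-controller" \
  nsx-unified-appliance-<version>.ova \
  'vi://<vcenter-user>@<vcenter>/<datacenter>/host/<cluster>'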

manager8-1-1

manager8-2-1

manager9-1

After the appliance deployment, log in to NSX Manager (https://<NSX Manager IP>) and accept the EULA.
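
This step can also be scripted against the REST API; a minimal sketch with curl, assuming the admin credentials set during deployment and the EULA endpoints as I recall them from the NSX-T API (verify both paths against your version's API guide):

# Check whether the EULA has been accepted (assumed endpoint: GET /api/v1/eula/acceptance)
curl -k -u admin:'<admin-password>' https://192.168.10.21/api/v1/eula/acceptance

# Accept the EULA (assumed endpoint: POST /api/v1/eula/accept)
curl -k -u admin:'<admin-password>' -X POST https://192.168.10.21/api/v1/eula/accept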

manager10-1

Skip the wizard

manager11-1

NSX Manager is ready in trial mode for 60 days with full features enabled.

manager12-1

manager13-1

Clustering NSX Manager

Now we can set up an NSX Manager Cluster (in manual mode).

Repeat the previous appliance deployment steps for 2 additional NSX Managers; we need a total of 4 FQDNs with reverse lookup to finish the configuration.

In my environment:

nsx-s1-01 192.168.10.21 – 1st Manager
nsx-s1-02 192.168.10.22 – 2nd Manager
nsx-s1-03 192.168.10.23 – 3rd Manager
nsx-s1-vip 192.168.10.24 – VIP (needed only for the User Interface (UI) service)
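
Before clustering, it is worth confirming that all four records resolve in both directions. A quick check from any management host with nslookup (names and IPs are the ones above):

# Forward lookups – each name must return its IP
nslookup nsx-s1-01
nslookup nsx-s1-02
nslookup nsx-s1-03
nslookup nsx-s1-vip

# Reverse lookups – each IP must return its FQDN
nslookup 192.168.10.21
nslookup 192.168.10.22
nslookup 192.168.10.23
nslookup 192.168.10.24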

Log in to the 1st NSX Manager as the admin user (SSH or local console) and run:

#To retrieve certificate thumbprint
get certificate api thumbprint

#To retrieve Cluster ID
get cluster running

clu2-1
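
The same two values can also be collected without the NSX CLI; a hedged sketch using openssl and the REST API, assuming GET /api/v1/cluster returns the cluster_id (check the endpoint against your NSX-T version). The join command expects the SHA-256 thumbprint in plain lower-case hex, hence the cleanup at the end.

# Cluster ID via the REST API (assumed endpoint: GET /api/v1/cluster)
curl -k -u admin:'<admin-password>' https://192.168.10.21/api/v1/cluster

# API certificate thumbprint computed externally with openssl
# (SHA-256 fingerprint with colons stripped and lower-cased, to match the CLI output)
echo | openssl s_client -connect 192.168.10.21:443 2>/dev/null \
  | openssl x509 -noout -fingerprint -sha256 \
  | cut -d'=' -f2 | tr -d ':' | tr 'A-F' 'a-f'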

Log in to the 2nd NSX Manager as the admin user (SSH or local console) and run:

#To join the NSX Manager to the cluster
join <IP of 1st NSX Manager> cluster-id <Cluster ID> username admin password <Admin Password> thumbprint <Thumbprint of 1st Manager>

clu3-1

Wait 5-10 minutes for the node to join, the configuration to sync, and the cluster to stabilize.

Log in to the 3rd NSX Manager as the admin user (SSH or local console) and run:

#To join the NSX Manager to the cluster
join <IP of 1st NSX Manager> cluster-id <Cluster ID> username admin password <Admin Password> thumbprint <Thumbprint of 1st Manager>

clu4-1

Wait 5-10 minutes for the node to join, the configuration to sync, and the cluster to stabilize.

Now, in the System | Overview page, we can verify the NSX Manager Cluster status.

clu5-1

Stable, and all nodes are displayed correctly!

Run a few more commands to check the cluster status:

get cluster config
get cluster status
get managers
get node
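
While waiting for the cluster to converge, the same check can be scripted against the REST API. A minimal sketch, assuming GET /api/v1/cluster/status exposes the management cluster state and that jq is installed (verify the endpoint and the JSON field names against your NSX-T version):

# Poll the management cluster status until it reports STABLE
# (endpoint and field names are assumptions; adjust to your API version)
while true; do
  STATE=$(curl -sk -u admin:'<admin-password>' \
    https://192.168.10.21/api/v1/cluster/status \
    | jq -r '.mgmt_cluster_status.status')
  echo "Management cluster status: ${STATE}"
  [ "${STATE}" = "STABLE" ] && break
  sleep 30
done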


Setup VIP IP for User Interface (UI) Service

In the System | Overview page, click the Edit button.

clu9-1

Insert the VIP address (in my case 192.168.10.24).
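
Alternatively, the virtual IP can be set with a single REST call; a hedged sketch with curl, where both the api-virtual-ip path and the set_virtual_ip action are recalled from the NSX-T cluster API and should be confirmed against your version's API guide:

# Set the cluster virtual IP (assumed endpoint: POST /api/v1/cluster/api-virtual-ip?action=set_virtual_ip)
curl -k -u admin:'<admin-password>' -X POST \
  "https://192.168.10.21/api/v1/cluster/api-virtual-ip?action=set_virtual_ip&ip_address=192.168.10.24"

# Read the configured VIP back (assumed endpoint: GET /api/v1/cluster/api-virtual-ip)
curl -k -u admin:'<admin-password>' https://192.168.10.21/api/v1/cluster/api-virtual-ip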

Now the Virtual IP 192.168.10.24 is associated with 192.168.10.22 (the 2nd NSX Manager).

clu13-1

Last step: try to log in with the VIP address.

clu14-1
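
A quick command-line check that the VIP answers before (or instead of) testing the UI login; a minimal sketch, assuming GET /api/v1/node returns the basic node properties (verify against your NSX-T version):

# The VIP should answer with the properties of the node currently holding it
curl -k -u admin:'<admin-password>' https://192.168.10.24/api/v1/node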

The configuration is done!

