Dual Site HA PSC – Part 1a

Load Balancer installation, Site 1 and Site 2

The following procedure shows only the Site 1 setup; Site 2 is a mirror image, except for the IPs and FQDNs of the Load Balancers.

Site 1
LB#1       lb-s1-01.nvlabs.local        192.168.10.6
LB#2       lb-s1-02.nvlabs.local        192.168.10.7
PSC VIP    lb-psc-s1-01.nvlabs.local    192.168.10.8

Site 2
LB#1       lb-s2-01.nvlabs.local        192.168.20.6
LB#2       lb-s2-02.nvlabs.local        192.168.20.7
PSC VIP    lb-psc-s2-01.nvlabs.local    192.168.20.8

The four CentOS servers that will become the Load Balancers were installed in minimal mode, and SELinux was disabled.
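
For reference, SELinux was disabled roughly as follows on each node (a minimal sketch: setenforce only switches the running system to permissive, while the config change takes effect at the next reboot):
# setenforce 0
# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config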

Install the cluster components
# yum install pcs pacemaker corosync fence-agents-all haproxy

 

Disable the firewall on both nodes
[root@lb-s1-01 ~]# systemctl stop firewalld
[root@lb-s1-01 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.

[root@lb-s1-02 ~]# systemctl stop firewalld
[root@lb-s1-02 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
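
Disabling firewalld keeps the lab simple. If you prefer to leave it running, an alternative (not used in this guide) is to open the cluster ports plus whatever frontend ports your HAProxy configuration will listen on, for example:
[root@lb-s1-01 ~]# firewall-cmd --permanent --add-service=high-availability
[root@lb-s1-01 ~]# firewall-cmd --permanent --add-port=443/tcp
[root@lb-s1-01 ~]# firewall-cmd --reload
The high-availability service covers corosync, pacemaker and pcsd; 443/tcp is only an example of an HAProxy frontend port.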

 

Set a password for the hacluster user on both nodes
[root@lb-s1-01 ~]# passwd hacluster
[root@lb-s1-02 ~]# passwd hacluster
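
If you are scripting the setup, a non-interactive variant should also work (the password below is just a placeholder, use your own):
[root@lb-s1-01 ~]# echo 'ChangeMe123!' | passwd --stdin hacluster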

 

Start the pcsd service on both nodes and enable it at boot
[root@lb-s1-01 ~]# systemctl start pcsd.service
[root@lb-s1-01 ~]# systemctl enable pcsd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.

[root@lb-s1-02 ~]# systemctl start pcsd.service
[root@lb-s1-02 ~]# systemctl enable pcsd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.
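
Optionally, confirm that pcsd is up and listening on its default TCP port 2224 on each node:
[root@lb-s1-01 ~]# ss -tlnp | grep 2224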

 

Authenticate the hacluster user at the cluster level (from one node only)
[root@lb-s1-01 ~]# pcs cluster auth lb-s1-01 lb-s1-02
Username: hacluster
Password:
lb-s1-01: Authorized
lb-s1-02: Authorized

 

Create the cluster
[root@lb-s1-01 ~]# pcs cluster setup --start --name lb-cluster01 lb-s1-01 lb-s1-02
Destroying cluster on nodes: lb-s1-01, lb-s1-02…
lb-s1-01: Stopping Cluster (pacemaker)…
lb-s1-02: Stopping Cluster (pacemaker)…
lb-s1-02: Successfully destroyed cluster
lb-s1-01: Successfully destroyed cluster

Sending cluster config files to the nodes…
lb-s1-01: Succeeded
lb-s1-02: Succeeded

Starting cluster on nodes: lb-s1-01, lb-s1-02…
lb-s1-02: Starting Cluster…
lb-s1-01: Starting Cluster…

Synchronizing pcsd certificates on nodes lb-s1-01, lb-s1-02…
lb-s1-01: Success
lb-s1-02: Success

Restarting pcsd on the nodes in order to reload the certificates…
lb-s1-01: Success
lb-s1-02: Success
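
At this point you can also confirm that both nodes joined the corosync membership, for example with:
[root@lb-s1-01 ~]# pcs status corosync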

 

Enable the cluster services at boot on all nodes
[root@lb-s1-01 ~]# pcs cluster enable --all
lb-s1-01: Cluster Enabled
lb-s1-02: Cluster Enabled

 

Check the cluster status
[root@lb-s1-01 ~]# pcs cluster status
Cluster Status:
Stack: corosync
Current DC: lb-s1-02 (version 1.1.15-11.el7_3.4-e174ec8) – partition with quorum
Last updated: Wed Apr 8 15:07:25 2017 Last change: Wed Apr 8 15:04:29 2017 by hacluster via crmd on lb-s1-02
2 nodes and 0 resources configured

PCSD Status:
lb-s1-01: Online
lb-s1-02: Online

 

Disable STONITH, since this is a test environment and we don't need fencing
[root@lb-s1-01 ~]# pcs property set stonith-enabled=false

 

Check the STONITH status
[root@lb-s1-01 ~]# pcs property list --all | grep stoni
stonith-action: reboot
stonith-enabled: false
stonith-timeout: 60s
stonith-watchdog-timeout: (null)
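
On a two-node lab cluster you may also want to relax the quorum policy so resources keep running when one node is down; recent pcs versions already handle this through the two_node corosync option, so treat the following as optional:
[root@lb-s1-01 ~]# pcs property set no-quorum-policy=ignore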

 

Stop and disable haproxy on both nodes, since the cluster will manage it
[root@lb-s1-01 ~]# systemctl disable haproxy
[root@lb-s1-01 ~]# systemctl stop haproxy

[root@lb-s1-02 ~]# systemctl disable haproxy
[root@lb-s1-02 ~]# systemctl stop haproxy

 

Set up the two cluster resources (one IP resource and one resource for haproxy)
[root@lb-s1-01 ~]# pcs resource create LB-VIP01 IPaddr2 ip=192.168.10.8 cidr_netmask=24 --group LB-Cluster-Group
[root@lb-s1-01 ~]# pcs resource create HAProxy_Srv systemd:haproxy --group=LB-Cluster-Group
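
To double-check the definitions you can display each resource, for example:
[root@lb-s1-01 ~]# pcs resource show LB-VIP01
[root@lb-s1-01 ~]# pcs resource show HAProxy_Srv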

 

Set a start-order dependency between the two resources we just created
[root@lb-s1-01 ~]# pcs constraint order start LB-VIP01 then HAProxy_Srv
Adding LB-VIP01 HAProxy_Srv (kind: Mandatory) (Options: first-action=start then-action=start)
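
The ordering constraint can be listed with pcs constraint; the same command is also handy later to spot any leftover location constraints created by pcs resource move:
[root@lb-s1-01 ~]# pcs constraint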

 

Check the cluster status and its resources
[root@lb-s1-01 ~]# pcs status
Cluster name: lb-cluster01
Stack: corosync
Current DC: lb-s1-02 (version 1.1.15-11.el7_3.4-e174ec8) – partition with quorum
Last updated: Wed Apr 5 15:17:22 2017 Last change: Wed Apr 5 15:13:12 2017 by root via cibadmin on lb-s1-01

2 nodes and 2 resources configured

Online: [ lb-s1-01 lb-s1-02 ]

Full list of resources:

Resource Group: LB-Cluster-Group
LB-VIP01 (ocf::heartbeat:IPaddr2): Started lb-s1-01
HAProxy_Srv (systemd:haproxy): Started lb-s1-01

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled

 

Let's run a couple of switchover tests
[root@lb-s1-01 ~]# pcs resource move LB-Cluster-Group lb-s1-02

After a couple of seconds
[root@lb-s1-01 ~]# pcs resource
Resource Group: LB-Cluster-Group
LB-VIP01 (ocf::heartbeat:IPaddr2): Started lb-s1-02
HAProxy_Srv (systemd:haproxy): Started lb-s1-02

Run a clear on the resource group: a move leaves behind a location constraint with an INFINITY score, which would otherwise pin the group to the target node and cause problems on a later failover or move
[root@lb-s1-01 ~]# pcs resource clear LB-Cluster-Group

And roll back
[root@lb-s1-01 ~]# pcs resource move LB-Cluster-Group lb-s1-01

[root@lb-s1-01 ~]# pcs resource
Resource Group: LB-Cluster-Group
LB-VIP01 (ocf::heartbeat:IPaddr2): Started lb-s1-01
HAProxy_Srv (systemd:haproxy): Started lb-s1-01

Run the clear again to remove the leftover location constraint
[root@lb-s1-01 ~]# pcs resource clear LB-Cluster-Group

Check the VIP address on the active node
[root@lb-s1-01 ~]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:3e:f3:82 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.6/24 brd 192.168.10.255 scope global ens32
valid_lft forever preferred_lft forever
inet 192.168.10.8/24 brd 192.168.10.255 scope global secondary ens32
valid_lft forever preferred_lft forever
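
As a final check, the VIP should answer from another host on the same network, for example:
# ping -c 2 192.168.10.8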

 


 

Dual Site HA PSC – Intro
Load Balancer installation Site 1 and Site 2 – Part 1a
Load Balancer installation Site 1 and Site 2 – Part 1b
Installation of psc-s1-01 and psc-s1-02 – Part 2
Certificate creation for Site 1 and Site 2 – Part 3
Installation of the new PSC certificates on Site 1 – Part 4
Installation of psc-s2-01 and psc-s2-02 – Part 5
Installation of vCenter Site 1 and 2 and binding to the PSC VIPs of Site 1 and 2 – Part 6
PSC configuration in ring topology between the two sites – Part 7

Ben Kenobi
