A. Overview

This guide provides a Xen-based test environment in which you can practice setting up a two-node cluster (cluster setup itself is not discussed here – I’m merely giving you what you need to set it up).

Xen can host two types of guest systems, para-virtualized and fully-virtualized:

  • for para-virtualized guests, you require the Red Hat Enterprise Linux 5 installation tree, available over NFS, FTP or HTTP.
  • for fully-virtualized guest installations, you will require DVD or CD-ROM distribution media or a bootable .iso file, plus a network-accessible installation tree.

For details, please refer to the RHEL5 Virtualization Manual.

I’ll be using para-virtualized guests in my setup. There will be three systems involved:

  • node00 – physical system
    virtual IPs: 192.168.222.1 (public1 vlan)
    192.168.100.1 (private1 vlan)
  • node01 – para-virtualized guest 1
    virtual IPs: 192.168.222.10 (public1 vlan)
    192.168.100.10 (private1 vlan)
  • node02 – para-virtualized guest 2
    virtual IPs: 192.168.222.20 (public1 vlan)
    192.168.100.20 (private1 vlan)

 

B. What I used

  • an HP Blade bl25p machine with 4 GB of RAM (this is actually an AMD64 blade machine). Any machine with a decent amount of RAM and processing speed should do.
  • the CentOS 5 update 1 i386 DVD ISO, downloaded from www.centos.org. The HTTP, NFS and FTP installation sources were created from this ISO, and the yum repository used by the host and guest systems will also be generated from it.
  • logical volumes hosting the guests and the “virtual LUNs” exported via iSCSI (you can also use disk partitions – please refer to the virtualization guide for details).

 

1. My LVM setup

The following is my LVM configuration. The lvLUN0* entries are the ones I used for iSCSI setup and will be shared by the two virtual guest systems.

lvs

  LV       VG          Attr   LSize   Origin Snap%  Move Log Copy%
  lvLUN01  Virtual00VG -wi-ao  50.00G
  lvLUN02  Virtual00VG -wi-ao  50.00G
  lvNODE01 Virtual00VG -wi-ao  30.00G
  lvNODE02 Virtual00VG -wi-ao  30.00G
  lvNODE03 Virtual00VG -wi-ao  15.00G
  lvsys00  vg00        -wi-ao 512.00M
  lvsys01  vg00        -wi-ao   8.00G
  lvsys02  vg00        -wi-ao   8.00G
  lvsys03  vg00        -wi-ao 512.00M
  lvsys04  vg00        -wi-ao 128.00M
  lvsys05  vg00        -wi-ao   1.00G
  lvsys06  vg00        -wi-ao 256.00M
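
For reference, the lvLUN* and lvNODE* volumes above can be created with lvcreate; a minimal sketch, assuming a volume group named Virtual00VG already exists:

lvcreate -L 50G -n lvLUN01 Virtual00VG   # iSCSI "LUN" shared by the guests
lvcreate -L 50G -n lvLUN02 Virtual00VG
lvcreate -L 30G -n lvNODE01 Virtual00VG  # root disk for guest node01
lvcreate -L 30G -n lvNODE02 Virtual00VG  # root disk for guest node02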

 

C. Host Preparation

I’m assuming that you know how to install CentOS or other RHEL-based distributions and that you are familiar with rpm installation. Since I do a lot of setup for test/dev environments at work, I already have an installation server, making it easy to do a network-based install via PXE. The kickstart file for node00 is provided below. You can do a local media install (you have the ISO, so you can burn it to a DVD) and just refer to the kickstart file for some of the configuration. The list of packages I used is in the %packages section of node00’s kickstart file. You can install them manually using yum, like:

# will list centos installation groups

yum grouplist

# will install Virtualization group

yum groupinstall Virtualization

 

1. ks file and installation

1.a kickstart file I use for the host (node00)

You’ll have to modify the following to suit your setup.

## START node00_ks.cfg
#modify for your own settings
install
nfs --server=remote_server --dir=/path/to/CENTOS5U1/i386
lang en_US.UTF-8
keyboard us
skipx
reboot
network --device eth2 --bootproto static --ip a.b.c.1 --netmask 255.255.255.0 --gateway a.b.c.2 --nameserver x.y.z.n --hostname node00.example.com
# grub and root password is a1s2d3f4g5
rootpw --iscrypted $1$3CXK2$CG9WlX2PuPpp7nxYMQGwP0
firewall --disabled
authconfig --enableshadow
selinux --disabled
timezone Asia/Singapore
bootloader --location=mbr --driveorder=cciss/c0d0 --append="rhgb quiet" --md5pass=$1$3CXK2$CG9WlX2PuPpp7nxYMQGwP0
clearpart --all --initlabel --drives=cciss/c0d0
part /boot --fstype ext3 --size=100 --ondisk=cciss/c0d0
part pv.100000 --size=100 --grow --ondisk=cciss/c0d0 --asprimary
volgroup vg00 --pesize=32768 pv.100000
logvol /tmp --fstype ext3 --name=lvsys05 --vgname=vg00 --size=1024
logvol /opt --fstype ext3 --name=lvsys04 --vgname=vg00 --size=128
logvol /var --fstype ext3 --name=lvsys03 --vgname=vg00 --size=512
logvol /usr --fstype ext3 --name=lvsys02 --vgname=vg00 --size=8192
logvol swap --fstype swap --name=lvsys01 --vgname=vg00 --size=8192
logvol /home --fstype ext3 --name=lvsys06 --vgname=vg00 --size=256
logvol / --fstype ext3 --name=lvsys00 --vgname=vg00 --size=512
%packages
@development-libs
@editors
@system-tools
@text-internet
@x-software-development
@virtualization
@dns-server
@core
@base
@ftp-server
@network-server
@legacy-software-development
@base-x
@web-server
@printing
@server-cfg
@sql-server
@admin-tools
@development-tools
lsscsi
createrepo
audit
net-snmp-utils
iptraf
tftp
lynx
mesa-libGLU-devel
kexec-tools
bridge-utils
device-mapper-multipath
vnc-server
xorg-x11-server-Xnest
xorg-x11-server-Xvfb
imake
openmotif
-vim-enhanced
-zisofs-tools
-zsh
-bluez-hcidump
-sysreport
## END of node00_ks.cfg

 

2. Host configuration

Setting up an HTTP, NFS and FTP installation server:

2.a web server

#/etc/httpd/conf.d/centos5u1.conf
Alias /centos5u1 /var/ftp/pub/centos5u1
<Location /centos5u1>
Options Indexes FollowSymLinks MultiViews
IndexOptions FancyIndexing
Order deny,allow
Deny from all
Allow from 127.0.0.1 ::1 all
</Location>

Then start the httpd service and make sure it starts during bootup:

service httpd start
chkconfig httpd on
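
Once the installation tree from section 2.d is in place, you can sanity-check that Apache is serving it (wget is assumed to be installed):

wget -qO- http://localhost/centos5u1/i386/ | head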

2.b NFS server

Edit /etc/exports and put the following into it:

# /etc/exports
/var/ftp/pub/centos5u1 192.168.*(ro)

service nfs start
chkconfig nfs on
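
To confirm the export is visible (the exported directory itself is created in section 2.d):

showmount -e localhost
exportfs -v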

2.c FTP server

Since we already have the source in /var/ftp/pub/centos5u1, all that is needed is to start vsftpd:

service vsftpd start
chkconfig vsftpd on
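
As with the web server, you can sanity-check anonymous FTP access once the tree from section 2.d exists:

wget -qO- ftp://localhost/pub/centos5u1/i386/ | head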

2.d YUM repository

For this setup, I only use a local yum repository built from the CentOS DVD ISO I downloaded. First, I loop-mount the ISO and copy its contents to /var/ftp/pub/centos5u1/i386/:

cd /var/ftp/pub/centos5u1/
mkdir temp
mount -o loop CentOS-5.1-i386-bin-DVD.iso temp
cp -pr temp i386
umount temp
createrepo -g repodata/comps.xml i386

(i386/repodata/ will then be updated.)

For RHEL5, it’s different:

createrepo -g repodata/comps-rhel5-server-core.xml Server

You need to do this inside the i386 directory (after loopback mounting and copying the whole directory structure).

2.d.1 the yum repo configuration:

I renamed the default repo files in /etc/yum.repos.d/ to *-repo (instead of *.repo) to disable them. I then created this file:

#/etc/yum.repos.d/CentOS5.repo
[centos5-Server]
name=CentOS5 Server
baseurl=http://node00/centos5u1/i386
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5

node00 needs to be defined in /etc/hosts for the above file to work. Or just replace node00 with its IP address.
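
A quick way to confirm the repository resolves is to clear the yum cache and ask for the group list again:

yum clean all
yum grouplist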

2.e VNC server

You won’t be needing a vnc connection if you have local console access to the physical machine. All you need to do is switch into gui mode:

telinit 5

and open a gui terminal (like gnome-terminal or KDE konsole). But since I do everything remotely, I use vncserver and vncviewer to do gui-based stuff.

2.e.1 run vncserver:

This will bring up a vncserver on node00 that is accessible via “vncviewer” at node00:1 (assuming node00 is resolvable from your vncviewer host).

vncserver

You will require a password to access your desktops.

	  Password:
	  Verify:
	  xauth:  creating new authority file /root/.Xauthority

	  New 'node00.example.com:1 (root)' desktop is node00.example.com:1

	  Creating default startup script /root/.vnc/xstartup
	  Starting applications specified in /root/.vnc/xstartup
	  Log file is /root/.vnc/node00.example.com
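
From your workstation, you can then connect with:

vncviewer node00:1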

D. Virtualization

1. virtual networks

As root, run

virt-manager

The Virtual Machine Manager window should appear. You'll see Domain-0 and the resources it is using.

1.a to create the virtual network:

  • On the menu, click on Edit and then “Host details”.
  • In the Host Details window, you will only see “default” on the left frame. Below, click on “Add”.
  • The “Create a new virtual network” window will appear, click forward.
  • Use “public1” (no quotes) and then hit forward.
  • Network should be “192.168.222.0/24” then hit forward.
  • DHCP range: Start: 192.168.222.128 end: 192.168.222.254 then hit forward.
  • This will be an “Isolated virtual network”. Hit forward.
  • Summary:
    Network Name: public1
    IPV4 network:
    Network: 192.168.222.0/24
    Gateway: 192.168.222.1
    Netmask: 255.255.255.0
    DHCP
    Start address: 192.168.222.128
    End address : 192.168.222.254
    Forwarding:
    Connectivity: Isolated virtual network
  • Hit Finish.

You’ll go back to the Host Details window and the public1 entry will appear. Now do the same steps for network private1 with the following settings:

Network Name: private1
IPV4 network:
Network: 192.168.100.0/24
Gateway: 192.168.100.1
Netmask: 255.255.255.0
DHCP
Start address: 192.168.100.128
End address : 192.168.100.254
Forwarding:
Connectivity: Isolated virtual network

When you are done, in the Host Details window, click on “File > Close” to go back to the Virtual Machine Manager window, then click on “File > Quit”. NOTE: Don’t leave the Virtual Machine Manager window running if you are not going to use it – it will eat up a lot of memory. If this happens, you need to restart Xen.
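
You can also confirm the networks from the command line at this point; a quick check, assuming your libvirt version provides virsh net-list (virsh comes with the Virtualization group installed earlier):

virsh net-list --all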

Once done, vnet0 and vnet1 can be seen when you run

ifconfig

   vnet0     Link encap:Ethernet  HWaddr 00:00:00:00:00:00
             inet addr:192.168.222.1  Bcast:192.168.222.255  Mask:255.255.255.0
             inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
             UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
             RX packets:0 errors:0 dropped:0 overruns:0 frame:0
             TX packets:28 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:0
             RX bytes:0 (0.0 b)  TX bytes:7782 (7.5 KiB)

   vnet1     Link encap:Ethernet  HWaddr 00:00:00:00:00:00
             inet addr:192.168.100.1  Bcast:192.168.100.255  Mask:255.255.255.0
             inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
             UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
             RX packets:0 errors:0 dropped:0 overruns:0 frame:0
             TX packets:27 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:0
             RX bytes:0 (0.0 b)  TX bytes:7712 (7.5 KiB)

 

2. kickstart files

I'm providing the kickstart file for node01, which you can also use for node02. You just need to modify the IP addresses and hostname entries.

2.a for node01

#START of node01_ks.cfg
install
text
reboot
#uncomment the line you want to use
# for nfs
nfs --server=192.168.222.1 --dir=/var/ftp/pub/centos5u1/i386
##url --url ftp://<username>:<password>@<server>/<dir>
# this will be an anonymous ftp access
#url --url ftp://192.168.222.1/pub/centos5u1/i386
#key --skip
lang en_US.UTF-8
keyboard us
skipx
# private
network --device eth1 --bootproto static --ip 192.168.100.10 --netmask 255.255.255.0 
# public - disabled on initial install
network --device eth0 --bootproto static --ip 192.168.222.10 --netmask 255.255.255.0 --gateway 192.168.222.1 --nameserver 192.168.222.1 --hostname node01.example.com
## password is n0de01pass
rootpw --iscrypted $1$Lqk1Y$Y8TIWCMLiiPjVt1GjRS0F1
## password is n0de02pass
#rootpw --iscrypted $1$Rn47b$DDwgrOv3IFGf3HVhsxv9X0
firewall --disabled
authconfig --enableshadow --enablemd5
selinux --disabled
timezone --utc Asia/Singapore
services --disabled ipsec,iptables,bluetooth,hplip,firstboot,cups,sendmail,xfs
bootloader --location=mbr --driveorder=xvda,xvdb --append="rhgb quiet"
clearpart --all --initlabel --drives=xvda
part /boot --fstype ext3 --size=100 --ondisk=xvda
part pv.2 --size=0 --grow --ondisk=xvda
volgroup VolGroup00 --pesize=32768 pv.2
logvol swap --fstype swap --name=LogVol01 --vgname=VolGroup00 --size=1000 --grow --maxsize=1984
logvol / --fstype ext3 --name=LogVol00 --vgname=VolGroup00 --size=1024 --grow
%packages
@development-libs
@system-tools
@gnome-software-development
@text-internet
@x-software-development
@dns-server
@core
@authoring-and-publishing
@base
@ftp-server
@network-server
@legacy-software-development
@java
@legacy-software-support
@smb-server
@base-x
@web-server
@printing
@server-cfg
@sql-server
@admin-tools
@development-tools
emacs
lsscsi
gnutls-utils
hwbrowser
audit
iptraf
mesa-libGLU-devel
kexec-tools
device-mapper-multipath
vnc-server
xorg-x11-utils
xorg-x11-server-Xnest
xorg-x11-server-Xvfb
imake
iscsi-initiator-utils
ypserv
-sysreport

%post
cat <<EOT >> /etc/hosts
# private  or replace with nodeXY-
192.168.100.10  node01-priv
192.168.100.20  node02-priv
192.168.100.1   node00-priv

#public or replace with nodeXY
192.168.222.10 node01
192.168.222.20 node02
192.168.222.1  node00
EOT

# yum local repo
mv /etc/yum.repos.d/*.repo /tmp
cat > /etc/yum.repos.d/centos5.repo << EOF
[centos5-Server]
name=CentOS5 Server
baseurl=http://node00/centos5u1/i386
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5
EOF

#change default runlevel
ed /etc/inittab << EOF
,s/id:5:initdefault:/id:3:initdefault:/g
w
q
EOF

# vncserver stuff
cat << 'EOT' > /opt/vnc_xstartup
#!/bin/sh

# run vncserver and copy to your $HOME/.vnc/xstartup file
# Uncomment the following two lines for normal desktop:
# unset SESSION_MANAGER
# exec /etc/X11/xinit/xinitrc

[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup

[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &
xterm -geometry 130x30+12+12 -ls -bg black -fg green -title "$VNCDESKTOP Desktop" &
mwm &
EOT
# END of node01_ks.cfg

2.b for node02

Copy the node01_ks.cfg file above to node02_ks.cfg and change the appropriate entries for node02 (hostname and IP addresses).
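
Since the two files differ only in a few fields, a sed one-liner can do most of the work (a sketch; it only rewrites the network lines, so the %post /etc/hosts block is left alone). Remember to also swap the active rootpw line for node02's, then place both files where the ks= URL used in section 3.b expects them:

sed -e '/^network/s/192\.168\.100\.10/192.168.100.20/' \
    -e '/^network/s/192\.168\.222\.10/192.168.222.20/' \
    -e '/^network/s/node01\.example\.com/node02.example.com/' \
    node01_ks.cfg > node02_ks.cfg
cp node01_ks.cfg node02_ks.cfg /var/ftp/pub/centos5u1/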

 

3. Installing the guest systems (node01 and node02)

For the installation, we’ll be invoking it from the CLI, using

virt-install

But first, generate the MAC addresses for the NICs of the virtual systems:

3.a MAC Address generation

We’ll use a python script provided by the Red Hat Virtualization Guide:

#!/usr/bin/python
# macgen.py script to generate a MAC address for Red Hat Virtualization guests
import random
#
def randomMAC():
    mac = [ 0x00, 0x16, 0x3e,
            random.randint(0x00, 0x7f),
            random.randint(0x00, 0xff),
            random.randint(0x00, 0xff) ]
    return ':'.join(map(lambda x: "%02x" % x, mac))
#
print randomMAC()
# careful with the indentation
# this is from the Virtualization guide from redhat.com
# http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Virtualization/index.html

node01 NICs:
# for eth0 - public1
[root@node00 ~]# ./macgen.py
00:16:3e:33:32:07
# for eth1 - private1
[root@node00 ~]# ./macgen.py
00:16:3e:55:6b:83

Then do the same for node02's virtual NICs.

3.b virt-install command for node01

virt-install -p -n node01 -r 768 -f /dev/Virtual00VG/lvNODE01 -m 00:16:3e:33:32:07 \
-w network:public1 -m 00:16:3e:55:6b:83 -w network:private1 \
-l nfs:192.168.222.1:/var/ftp/pub/centos5u1/i386 \
-x "ksdevice=eth0 ks=http://192.168.222.1/centos5u1/node01_ks.cfg" --vnc

Parameters:
-n node01 = name of the guest system
-r 768 = amount of RAM in MB
-f /dev/Virtual00VG/lvNODE01 = disk to be used by the guest system (can be an unused partition in your system, like /dev/sda3 or /dev/hda9)
-m 00:16:3e:33:32:07 = MAC address for eth0
-w network:public1 = eth0’s network
-m 00:16:3e:55:6b:83 = MAC address for eth1
-w network:private1 = eth1’s network
-l nfs:192.168.222.1:/var/ftp/pub/centos5u1/i386 = installation source used to boot the installer (not the one the kickstart file itself specifies)
-x "ksdevice=eth0 ks=http://192.168.222.1/centos5u1/node01_ks.cfg" = kickstart directives. The installer will bring eth0 up via DHCP to fetch the kickstart file and start the installation.
--vnc = will launch a gui window for you to view (if you are running virt-install from a vnc session or a gui terminal)

3.c virt-install command for node02

virt-install -p -n node02 -r 768 -f /dev/Virtual00VG/lvNODE02 -m 00:16:3e:1e:05:b6 \
-w network:public1 -m 00:16:3e:40:3d:b0 -w network:private1 \
-l nfs:192.168.222.1:/var/ftp/pub/centos5u1/i386 \
-x "ksdevice=eth0 ks=http://192.168.222.1/centos5u1/node02_ks.cfg" --vnc

I ran the above virt-install commands inside a vnc session on the physical host so that the guest installation screens automatically appear. You can also start them from the physical host’s console, in which case the installation of the guest systems will run in the background; their progress can be viewed by running virt-manager and then opening the guests.
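
You can also keep an eye on the domains from the CLI with the standard Xen tools:

xm list               # shows Domain-0, node01 and node02 with their states
xm console node01     # attach to a guest's text console; detach with Ctrl-]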

E. iSCSI

iSCSI is a Storage Area Network protocol that allows shared storage to be accessed over an existing network infrastructure. In my setup, I used iscsitarget from http://iscsitarget.sourceforge.net.

 

1. iSCSI server installation and configuration

1.a compiling the iscsi application tarball

This needs to be done on the physical host.

  • Get the tarball from SourceForge and put it in /usr/local/src.
  • cd to /usr/local/src:

    cd /usr/local/src

  • Then extract the files:

    tar xvf iscsitarget-0.4.16.tar.gz
    cd iscsitarget-0.4.16

  • Then build and install (see the note on build prerequisites below):

    make
    make install
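
iscsitarget builds a kernel module, so the compile needs the headers for the running Xen kernel plus a compiler and the OpenSSL headers. The package names below are my assumption for a CentOS 5 dom0; adjust them to match your kernel:

# assumed prerequisites for building IET on a CentOS 5 Xen dom0
yum -y install gcc kernel-xen-devel openssl-devel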

1.b configuration needed

This is my ietd.conf configuration defining the “LUNs” to be allocated to the guests from the physical host’s disks:

#/etc/ietd.conf
# NOTE: the config file has more entries than what I'm showing here,
# but I've commented out the original entries and made the following
Target iqn.2008-07.NODE00:LUN01.NODE00
   MaxConnections         2
   Lun 1 Path=/dev/Virtual00VG/lvLUN01,Type=fileio
   Alias LUN01
Target iqn.2008-07.NODE00:LUN02.NODE00
   MaxConnections         2
   Lun 2 Path=/dev/Virtual00VG/lvLUN02,Type=fileio
   Alias LUN02
# end of ietd.conf

In my physical host system, I have created two logical volumes, each 50 GB in size. You can also use files or disk partitions; just change the Path entries in the ietd.conf file.

1.c ACL

iscsitarget has /etc/initiators.allow and /etc/initiators.deny that work like hosts.allow and hosts.deny. In my setup, I will allow node01 and node02 to access the two LUNs defined in ietd.conf.

#/etc/initiators.allow
#this should correspond to the definition in your /etc/ietd.conf
iqn.2008-07.NODE00:LUN01.NODE00 192.168.100.10, 192.168.100.20
iqn.2008-07.NODE00:LUN02.NODE00 192.168.100.10, 192.168.100.20
# end of initiators.allow

  • Start the iscsi-target service:

    service iscsi-target start

  • and make sure it starts during bootup:

    chkconfig --add iscsi-target
    chkconfig iscsi-target on
    chkconfig --list iscsi-target

    iscsi-target 0:off 1:off 2:on 3:on 4:on 5:on 6:off

 

2. Client Side

The package iscsi-initiator-utils-6.2.0.865-0.8.el5 should already be installed (as it is included in the kickstart file above).

2.a configuration

  • Edit the file /etc/iscsi/initiatorname.iscsi. My version is as follows (note that this file normally carries a single InitiatorName identifying the client; in this setup, access to the LUNs is controlled by IP address in /etc/initiators.allow):

    #/etc/iscsi/initiatorname.iscsi
    InitiatorName=iqn.2008-07.NODE00:LUN01.NODE00
    InitiatorName=iqn.2008-07.NODE00:LUN02.NODE00
    # end of /etc/iscsi/initiatorname.iscsi
  • Run the iscsid service and try to discover the LUNs:

    service iscsid start

    Turning off network shutdown. Starting iSCSI daemon: [ OK ]

    iscsiadm -m discovery -t st -p node00

    192.168.222.1:3260,1 iqn.2008-07.NODE00:LUN01.NODE00
    192.168.222.1:3260,1 iqn.2008-07.NODE00:LUN02.NODE00

  • Then start the iscsi service. You’ll then see the LUN definitions created earlier:

    service iscsi start

    will then show the following:

    iscsid (pid 964 963) is running...
    Setting up iSCSI targets: Login session [iface: default, target: \
    iqn.2008-07.NODE00:LUN02.NODE00, portal: 192.168.222.1,3260]
    Login session [iface: default, target: iqn.2008-07.NODE00:LUN01.\
    NODE00, portal: 192.168.222.1,3260] [  OK  ]
  • Check system logs to see if the disks have been seen:

    dmesg

        scsi0 : iSCSI Initiator over TCP/IP
          Vendor: IET       Model: VIRTUAL-DISK      Rev: 0
          Type:   Direct-Access                      ANSI SCSI revision: 04
        scsi 0:0:0:2: Attached scsi generic sg0 type 0
        SCSI device sda: 104857600 512-byte hdwr sectors (53687 MB)
        sda: Write Protect is off
        sda: Mode Sense: 77 00 00 08
        SCSI device sda: drive cache: write through
        SCSI device sda: 104857600 512-byte hdwr sectors (53687 MB)
        sda: Write Protect is off
        sda: Mode Sense: 77 00 00 08
        SCSI device sda: drive cache: write through
         sda: unknown partition table
        sd 0:0:0:2: Attached scsi disk sda
        scsi1 : iSCSI Initiator over TCP/IP
          Vendor: IET       Model: VIRTUAL-DISK      Rev: 0
          Type:   Direct-Access                      ANSI SCSI revision: 04
        SCSI device sdb: 104857600 512-byte hdwr sectors (53687 MB)
        sdb: Write Protect is off
        sdb: Mode Sense: 77 00 00 08
        SCSI device sdb: drive cache: write through
        SCSI device sdb: 104857600 512-byte hdwr sectors (53687 MB)
        sdb: Write Protect is off
        sdb: Mode Sense: 77 00 00 08
        SCSI device sdb: drive cache: write through
         sdb: unknown partition table
        sd 1:0:0:1: Attached scsi disk sdb
        sd 1:0:0:1: Attached scsi generic sg1 type 0
    I now have sda and sdb, each 53687 MB in size (results for your setup may differ).

  • Running fdisk:

    fdisk -l
            Disk /dev/xvda: 32.2 GB, 32212254720 bytes
            255 heads, 63 sectors/track, 3916 cylinders
            Units = cylinders of 16065 * 512 = 8225280 bytes
    
                Device Boot      Start         End      Blocks   Id  System
            /dev/xvda1   *           1          13      104391   83  Linux
            /dev/xvda2              14        3916    31350847+  8e  Linux LVM
    
            Disk /dev/sda: 53.6 GB, 53687091200 bytes
            64 heads, 32 sectors/track, 51200 cylinders
            Units = cylinders of 2048 * 512 = 1048576 bytes
    
            Disk /dev/sda doesn't contain a valid partition table
    
            Disk /dev/sdb: 53.6 GB, 53687091200 bytes
            64 heads, 32 sectors/track, 51200 cylinders
            Units = cylinders of 2048 * 512 = 1048576 bytes
    
            Disk /dev/sdb doesn't contain a valid partition table
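
  • Finally, make sure the initiator services come back after a reboot (standard RHEL5 chkconfig usage):

    chkconfig iscsid on
    chkconfig iscsi on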

Now do the same for node02. Once the disks are seen by both guests, you can then start setting up a two-node cluster. I’ve used this configuration to test a two-node Oracle 10gR2 RAC setup with shared ASM storage and OCFS2 on a 64-bit system.
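
Before installing any cluster software, it is worth proving that the two guests really share the same disk. A simple (destructive!) test, assuming /dev/sda is still empty as shown above:

# on node01: create a partition on the shared LUN
fdisk /dev/sda        # n, p, 1, accept the defaults, then w to write
# on node02: re-read the partition table and compare
partprobe /dev/sda
fdisk -l /dev/sda     # the partition created on node01 should appear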

 

F. Conclusion

This kind of setup will help you learn the basics of clustering without needing to acquire additional hardware. In no way should this setup be used in a “live” environment. Once you have familiarized yourself with how a cluster is prepared, you can apply the same concepts when building the real, physical setups your organization needs. I hope you’ll find this useful.

 

G. Further Readings

  • Red Hat Enterprise Linux 5 Virtualization Guide – http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Virtualization/index.html
  • iSCSI Enterprise Target – http://iscsitarget.sourceforge.net
  • CentOS – www.centos.org