February 21, 2010

Part 5 is the final part of the series and shows how to add an additional node to an existing Sybase cluster.


The instructions are designed so that a non-Sybase DBA can perform these tasks. Sybase DBAs will find that the steps are very similar to a standard ASE installation.

Please see the earlier parts of this series for how to set up and test the Sybase cluster.

Add an Additional Node to the Sybase Cluster

Preparation

Most of the work for adding a node can be done while the cluster is active, and the steps are simple and non-intrusive; only raising the maximum instance count and deploying the new node agent (steps 1 and 3 below) require the cluster to be shut down briefly.

Note: As a prerequisite, the steps used to set up the Linux environment on the original nodes must also be carried out on the new node before it is added to the Sybase ASE cluster. The new node should already be configured with all network adapters, all disks mounted with the right permissions, and so on. Since the /sybase filesystem is an NFS share, no extra steps are needed for it; simply follow the steps in the installation description. The only new raw device to add is the temp device for the tempdb on the new node. This is the new raw device raw9 and it must be bound on all nodes.

The new /etc/raw looks like this and has to be the same on all nodes (asece1, asece2, asece3):

sybase@asece3:~> cat /etc/raw
# /etc/raw
#
# sample configuration to bind raw devices
# to block devices
#
# The format of this file is:
# raw<N>:<blockdev>
#
# example:
# ---------
# raw1:hdb1
#
# this means: bind /dev/raw/raw1 to /dev/hdb1
#
# ...
raw1:sdb1
raw2:sdc1
raw3:sdd1
raw4:sde1
raw5:sdf1
raw6:sdg1
raw7:sdh1
raw8:sdi1
raw9:sdj1
sybase@asece3:~>

Please make sure that /dev/raw/raw9 is owned by the sybase user.
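If the raw9 binding has not taken effect yet (for example because the nodes have not been rebooted since /etc/raw was edited), it can be applied by hand. The following is only a minimal sketch, run as root on each node; the block device sdj1 is taken from the /etc/raw listing above, and the sybase group name is an assumption:

asece3:~ # raw /dev/raw/raw9 /dev/sdj1
asece3:~ # chown sybase:sybase /dev/raw/raw9
asece3:~ # raw -qa                  # list all current raw bindings
asece3:~ # ls -l /dev/raw/raw9      # confirm ownership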

The new /etc/hosts looks like this:
asece1:~ # cat /etc/hosts
#
# hosts This file describes a number of hostname-to-address
# mappings for the TCP/IP subsystem. It is mostly
# used at boot time, when no name servers are running.
# On small systems, this file can be used instead of a
# "named" name server.
# Syntax:
#
# IP-Address Full-Qualified-Hostname Short-Hostname
#
127.0.0.1 localhost
# special IPv6 addresses
::1 localhost ipv6-localhost ipv6-loopback
fe00::0 ipv6-localnet
ff00::0 ipv6-mcastprefix
ff02::1 ipv6-allnodes
ff02::2 ipv6-allrouters
ff02::3 ipv6-allhosts
# Public IP Addresses
192.168.1.210 asecenfs.localhost.org asecenfs
192.168.1.211 asece1.localhost.org asece1
192.168.1.212 asece2.localhost.org asece2
192.168.1.213 asece3.localhost.org asece3
# Primary Private Network
192.168.159.211 asece1-ppriv.localhost.org asece1-ppriv
192.168.159.212 asece2-ppriv.localhost.org asece2-ppriv
192.168.159.213 asece3-ppriv.localhost.org asece3-ppriv
# Secondary Private Network
192.168.207.211 asece1-spriv.localhost.org asece1-spriv
192.168.207.212 asece2-spriv.localhost.org asece2-spriv
192.168.207.213 asece3-spriv.localhost.org asece3-spriv
asece1:~ #
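Before going any further, it is worth checking from the new node that both private networks actually reach the existing nodes, for example:

sybase@asece3:~> ping -c 2 asece1-ppriv
sybase@asece3:~> ping -c 2 asece2-ppriv
sybase@asece3:~> ping -c 2 asece1-spriv
sybase@asece3:~> ping -c 2 asece2-spriv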

This is pretty much all the prep work that needs to be done to add a node to an existing Sybase ASE cluster.

Installation Steps

1. Increase the maximum instances parameter so that it allows for the additional instance
The cluster must be shut down to execute this command.
sybase@asece1:~> sybcluster -Uuafadmin -P -C mycluster -F "asece1,asece2"
> connect to mycluster
mycluster> shutdown cluster
Are you sure you want to shutdown the cluster? (Y or N): [ N ] y
INFO - Shutdown of cluster mycluster has completed successfully.
mycluster>set cluster maxInst 3
The value has been changed.
Would you like to recalculate the primary and secondary network ports? (Y or N): [ N ] Y
Enter the starting port number: [ 15100 ]
Recalculated port range: 1 Primary ase1 15100 to 15114
Recalculated port range: 2 Primary ase2 15115 to 15129
Recalculated port range: 1 Secondary ase1 15146 to 15160
Recalculated port range: 2 Secondary ase2 15161 to 15175
Should these port numbers be applied to the cluster? (Y or N): [ N ] Y
The values have been changed.

2. Start the unified agent on the new node
sybase@asece3>$SYBASE_UA/bin/uafstartup.sh &
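Before moving on, you may want to confirm that the agent on asece3 is actually listening on its port (9999 in this setup). A simple check could look like this:

sybase@asece3:~> netstat -an | grep 9999
(a line in LISTEN state for port 9999 means the unified agent is up)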
3. Deploy the cluster configuration to the new node.
sybase@asece1:~> sybcluster -Uuafadmin -P -C mycluster -F "asece1,asece2"
>deploy plugin agent "asece3"
Enter the name of the cluster: mycluster
Verifying the supplied agent specifications...
1) asece3.localhost.org 9999 2.5.0 Linux
Enter the number representing the cluster node : [ 1 ] 1
Enter the full path to the quorum disk: /dev/raw/raw2
Enter the SYBASE home directory: [ /sybase ]
Enter the environment shell script path: [ /sybase/SYBASE.sh ]
Enter the ASE home directory: [ /sybase/ASE-15_0 ]
Deploying the cluster management agent plugin...
Agent plugin deployed successfully.
>exit

4. Add the new node/instance to the cluster
Before adding an instance to the cluster, the tempdb device must exist. Create the mycluster3_tempdb device on /dev/raw/raw9 either through Sybase Central or isql. Once created, continue with the procedure.
SQL Command:
USE master
go
disk init name = 'mycluster3_tempdb', physname = '/dev/raw/raw9',
    vdevno = 7, size = 512000, cntrltype = 0, dsync = true, directio = false
go
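If you prefer isql over Sybase Central, the statement above can be run from the command line once the cluster is back up (see the restart below). A sketch, assuming the SQL above is saved to a file and that the interfaces file contains an entry named mycluster (alternatively connect to a single instance such as ase1); the file name is just an example:

sybase@asece1:~> isql -Usa -Smycluster -i create_mycluster3_tempdb.sql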

Note: After it has been added, the new instance is down by default and must be started manually:

mycluster> start instance ase3

The cluster itself was shut down in step 1, so it must be running again before the tempdb device can be created and the instance added. Restart it, this time including the new node's agent in the -F list:

sybase@asece1:~> sybcluster -Uuafadmin -P -C mycluster -F "asece1,asece2,asece3"
> connect to mycluster
mycluster>start cluster
.
.
.
.
INFO - 02:00:00000:00062:2008/12/23 08:37:01.75 kernel Sequence table svrid=2, tblcol=0, tblindex=1, count=0.
INFO - 02:00:00000:00062:2008/12/23 08:37:01.75 kernel instance 2 eventdone.
INFO - 02:00:00000:00002:2008/12/23 08:37:01.83 server ASE's default unicode sort order is 'binary'.
INFO - 02:00:00000:00002:2008/12/23 08:37:01.83 server ASE's default sort order is:
INFO - 02:00:00000:00002:2008/12/23 08:37:01.83 server 'bin_iso_1' (ID = 50)
INFO - 02:00:00000:00002:2008/12/23 08:37:01.83 server on top of default character set:
INFO - 02:00:00000:00002:2008/12/23 08:37:01.83 server 'iso_1' (ID = 1).

mycluster>add instance ase3
Verifying the supplied agent specifications...
1) asece1.localhost.org 9999 2.5.0 Linux
2) asece2.localhost.org 9999 2.5.0 Linux
3) asece3.localhost.org 9999 2.5.0 Linux
Enter the number representing the cluster node where ase3 will reside: [ 3 ] 3
--------------------------------------------------------
Instance Id 1; ase1 uses transport tcp on port 19786.
Instance Id 2; ase2 uses transport tcp on port 19786.
Enter the interface file query port number for instance ase3: 19786
Backup Server is configured for the cluster. Do you want to configure it for this instance too? [ Y ] Y
Enter the Backup Server port number for node "mycluster": 19799
Enter the primary protocol address for ase3: [ asece3.localhost.org ] asece3-ppriv
Enter the secondary protocol address for ase3: [ asece3.localhost.org ] asece3-spriv
--------------------------------------------------------
Calculating default ports. Please wait...
Currently defined protocols specifications by Instance
Instance Id 1; Name: ase1; Primary 15100 - 15114; Secondary 15146 - 15160
Instance Id 2; Name: ase2; Primary 15115 - 15129; Secondary 15161 - 15175
Enter the primary protocol starting port for instance ase3: [ 15130 ]
Enter the secondary protocol starting port for instance ase3: [ 15176 ]
--------------------------------------------------------
Device: master Used: 70 Mb Size 180 Mb
Path: /dev/raw/raw1
Device: tapedump1 Used: 0 Mb Size 0 Mb
Path: /dev/nst0
Device: tapedump2 Used: 0 Mb Size 625 Mb
Path: /dev/nst1
Device: sysprocsdev Used: 135 Mb Size 180 Mb
Path: /dev/raw/raw4
Device: systemdbdev Used: 12 Mb Size 80 Mb
Path: /dev/raw/raw5
Device: mycluster1_tempdb Used: 900 Mb Size 1,000 Mb
Path: /dev/raw/raw7
Device: mycluster2_tempdb Used: 900 Mb Size 1,000 Mb
Path: /dev/raw/raw8
Device: sybmgmtdev Used: 180 Mb Size 180 Mb
Path: /dev/raw/raw6
Device: data Used: 2,208 Mb Size 10,000 Mb
Path: /dev/raw/raw3
Device: mycluster3_tempdb Used: 0 Mb Size 1,000 Mb
Path: /dev/raw/raw9
--------------- Local System Temporary Database ---------
The Local System Temporary Database Device contains a database for each instance in the cluster.
Enter the LST device name: mycluster3_tempdb
Note: The device can be an existing one if it has enough free space on it.
Enter the LST database name: [ mycluster_tdb_3 ]
Enter the LST database size (MB): [ 1000 ] 900
--------------------------------------------------------
Would you like to save this configuration information in a file? [ Y ] Y
Enter the name of the file to save the cluster creation information: [ /sybase/mycluster_ase3.xml ]
--------------------------------------------------------
Add the instance now? [ Y ]
INFO - Creating the Cluster Agent plugin on node asece3.localhost.org using agent: asece3.localhost.org:9999
A cluster agent plugin at asece3.localhost.org:9999 is already managing a cluster by the name mycluster.
Should this agent plugin be reused for the cluster: [ N ] Y
INFO - The Cluster Agent Plugin on agent asece3.localhost.org:9999 will be reused.
Adding the new instance...
INFO - ase3: Creating the Local System Temporary database ase3_tempdb on ase3_tempdb of size 900M.
The addition of the instance ase3 has completed.

sybc> show cluster status
INFO - Listening for the cluster heartbeat. This may take a minute. Please wait... (mycluster::AseProbe:434)
Id Name Node State Heartbeat
-- ---- ------------------ ----- ---------
1 ase1 asece1.localhost.org Up Yes
2 ase2 asece2.localhost.org Up Yes
3 ase3 asece3.localhost.org Down No
-- ---- ------------------ ----- ---------

This is all it takes to add an additional node to an existing Sybase cluster. It is simple, straightforward, and can be implemented in minutes. The only downside is that the cluster has to be down in order to deploy a new node agent. If this could be done while the cluster is up and running, it would be as close to fully dynamic horizontal deployment as it gets.

During my tests I repeated these steps several times to see how easily and quickly one can recover from a failed node expansion, and it could not be easier. Simply shut down the agent on the new node, remove the files in

$SYBASE_UA/nodes/asece3/plugins/mycluster

and repeat the steps above. It worked every time without a hitch. Every DBA will appreciate the simplicity.
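For reference, the cleanup on the new node might look like this (uafshutdown.sh is assumed to be the shutdown counterpart of the uafstartup.sh script used in step 2):

sybase@asece3:~> $SYBASE_UA/bin/uafshutdown.sh
sybase@asece3:~> rm -rf $SYBASE_UA/nodes/asece3/plugins/mycluster
sybase@asece3:~> $SYBASE_UA/bin/uafstartup.sh &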

Conclusion:
I hope this five-part, step-by-step mini course provided you with all the information needed to create your very own Sybase ASE 15 CE cluster environment.

Thank you for your interest and please stay tuned for more step-by-step instructions on other technologies.