Displaying cluster info

  • scinstall -pv - Display release and version numbers
  • scrgadm -p - Display resource group info
  • scstat - Display cluster status
  • scconf -p - Display cluster configuration
  • sccheck - Check global mount points (nothing is returned if everything is OK)

Stopping and starting a cluster

  • scshutdown -g0 -y - shut down the entire cluster
  • scswitch -S -h anode - shut down one node in a cluster (the scswitch command switches all resources on anode to other nodes in the cluster, depending on the cluster configuration)
  • boot -x - boot a node in non-cluster mode, i.e. it will not join the cluster
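As a sketch, taking a single node out of the cluster for maintenance and bringing it back might look like the following (the node name node1 is illustrative):

```shell
# Evacuate all resources and resource groups from node1
scswitch -S -h node1

# Shut the node down cleanly (run on node1 itself)
shutdown -g0 -y -i0

# At the OpenBoot "ok" prompt, boot outside the cluster for maintenance work
ok boot -x

# When finished, a normal boot rejoins the cluster
ok boot
```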

Controlling global devices and cluster file systems

Update the global device name space if a new device is added (this can also be done with boot -r)

  • scgdevs

To register a VXVM disk group as a disk device group (execute this on the primary node)

  • scsetup → 3 → 1


  • scconf -a -D type=vxvm,name=dg1,nodelist=node1:node2

To register VXVM volume changes, e.g. adding or removing volumes (on the primary)

  • scsetup → 3 → 2


  • scconf -c -D name=dg1,sync

Unregistering and removing a disk device group from cluster control

  • scsetup → 3 → 3


  • scconf -r -D name=dg1

Changing the properties of a disk group, e.g. failback options

  • scsetup → 3 → 6


  • scswitch -z -D disk-device-group -h anode

Creating a global filesystem

  1. newfs /dev/vx/rdsk/dg1/avol
  2. mkdir -p /global/device-group/mountpoint (/global is a standard used to indicate it's a global filesystem)
  3. mount using the global option and ufs logging (unless SDS metatrans is in use)
  4. update /etc/vfstab including global and logging options

N.B. Do not nest mount points with global filesystems
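As an example, a /etc/vfstab entry for the hypothetical dg1/avol volume mounted globally with UFS logging might look like this (device paths and mount point are illustrative; fields are device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, options):

```
/dev/vx/dsk/dg1/avol  /dev/vx/rdsk/dg1/avol  /global/dg1/avol  ufs  2  yes  global,logging
```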

  • scsetup → 3 → 5 - Remove a node from a VXVM group

Disabling a resource in a resource group

  • scswitch -n -j resource-name - to disable the resource
  • scswitch -z -g resource-group -h nodename - brings the resource group online on that node, with the disabled resource remaining offline

Cluster quorum control

  • scconf -pv | grep -i quorum - check quorum status
  • scsetup → 1 - add or remove quorum devices (you need to know the device ID, DID, of the disk for most quorum operations; this can be found using scdidadm -L)

Removing the last quorum device:

  • scconf -c -q installmode (put cluster into install mode)
  • scconf -r -q globaldev=device (where device is the DID)

Put quorum device into maintenance state:

  • scconf -c -q globaldev=device,maintstate

Administering cluster interconnects

  • scstat -W - check interconnect status
  • scsetup → 3 - to edit/remove cluster interconnects

Administering cluster public networks

Sun Cluster 3.0 uses NAFO (Network Address FailOver) (Sun Cluster 3.1 uses IPMP)

Creating a NAFO group

  1. One (and only one) adapter in the NAFO group should have a /etc/hostname.adapter entry
  2. This should be mapped to an entry in /etc/hosts
  3. pnmset -c nafogroup -o create adapter1 adapter2
  4. pnmstat -l (to check)
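Putting the steps together, creating a two-adapter NAFO group might look like the following sketch (the group name nafo0, the adapters qfe0/qfe1, and the hostname physhost are illustrative):

```shell
# hostname.qfe0 holds the node's public hostname; qfe1 is the standby adapter
echo physhost > /etc/hostname.qfe0

# Create the group with qfe0 active and qfe1 as backup
pnmset -c nafo0 -o create qfe0 qfe1

# Verify the group status
pnmstat -l
```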

Deleting a NAFO group

  1. ensure no resources are using the group - scrgadm -pvv
  2. if they are, switch the resource group - scswitch -z -g resource-group -h node
  3. pnmset -c nafogroup -o delete
  4. pnmstat -l

Adding or removing an adapter

Use the -o add or -o remove option with the pnmset command

Switching over active adapter

  • pnmset -c nafogroup -o switch adapter (adapter becomes the active adapter)

Tuning NAFO

/etc/cluster/pnmparams contains the tunable values for NAFO. These are:

  • inactive_time - number of seconds between probes
  • ping_timeout - timeout value for pings
  • repeat_test - number of times to do ping test before failing adapter
  • slow_network - number of seconds between pings
  • warmup_time - number of seconds to wait after failover before fault monitoring resumes
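A sample /etc/cluster/pnmparams using these tunables might look like the following (the values shown are illustrative, not recommended defaults):

```
inactive_time=5
ping_timeout=4
repeat_test=3
slow_network=2
warmup_time=0
```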

Examples of managing resources

Adding a new application resource:
 scrgadm -a -t SUNW.gds -g rg-name -j resource-name \
 -x Start_command="/path/to/start/command/startcommand" \
 -x Stop_command="/path/to/stop/command/stopcommand" \
 -x Probe_command="/path/to/probe/command/probecommand" \
 -x Stop_signal=15 -x Failover_enabled=true -y Start_timeout=600 -y Stop_timeout=300 \
 -y Port_list="23/tcp" -y Probe_timeout=180
Changing an existing resource
  • scrgadm -c -j resource-name -x Failover_enabled=false
Recreating a logical hostname
  • scswitch -n -j resource-name; scrgadm -r -j resource-name
  • scrgadm -a -L -j resource-name -g resource-group -l hostname -n nafo-group
  • scswitch -Z -g resource-group

Setting up a storage resource (non-shared)

  1. Use scsetup to register the disk group
  2. Add details to /etc/vfstab
  3. Create the resource group: scrgadm -a -g resource-group -h nodename
  4. Register the resource type (if not already done) - scrgadm -a -t SUNW.HAStoragePlus
  5. Create the resource: scrgadm -a -g resource-group -j resource-name -t SUNW.HAStoragePlus -x FilesystemMountPoints=/mountpoint
  6. scswitch -Z -g resource-group

The filesystem should now be mounted on the defined node.
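The steps above can be sketched end-to-end; the resource group nfs-rg, resource nfs-stor, node name node1, and mount point /global/nfs are illustrative names:

```shell
# 1-2. Register the disk group via scsetup and add the mount to /etc/vfstab first

# 3. Create an empty resource group limited to one node
scrgadm -a -g nfs-rg -h node1

# 4. Register the HAStoragePlus resource type (once per cluster)
scrgadm -a -t SUNW.HAStoragePlus

# 5. Create the storage resource for the mount point
scrgadm -a -g nfs-rg -j nfs-stor -t SUNW.HAStoragePlus \
    -x FilesystemMountPoints=/global/nfs

# 6. Bring the group (and its resources) online
scswitch -Z -g nfs-rg
```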
