Red Hat Enterprise Linux 6 Cluster Administration: Configuring and Managing the High Availability Add-On
Introduction
This document provides information about installing, configuring, and managing Red Hat High Availability Add-On components. Red Hat High Availability
min_score The minimum score for a node to be considered "alive". If omitted or set to 0, the default function, floor((n+1)/2), is used,
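The default can be reproduced with integer arithmetic; the helper below is an illustrative sketch (not part of the qdiskd tooling), assuming n is the total score of the configured heuristics.

```shell
# Hypothetical helper: compute the default qdiskd min_score,
# floor((n+1)/2), where n is the total heuristic score.
default_min_score() {
    n=$1
    # shell integer division truncates, which floors for non-negative n
    echo $(( (n + 1) / 2 ))
}

default_min_score 3    # prints 2
```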
Note
Syncing and activating propagates and activates the updated cluster configuration file. However, for the quorum disk to operate, you must restart the cluster.
If you do not specify a multicast address in the cluster configuration file, the Red Hat High Availability Add-On software creates one based on the cluster
Note that this command resets all other properties that you can set with the --setcman option to their default values, as described in Section 5.1.5, “
As of Red Hat Enterprise Linux 6.4, the Red Hat High Availability Add-On supports the configuration of redundant ring protocol. When using redundant ring
To verify that all of the nodes specified in the host's cluster configuration file have the identical cluster configuration file, execute the following
Chapter 6. Managing Red Hat High Availability Add-On With ccs
This chapter describes various administrative tasks for managing the Red Hat High Availability
You can use the ccs command to stop a cluster by using the following command to stop cluster services on all nodes in the cluster:
ccs -h host --stopall
Chapter 7. Configuring Red Hat High Availability Manually
This chapter describes how to configure Red Hat High Availability Add-On software by directly
Important
Certain procedures in this chapter call for using the cman_tool version -r command to propagate a cluster configuration throughout a cluster
Used to highlight system input, including shell commands, file names and paths. Also used to highlight keys and key combinations. For example:
To see the
2. (Optional) If you are configuring a two-node cluster, you can add the following line to the configuration file to allow a single node to maintain quorum
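In cluster.conf, such a two-node configuration is expressed with the two_node and expected_votes attributes on the cman element; a minimal sketch, not a complete configuration:

```
<cman two_node="1" expected_votes="1"/>
```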
[root@example-01 ~]# service cman start
Starting cluster: Checking Network Manager... [ OK ] Global setup...
    </fencedevices>
    <rm>
    </rm>
</cluster>
Example 7.2. cluster.conf Sample: Basic Two-Node Configuration
<cluster
The advantage of using the optimized consensus timeout for two-node clusters is that overall failover time is reduced for the two-node case, since consensus
5. Save /etc/cluster/cluster.conf.
6. (Optional) Validate the updated file against the cluster schema (cluster.rng) by running the ccs_config_validate command.
Example 7.7, “cluster.conf: Fencing Nodes with Dual Power Supplies”
Note
The examples in this section are not exhaustive; that is, there may be other
<fence> <method name="APC"> <device name="apc" port="2"/>
<device name="sanswitch1" port="12"/> </method> </fence> <unfence>
</fence> <unfence> <device name="sanswitch1" port="12" action="on"/>
<clusternode name="node-03.example.com" nodeid="3">
    <fence>
        <method name="APC-dual">
The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home.
Note
The failback characteristic is applicable only if ordered failover is configured.
Note
Changing a failover domain configuration has no effect on currently running services.
Note
The number of failoverdomainnode attributes depends on the number of nodes in the failover domain. The skeleton failoverdomain section
<fence> <method name="APC"> <device name="apc" port="3"/>
7.5.1. Adding Cluster Resources
You can configure two types of resources:
Global — Resources that are available to any service in the cluster. These are
7. Run the cman_tool version -r command to propagate the configuration to the rest of the cluster nodes.
8. Verify that the updated configuration
<apache config_file="conf/httpd.conf" name="example_server" server_root="/etc/httpd" shutdown_wait="0"
<ip address="127.143.131.100" monitor_link="yes" sleeptime="10"/> <apache config
Example 7.10. cluster.conf with Services Added: One Using Global Resources and One Using Service-Specific Resources
<cluster
<ip ref="127.143.131.100"/> <apache ref="example_server"/> </service> <servi
The following example specifies clusternet-node1-eth2 as the alternate name for cluster node clusternet-node1-eth1.
<cluster name="myclus
Finally, we use three visual styles to draw attention to information that might otherwise be overlooked.
Note
Notes are tips, shortcuts or alternative approaches
<logging> <!-- turning on per-subsystem debug logging --> <logging_daemon name="corosync" debug="on"
<service ... > <fs name="myfs" ... > <nfsserver name="server"> <nfsclient ref=&
Starting fenced... [ OK ] Starting dlm_controld... [ OK ] Starting gfs_
7. If the cluster is running as expected, you are done with creating a configuration file. You can manage the cluster with command-line tools described
Chapter 8. Managing Red Hat High Availability Add-On With Command Line Tools
This chapter describes various administrative tasks for managing Red Hat High
You can start or stop cluster software on a node according to Section 8.1.1, “Starting Cluster Software” and Section 8.1.2, “Stopping Cluster Software”
5. service cman stop
For example:
[root@example-01 ~]# service rgmanager stop
Stopping Cluster Service Manager: [ OK ]
[root@example-01 ~]#
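The stop sequence here is the reverse of the start sequence shown earlier (cman, then clvmd, gfs2, and rgmanager). The snippet below only encodes that ordering as a sanity check; it is an illustration, not a Red Hat tool, and the clvmd/gfs2 steps apply only when those components are in use.

```shell
# Stop order per this chapter; starting a node reverses it.
STOP_ORDER="rgmanager gfs2 clvmd cman"

start_order() {
    out=""
    for svc in $STOP_ORDER; do
        out="$svc $out"    # prepend to reverse the list
    done
    echo $out              # unquoted: collapses the trailing space
}

echo "stop:  $STOP_ORDER"
echo "start: $(start_order)"
```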
1. At any node, use the clusvcadm utility to relocate, migrate, or stop each HA service running on the node that is being deleted from the cluster.
9. If the node count of the cluster has transitioned from greater than two nodes to two nodes, you must restart the cluster software as follows:
a. At
Starting fenced... [ OK ] Starting dlm_controld... [ OK ] Starting gfs_
Chapter 1. Red Hat High Availability Add-On Configuration and Management Overview
Red Hat High Availability Add-On allows you to connect a group of computers
service:example_apache node-01.example.com started
service:example_apache2 (none) disabled
8.
Unmounting GFS2 filesystem (/mnt/gfsB): [ OK ]
[root@example-01 ~]# service clvmd stop
Signaling clvmd to exit
OK ]Activating VG(s): 2 logical volume(s) in volume group "vg_example" now active
node-01.example.com 1 Online, Local, rgmanager
Service Name Owner (Last) State
<fence> <method name="APC"> <device name="apc" port="1"/>
fstype="ext3"/> <ip address="127.143.131.101" monitor_link="yes" sleeptime="10"/>
<fs ref="web_fs"/> <ip ref="127.143.131.100"/> <apache ref="example_server"
Failed The service is presumed dead. A service is placed into this state whenever a resource's stop operation fails. After a service is placed into
Service Operation Description Command Syntax
Enable Start the service, optionally on a preferred target and optionally according to failover
Migrate Migrate a virtual machine to another node. You must specify a target node. Depending on the failure, a failure to migrate may result with the virtual machine
You can now configure an independent subtree as non-critical, indicating that if the resource fails then only that resource is disabled. For information
8.4.1. Updating a Configuration Using cman_tool version -r
To update the configuration using the cman_tool version -r command, perform the
8. At any node, using the clustat utility, verify that the HA services are running as expected. In addition, clustat displays status of the cluster
8. You may skip this step (restarting cluster software) if you have made only the following configuration changes:
Deleting a node from the cluster configuration
rgmanager Service Name Owner (Last) State ------- ---- ----- ------ -
Chapter 9. Diagnosing and Correcting Problems in a Cluster
Cluster problems, by their nature, can be difficult to troubleshoot. This is due to the increased
Changing the central_processing mode for rgmanager. For this change to take effect, a global restart of rgmanager is required.
Changing the multicast
As of Red Hat Enterprise Linux 6.1, you can use the following command to verify that all of the nodes specified in the host's cluster configuration
By default, the /etc/init.d/functions script blocks core files from daemons called by /etc/init.d/rgmanager. For the daemon to create application
9.5. Cluster Services Hang
When the cluster services attempt to fence a node, the cluster services stop until the fence operation has successfully completed
Ensure that the resources required to run a given service are present on all nodes in the cluster that may be required to run that service. For example
This document includes a new appendix, Appendix D, Cluster Service Resource Check and Failover Timeout. This appendix describes how rgmanager monitors
The root cause of fences is always a node losing token, meaning that it lost communication withthe rest of the cluster and stopped returning heartbeat
Chapter 10. SNMP Configuration with the Red Hat High Availability Add-On
As of the Red Hat Enterprise Linux 6.1 release, the Red Hat High Availability
# chkconfig foghorn on
# service foghorn start
6. Execute the following command to configure your system so that the COROSYNC-MIB generates
fenceNodeName - name of the fenced node
fenceNodeID - node id of the fenced node
fenceResult - the result of the fence operation (0 for success, -
corosyncObjectsAppName - application name
corosyncObjectsAppStatus - new state of the application (connected or disconnected)
Chapter 11. Clustered Samba Configuration
As of the Red Hat Enterprise Linux 6.2 release, the Red Hat High Availability Add-On provides support for running
Before creating the GFS2 file systems, first create an LVM logical volume for each of the file systems. For information on creating LVM logical volumes
In this example, the /dev/csmb_vg/csmb_lv file system will be mounted at /mnt/gfs2 on all nodes. This mount point must match the value that you specify
CTDB_NODES=/etc/ctdb/nodes
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_RECOVERY_LOCK="/mnt/ctdb/.ctdb.lock"
CTDB_MANAGES_SAMBA=yes
CTD
and 1 fail does this public address become unavailable to clients. All other public addresses can only be served by a single node respectively and will
1.1.4. New and Changed Features for Red Hat Enterprise Linux 6.4
Red Hat Enterprise Linux 6.4 includes the following documentation and feature updates
[global]
    guest ok = yes
    clustering = yes
    netbios name = csmb-server
[csmb]
    comment = Clustered Samba
    public = yes
    path = /mnt/gfs2/share
    writeable = yes
When you see that all nodes are "OK", it is safe to move on to use the clustered Samba server, as described in Section 11.7, “Using the Clustered Samba Server”
Fence Device Parameters
This appendix provides tables with parameter descriptions of fence devices. You can configure the parameters with luci, by using
Egenera BladeFrame fence_egenera Table A.9, “EgeneraBladeFrame”
ePowerSwitch fence_eps Table A.10, “ePowerSwitch”
Fence kdump fence_kdump Table A.11, “
RHEV-M REST API fence_rhevm Table A.23, “RHEV-M REST API (RHEL 6.2 and later against RHEV 3.0 and later)”
SCSI Fencing fence_scsi Table A.24, “SCSI Reser
Path to SSH Identity File identity_file The identity file for SSH.
luci Field cluster.conf Attribute Description
Table A.3, “APC Power Switc
Port (Outlet) Number port The port.
Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0.
luci Field cluster.conf
Unfencing unfence section of the cluster configuration file When enabled, this ensures that a fenced node is not re-enabled until the node has been rebooted
Power Timeout (seconds) power_timeout Number of seconds to wait before testing for a status change after issuing a power off or power on command. The default
Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0.
luci Field cluster.conf Attribute Description
1.1.6. New and Changed Features for Red Hat Enterprise Linux 6.6
Red Hat Enterprise Linux 6.6 includes the following documentation and feature updates
Table A.8. Eaton Network Power Controller (SNMP Interface) (Red Hat Enterprise Linux 6.4 and later)
luci Field cluster.c
Table A.9. Egenera BladeFrame
luci Field cluster.conf Attribute Description
Name name A name for the Egenera BladeFrame device connected
Table A.11, “Fence kdump” lists the fence device parameters used by fence_kdump, the fence agent for the kdump crash recovery service. Note that fence_kdump
IP Address or Hostname ipaddr The IP address or hostname assigned to the device.
Login login The login name used to access the device.
Password passwd The password
Missing port returns OFF instead of failure missing_as_off Missing port returns OFF instead of failure.
Power Wait (seconds) power_wait Number of seconds
Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5.
Times to Retry Power On Opera
luci Field cluster.conf Attribute Description
Name name A name for the IBM BladeCenter device connected to the cluster.
IP Address or Hostname i
SNMP Community community The SNMP community string.
SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv).
SNMP Au
SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1.
SNMP Community community The SNMP community string; the default va
SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1.
SNMP Community community The SNMP community string.
SNMP Security Le
Fencing device — A fencing device is required. A network power switch is recommended to perform fencing in an enterprise-level cluster. For information
SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1.
SNMP Community community The SNMP community string; the default va
Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter
Port (Outlet) Number port Physical plug number or name of virtual machine.
Delay (optional) delay The number of seconds to wait before fencing is started
Table A.25, “VMware Fencing (SOAP Interface) (Red Hat Enterprise Linux 6.2 and later)” lists the fence device parameters used by fence_vmware_soap, the
Force command prompt cmd_prompt The command prompt to use. The default value is ['RSM>', '>MPC', 'IPS>', 'TPS>', 'NBB>', 'NPS>', 'VMR
HA Resource Parameters
This appendix provides descriptions of HA resource parameters. You can configure the parameters with luci, by using the ccs command
SAP Instance SAPInstance Table B.21, “SAP Instance (SAP InstanceResource)”Samba Server samba.sh Table B.22, “ Samba Server (samba Resource)”Script scr
luci Field cluster.conf Attribute Description
Name name Specifies a name for the file system resource.
Filesystem Type fstype If not specified,
Device, FS Label, or UUID device The device file associated with the file system resource.
Filesystem Type fstype Set to GFS2 on luci
Mount Options options
Number of Seconds to Sleep After Removing an IP Address sleeptime Specifies the amount of time (in seconds) to sleep.
luci Field cluster.conf Attribute
Note that installing only the rgmanager package will pull in all necessary dependencies to create an HA cluster from the HighAvailability channel. The lvm2-
Use Simplified Database Backend named_sdb If enabled, specifies to use the Simplified Database Backend.
Other Command-Line Options named_options Additional
luci Field cluster.conf Attribute Description
Name name This is a symbolic name of a client used to reference it in the resource tree. This is
Name name Descriptive name of the NFS server resource. The NFS serverresource is useful for exporting NFSv4 file systems to clients.Because of the way
Oracle Installation Type type The Oracle installation type.
Default: 10g
base: Database Instance and Listener only
base-11g: Oracle 11g Database Instance
Oracle Application Home Directory home This is the Oracle (application, not user) home directory. It is configured when you install Oracle.
TNS_ADMIN (optional)
File Name of the JDBC Driver DB_JARS File name of the JDBC driver.
Path to a Pre-Start Script PRE_START_USEREXIT Path to a pre-start script.
Path to a
Note
Regarding Table B.22, “Samba Server (samba Resource)”, when creating or editing a cluster service, connect a Samba-service resource directly to the
SYBASE_OCS Directory Name sybase_ocs The directory name under sybase_home where OCS products are installed. For example, ASE-15_0.
Sybase User sybase_user
Automatically Start This Service autostart If enabled, this virtual machine is started automatically after the cluster forms a quorum. If this parameter is
Status Program status_program Status program to run in addition to the standard check for the presence of a virtual machine. If specified, the status program
Note
system-config-cluster is not available in Red Hat Enterprise Linux 6.
HA Resource Behavior
This appendix describes common behavior of HA resources. It is meant to provide ancillary information that may be helpful in configuring
A cluster service is an integrated entity that runs under the control of rgmanager. All resources in a service run on the same node. From the perspective
/etc/cluster/cluster.conf. In addition, non-typed child resources are started after all typed child resources have started and are stopped before any
Ordering within a resource type is preserved as it exists in the cluster configuration file, /etc/cluster/cluster.conf. For example, consider the s
3. fs:1 — This is a File System resource. If there were other File System resources in Service foo, they would stop in the reverse order listed in the
4. ip:10.1.1.1 — This is an IP Address resource. If there were other IP Address resources in Service foo, they would start in the order listed in the
C.3. Inheritance, the <resources> Block, and Reusing Resources
Some resources benefit by inheriting values from a parent resource; that is commonly
The service would need four nfsclient resources — one per file system (a total of two for file systems), and one per target machine (a total of two for
In some circumstances, if a component of a service fails you may want to disable only that component without disabling the entire service, to avoid affecting
Display the start and stop ordering of a service.
Display start order:
rg_test noop /etc/cluster/cluster.conf start service servicename
Display stop order:
Chapter 2. Before Configuring the Red Hat High Availability Add-OnThis chapter describes tasks to perform and considerations to make before installing
Cluster Service Resource Check and Failover Timeout
This appendix describes how rgmanager monitors the status of cluster resources, and how to modify the
on each resource in a service individually by adding __enforce_timeouts="1" to the reference in the cluster.conf file.
The following example
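For instance, the reference inside a service might carry the attribute like this (the resource name is hypothetical):

```
<fs ref="myfs" __enforce_timeouts="1"/>
```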
Command Line Tools Summary
Table E.1, “Command Line Tool Summary” summarizes preferred command-line tools for configuring and managing the High Availability
High Availability LVM (HA-LVM)
The Red Hat High Availability Add-On provides support for high availability LVM volumes (HA-LVM) in a failover configuration
described in Section F.2, “Configuring HA-LVM Failover with Tagging”.
F.1. Configuring HA-LVM Failover with CLVM (preferred)
To set up HA-LVM failover
    <lvm ref="lvm"/>
    <fs ref="FS"/>
</service>
</rm>
F.2. Configuring HA-LVM Failover with Tagging
Note
If there are multiple logical volumes in the volume group, then the logical volume name (lv_name) in the lvm resource should be left blank or unspecified
Revision History
Revision 7.0-13 Wed Oct 8 2014 Steven Levine
Version for 6.6 GA release
Revision 7.0-12 Fri Sep 26 2014 Steven Levine
Re
Version for 6.5 GA release
Revision 6.0-20 Wed Nov 6 2013 Steven Levine
Resolves: #986462
Updates oracledb resource table.
Revision 6.0-16 T
Resolves: 894097
Removes advice to ensure you are not using VLAN tagging.
Resolves: 845365
Indicates that bonding modes 0 and 2 are now supported.
Revision
Only single site clusters are fully supported at this time. Clusters spread across multiplephysical locations are not formally supported. For more det
Revision 5.0-12 Thu Nov 1 2012 Steven Levine
Added newly-supported fence agents.
Revision 5.0-7 Thu Oct 25 2012 Steven Levine
Added
Resolves: 771447, 800069, 800061
Updates documentation of luci to be consistent with Red Hat Enterprise Linux 6.3 version.
Resolves: 712393
Adds information
Initial revision for Red Hat Enterprise Linux 6.2 Beta release
Resolves: #739613
Documents support for new ccs options to display available fence devices
Initial revision for Red Hat Enterprise Linux 6.1
Resolves: #671250
Documents support for SNMP traps.
Resolves: #659753
Documents ccs command.
Resolves: #6
- configuring, Configuring ACPI For Use with Integrated Fence Devices
APC power switch over SNMP fence device, Fence Device Parameters
APC p
- restarting a cluster, Starting, Stopping, Restarting, and Deleting Clusters
- ricci considerations, Considerations for ricci
- SELinux, Red Hat High A
Egenera BladeFrame fence device, Fence Device Parameters
ePowerSwitch fence device, Fence Device Parameters
F
failover timeout,
- HP iLO MP, Fence Device Parameters
- HP iLO2, Fence Device Parameters
- HP iLO3, Fence Device Parameters
- HP iLO4, Fence Device Parameters
- IBM BladeC
fence_wti fence agent, Fence Device Parameters
Fujitsu Siemens Remoteview Service Board (RSB) fence device, Fence Device Parameters
- considerations for using with network switches and multicast addresses, Multicast Addresses
multicast traffic, enabling, Configuring t
The High Availability Add-On supports both IPv4 and IPv6 Internet Protocols. Support ofIPv6 in the High Availability Add-On is new for Red Hat Enterpr
totem tag
- consensus value, The consensus Value for totem in a Two-Node Cluster
troubleshooting
- diagnosing and correcting problems in a cluster
Table 2.2. Enabled IP Port on a Computer That Runs luci
IP Port Number Protocol Component
8084 TCP luci (Conga user
After executing these commands, run the following command to save the current configuration for the changes to be persistent during reboot.
$ service iptables save
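As a sketch, the commands referred to above might look like the following, assuming the cman ports (5404, 5405/UDP) and the ricci port (11111/TCP) discussed in this guide; verify the ports against your own configuration before saving:

```
iptables -I INPUT -m state --state NEW -p udp -m multiport --dports 5404,5405 -j ACCEPT
iptables -I INPUT -m state --state NEW -p tcp --dport 11111 -j ACCEPT
service iptables save
```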
For more complete information on the parameters you can configure with the /etc/sysconfig/luci file, refer to the documentation within the file itself
Important
This method completely disables ACPI; some computers do not boot correctly if ACPI is completely disabled. Use this method only if the other methods
Note
Disabling ACPI Soft-Off with the BIOS may not be possible with some computers.
You can disable ACPI Soft-Off by configuring the BIOS of each cluster node
| x KB Power ON Password Enter | || x Hot Key Power ON Ctrl-F1 | ||
# initrd /initrd-[generic-]version.img#boot=/dev/hdadefault=0timeout=5serial --unit=0 --speed=115200terminal --timeout=5 serial consoletitle
Figure 2.1. Web Server Cluster Service Example
Clients access the HA service through the IP address 10.10.10.201, enabling interaction with the web
specialized services to clients. An HA service is represented as a resource tree in the cluster configuration file, /etc/cluster/cluster.conf (in each
Invalid XML — Example 2.4, “cluster.conf Sample Configuration: Invalid XML”
Invalid option — Example 2.5, “cluster.conf Sample Configuration: In
<rm> </rm><cluster> <----------------INVALID
In this example, the last line of the configuration (annotated as "
<fence> </fence> </clusternode> </clusternodes> <fencedevices> </fencedevices> <rm
Important
Overall, heuristics and other qdiskd parameters for your deployment depend on the site environment and special requirements needed. To understand
Note
Using JBOD as a quorum disk is not recommended. A JBOD cannot provide dependable performance and therefore may not allow a node to write to it quickly
You can configure the Red Hat High Availability Add-On to use UDP unicast by setting the cman transport="udpu" parameter in the cluster.conf
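In cluster.conf, that setting sits on the cman element; a minimal sketch:

```
<cman transport="udpu"/>
```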
Legal Notice
Copyright © 2014 Red Hat, Inc. and others.
This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0
Chapter 3. Configuring Red Hat High Availability Add-On WithCongaThis chapter describes how to configure Red Hat High Availability Add-On software usi
7. Creating resources. Refer to Section 3.9, “Configuring Global Cluster Resources” .8. Creating cluster services. Refer to Section 3.10, “ Adding a
https://luci_server_hostname:luci_server_port. The default value of luci_server_port is 8084.
The first time you access luci, a web browser specific p
luci for the first time.
As of Red Hat Enterprise Linux 6.4, the root user or a user who has been granted luci administrator permissions can also use
Can Stop, Start, and Reboot Cluster Nodes
Allows the user to manage the individual nodes of a cluster, as described in Section 4.3, “Managing
Figure 3.3. luci cluster creation dialog box
3. Enter the following parameters on the Create New Cluster dialog box, as necessary:
Att
Note
Whether you select the Use locally installed packages or the Download Packages option, if any of the base cluster components are missing
information on deleting a node from an existing cluster that is currently in operation, see Section 4.3.4, “Deleting a Member from a Cluster”.
Warning
Note
For more information about Post Join Delay and Post Fail Delay, refer to the fenced(8) man page.
3.5.3. Network Configuration
Clicking on the
As of the Red Hat Enterprise Linux 6.2 release, the nodes in a cluster can communicate with each other using the UDP Unicast transport mechanism. It is
Specify Physical Device: By Device Label
Specifies the quorum disk label created by the mkqdisk utility. If this field is used, the quorum daemon r
Configuration page. After selecting the daemon, you can check whether to log the debugging messages for that particular daemon. You can also specify
Figure 3.5. luci fence devices configuration page
3.6.1. Creating a Fence Device
To create a fence device, follow these steps:
1. From
Note
Fence devices that are in use cannot be deleted. To delete a fence device that a node is currently using, first update the node fence configuration
4. Enter a Method Name for the fencing method that you are configuring for this node. This is an arbitrary name that will be used by Red Hat High
You can continue to add fencing methods as needed. You can rearrange the order of fencing methods that will be used for this node by clicking on Move
Figure 3.6. Dual-Power Fencing Configuration
3.7.4. Testing the Fence Configuration
As of Red Hat Enterprise Linux release 6.4, you
Restricted — Allows you to restrict the members that can run a particular cluster service. If none of the members in a restricted failover domain are available
Section 3.8.1, “Adding a Failover Domain”Section 3.8.2, “ Modifying a Failover Domain”Section 3.8.3, “ Deleting a Failover Domain”3.8.1. Adding a Fail
4. To enable setting failover priority of the members in the failover domain, click the Prioritized checkbox. With Prioritized checked, you can
To add a global cluster resource, follow the steps in this section. You can add a resource that islocal to a particular service when you configure the
3. On the Add Service Group to Cluster dialog box, at the Service Name text box, type the name of the service.
Note
Use a descriptive name that
When adding a resource to a service, whether it is an existing global resource or a resource available only to this service, you can specify whether the
Note
To verify the existence of the IP service resource used in a cluster service, you can use the /sbin/ip addr show command on a cluster node (ra
Chapter 4. Managing Red Hat High Availability Add-On WithCongaThis chapter describes various administrative tasks for managing Red Hat High Availabili
3. Click Remove. The system will ask you to confirm whether to remove the cluster from the luci management GUI.
For information on deleting a cluster
1. From the cluster-specific page, click on Nodes along the top of the cluster display. This displays the nodes that constitute the cluster. This is
7. When the process of adding a node is complete, click on the node name for the newly-added node to configure fencing for this node, as described in
To start a cluster, perform the following steps:1. Select all of the nodes in the cluster by clicking on the checkbox next to each node.2. Select th
Deleting a service — To delete any services that are not currently running, select any services you want to disable by clicking the checkbox for that
3. Execute service luci restore-db /var/lib/luci/data/lucibackupfile where lucibackupfile is the backup file to restore.
For example, the following
Chapter 5. Configuring Red Hat High Availability Add-On With the ccs Command
As of the Red Hat Enterprise Linux 6.1 release, the Red Hat High
Important
This chapter references commonly used cluster.conf elements and attributes. For a comprehensive list and description of cluster.conf elements
5.1.2. Viewing the Current Cluster Configuration
If at any time during the creation of a cluster configuration file you want to print the current file
--setrm
--setcman
--setmulticast
--setaltmulticast
--setfencedaemon
--setlogging
--setquorumd
For example, to reset all of the fence daemon properties
2. Creating a cluster. Refer to Section 5.4, “Creating and Modifying a Cluster”.3. Configuring fence devices. Refer to Section 5.5, “ Configuring Fe
ccs -h node-01.example.com --createcluster mycluster
The cluster name cannot exceed 15 characters.
If a cluster.conf file already exists on the host that
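Because the 15-character limit is easy to trip over when scripting cluster creation, a wrapper can validate the name first; the helper below is illustrative, not part of ccs:

```shell
# Refuse cluster names longer than the 15-character limit before
# handing them to ccs --createcluster.
check_cluster_name() {
    name=$1
    if [ ${#name} -gt 15 ]; then
        echo "error: cluster name '$name' exceeds 15 characters" >&2
        return 1
    fi
}

check_cluster_name mycluster && echo "name ok"    # prints "name ok"
```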
    </fencedevices>
    <rm>
    </rm>
</cluster>
When you add a node to the cluster, you can specify the number of votes the node contributes
For example, to configure a value for the post_fail_delay attribute, execute the following command. This command will overwrite the values of all
ccs -h host --rmfencedev fence_device_name
For example, to remove a fence device that you have named myfence from the cluster configuration file on cluster
fence_vmware - Fence agent for VMware
fence_vmware_soap - Fence agent for VMware over SOAP API
fence_wti - Fence agent for WTI
fence_xvm - Fence agent for
NoteIt is recommended that you configure multiple fencing mechanisms for each node. A fencingdevice can fail due to network split, a power outage, or
option, as described in Section 5.5, “ Configuring Fence Devices” . Each node is configured with aunique APC switch power port number: The port number
sync the cluster configuration file to all of the nodes, as described in Section 5.15, “ Propagating theConfiguration File to the Cluster Nodes” .5.7.
unique SAN physical port number: The port number for node-01.example.com is 11, the port number for node-02.example.com is 12, and the port
<unfence> <device name="sanswitch1" port="13" action="on"/> </unfence>
ccs -h node01.example.com --addmethod APC node01.example.com
2. Add a fence instance for the primary method. You must specify the fence device to use
Example 5.4, “cluster.conf After Adding Backup Fence Methods” shows a cluster.conf configuration file after you have added a power-based primary
    <rm>
    </rm>
</cluster>
Note that when you have finished configuring all of the components of your cluster, you will need to sync the
3. Add a fence instance for the first power supply to the fence method. You must specify thefence device to use for the node, the node this instance
ccs -h host --addfenceinst fencedevicename node method [options] action=onFor example, to configure a second fence instance in the configuration file
<fencedevice agent="fence_apc" ipaddr="apc_ip_example" login="login_example" name="apc2" passwd="
A failover domain is a named subset of cluster nodes that are eligible to run a cluster service in theevent of a node failure. A failover domain can h
NoteTo configure a preferred member, you can create an unrestricted failover domain comprisingonly one cluster member. Doing that causes a cluster ser
ccs -h host --rmfailoverdomainnode failoverdomain nodeNote that when you have finished configuring all of the components of your cluster, you will nee
Note
Use a descriptive name that clearly distinguishes the service from other services in the cluster.
When you add a service to the cluster configuration
options. For example, if you had not previously defined web_fs as a global service, youcould add it as a service-specific resource with the following
To remove a service and all of its subservices, execute the following command:ccs -h host --rmservice servicenameTo remove a subservice, execute the f
To print a list of the options you can specify for a particular service type, execute the following command:
ccs -h host --lsserviceopts service_type
For
A virtual machine resource requires at least a name and a path attribute. The name attribute should match the name of the libvirt domain and the path