Installation & Configuration of "High Availability Cluster (Heartbeat)" on RHEL 6
The node names are station1 and station2.
Run the command below to set the host name on each Linux machine.
On the first node (IP address 192.168.1.1):
hostname station1
On the second node (IP address 192.168.1.2):
hostname station2
Then edit the “/etc/sysconfig/network” file on each node and set HOSTNAME to “station1” or “station2” respectively.
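For reference, a minimal sketch of the two files involved (the NETWORKING line and the /etc/hosts entries are standard RHEL 6 practice rather than part of the original steps; listing both node names in /etc/hosts lets the nodes resolve each other without DNS):

# /etc/sysconfig/network on station1 (use HOSTNAME=station2 on the other node)
NETWORKING=yes
HOSTNAME=station1

# /etc/hosts on both nodes
192.168.1.1   station1
192.168.1.2   station2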
Note:
- After rebooting both machines, the “uname -n” command should return “station1” and “station2” respectively.
- There is no need to mention the VIP (Virtual IP) anywhere in the Linux network and host configuration. Heartbeat will manage the VIP during failover between “station1” and “station2”.
Now, we are going to install “heartbeat”.
Step:1
yum install heartbeat
Installing:
 heartbeat        i686    3.0.4-1.el6    epel    161 k
Installing for dependencies:
 heartbeat-libs   i686    3.0.4-1.el6    epel    260 k

Total download size: 420 k

Installing : heartbeat-3.0.4-1.el6.i686
Installing : heartbeat-libs-3.0.4-1.el6.i686
Installing : heartbeat-devel-3.0.4-1.el6.i686
Step:2
Install the RPMs below if you run into a dependency problem.
cluster-glue-libs-1.0.5-2.el6.i686.rpm
cluster-glue-1.0.5-2.el6.i686.rpm
resource-agents-3.9.2-7.el6.i686.rpm
Visit http://www.linux-ha.org/wiki/Downloads to download the necessary packages.
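If you downloaded those RPMs manually, “yum localinstall” will install them together and resolve their remaining dependencies (a sketch; the file names must match what you actually downloaded):

yum localinstall cluster-glue-libs-1.0.5-2.el6.i686.rpm cluster-glue-1.0.5-2.el6.i686.rpm resource-agents-3.9.2-7.el6.i686.rpm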
Step:3
Copy the “heartbeat” configuration files from the locations below.
cp /usr/share/doc/heartbeat-3.0.4/authkeys /etc/ha.d/
cp /usr/share/doc/heartbeat-3.0.4/ha.cf /etc/ha.d/
cp /usr/share/doc/heartbeat-3.0.4/haresources /etc/ha.d/
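You can quickly verify that all three files are now in place:

ls -l /etc/ha.d/authkeys /etc/ha.d/ha.cf /etc/ha.d/haresources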
Step:4
Now we need to modify the files in the “/etc/ha.d/” directory as given below. It is better to comment out all existing lines before applying the configuration below, on both servers.
1) In “ha.cf”, write the contents below;
logfile /var/log/ha-log    #Path of the heartbeat log file#
logfacility local0
keepalive 2                #Send a heartbeat every 2 seconds#
deadtime 30                #Declare a node dead after 30 seconds without heartbeats#
initdead 120               #Maximum seconds to wait for a node during initial startup#
bcast eth0                 #Broadcast heartbeat messages on this interface; change it to match your interface#
udpport 694                #UDP port for broadcast heartbeats#
auto_failback on           #The recovered primary node will take charge again#
node station1              #node1#
node station2              #node2#
2) In “haresources”, write the contents below. The format is: preferred node, cluster IP (VIP), service to manage;
station1 192.168.1.4 httpd    #Must be the same on both nodes#
3) In “authkeys”, write the contents below, then run “chmod 600 /etc/ha.d/authkeys” on both nodes (heartbeat refuses to start if this file is readable by other users);
auth 2
2 sha1 test-ha
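Since all three files must match on both nodes, one way to keep them in sync is to copy them from “station1” to “station2” (a sketch; it assumes root SSH access between the nodes):

scp /etc/ha.d/ha.cf /etc/ha.d/haresources /etc/ha.d/authkeys root@station2:/etc/ha.d/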
Step:5
We will use the Apache HTTPD service to test our configuration.
Open the “/etc/httpd/conf/httpd.conf” file and modify the line below.
Listen 192.168.1.4:80    #192.168.1.4 is the VIP and Apache will listen on port 80#
Save & exit.
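Before handing httpd over to heartbeat, you can sanity-check the configuration (note that this only checks syntax; httpd will not be able to bind to the VIP until heartbeat brings that address up):

httpd -t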
Step:6
Finally, start the “heartbeat” service on “station1”, then on “station2”. There is no need to start the HTTPD service manually, as heartbeat is responsible for bringing it up.
/etc/init.d/heartbeat start
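Once heartbeat is running on the primary node, you can confirm that it has acquired the VIP and started Apache (a quick check; “eth0” is assumed from the ha.cf above):

ip addr show eth0 | grep 192.168.1.4
netstat -tlnp | grep :80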
Step:7
Create a sample “index” file in the Apache “DocumentRoot” on each node, as per your need.
In your browser, open the “192.168.1.4” IP address; it will serve the index file of “station1”.
Here “station1” is the primary node, so heartbeat will show its index page only.
Now stop “heartbeat” on “station1”. After a few seconds, open “192.168.1.4” again; it will show the index page of “station2”.
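For example, the whole failover test can be run from the command line (a sketch; “/var/www/html” is the default Apache DocumentRoot on RHEL 6, and curl stands in for the browser):

# On station1:
echo "served by station1" > /var/www/html/index.html
# On station2:
echo "served by station2" > /var/www/html/index.html
# From a client machine:
curl http://192.168.1.4/    # -> served by station1
# On station1, simulate a failure:
/etc/init.d/heartbeat stop
# From the client, a few seconds later:
curl http://192.168.1.4/    # -> served by station2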
Step:8
Enable the “heartbeat” service at startup on both nodes. There is no need to enable “httpd”, as heartbeat brings it up itself.
chkconfig heartbeat on
chkconfig httpd off
You will find the heartbeat logs in the “/var/log/ha-log” file.
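While testing failover, it helps to keep the log open on both nodes; the resource takeover messages appear there:

tail -f /var/log/ha-log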


Check this diagram:

http://www.4shared.com/photo/jPawdOoQ/Screenshot_from_2012-06-24_125.html

