
RedHat Cluster Management

RedHat Cluster Suite And Conga - Linux Clustering

This how-to describes a simple step-by-step installation of the RedHat Cluster Suite on three CentOS nodes, preparing them as members of a cluster. You will also install the web-based management suite, known as Conga.

You will use three nodes to form the cluster and a fourth node as the cluster management node; the management node does not take part in the cluster itself. All the nodes and the management node should be resolvable, either by hosts file entries or by DNS, for example via /etc/hosts entries like those shown after the node list below.

Cluster Nodes:

cnode1:
eth0-192.168.2.151/24 - external-lan
eth1-192.168.1.200/26 - internal-lan cluster
cnode2:
eth0-192.168.2.152/24 - external-lan
eth1-192.168.1.201/26 - internal-lan cluster
cnode3:
eth0-192.168.2.153/24 - external-lan
eth1-192.168.1.202/26 - internal-lan cluster

Cluster Management Node:

centos:
eth0-192.168.2.150/24
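
If you are not using DNS, /etc/hosts entries like the following on every node will do. Which addresses you publish (external or internal LAN) depends on which network you want cluster traffic on, so treat this as a sketch using the internal cluster LAN for the cluster nodes:

# /etc/hosts - sample entries, adjust to the network carrying cluster traffic
192.168.1.200   cnode1
192.168.1.201   cnode2
192.168.1.202   cnode3
192.168.2.150   centos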

As the cluster, its management interface and the service daemons use TCP, for the purpose of this article you can disable the firewalls on these nodes.

OS - All Nodes:
CentOS 6 Minimal

Cluster Nodes - Software Installation:

yum groupinstall "High Availability"
yum install ricci

Cluster Management Node - Software Installation:

yum groupinstall "High Availability Management"
yum install ricci

Copy this initial sample cluster config file into /etc/cluster/cluster.conf on all the nodes cnode1, cnode2 and cnode3.
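
A minimal cluster.conf that matches the description below (cluster name cl1 with the three nodes defined) looks roughly like this; the config_version and nodeid values are only a starting point and no fencing is configured yet:

<?xml version="1.0"?>
<cluster config_version="1" name="cl1">
  <!-- define the three cluster members; nodeid must be unique -->
  <clusternodes>
    <clusternode name="cnode1" nodeid="1"/>
    <clusternode name="cnode2" nodeid="2"/>
    <clusternode name="cnode3" nodeid="3"/>
  </clusternodes>
  <!-- fence devices and resources can be added later from Conga -->
  <fencedevices/>
  <rm/>
</cluster>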

This initial file states that the cluster name is cl1 and defines the cluster nodes.

Now some services have to be configured and started, first on the cluster nodes and then on the management node, as shown below.

Cluster Nodes:

chkconfig iptables off
chkconfig ip6tables off
chkconfig ricci on
chkconfig cman on
chkconfig rgmanager on
chkconfig modclusterd on

Create a password for the ricci service user with

passwd ricci

service iptables stop
service ip6tables stop
service ricci start
service cman start
service rgmanager start
service modclusterd start

Cluster Management Node:

chkconfig iptables off
chkconfig ip6tables off
chkconfig luci on
chkconfig ricci on

service iptables stop
service ip6tables stop
service luci start
service ricci start

The luci service is the management service that presents the web-based cluster interface over HTTPS on port 8084 and can be accessed in any browser at
https://<management-node-hostname>:8084/

The ricci service is the underlying daemon that handles cluster configuration sync, file copying, and service start/stop, and it listens on TCP port 11111.
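
As a quick sanity check (assuming the standard net-tools netstat available on a CentOS 6 minimal install), you can confirm that ricci and luci are actually listening on their ports:

# on a cluster node - ricci should be listening on TCP 11111
netstat -tlnp | grep 11111
# on the management node - luci should be listening on TCP 8084
netstat -tlnp | grep 8084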

cman, rgmanager and modclusterd are the actual cluster services, which in turn start further services that make the clustering happen and keep it live.
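
Once cman and rgmanager are running on all three nodes, membership can be checked from any cluster node; clustat and cman_tool ship with these packages:

# show cluster membership and rgmanager status
clustat
# show quorum and node details as seen by cman
cman_tool status
cman_tool nodes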

Open a browser and enter the Conga node URL, which in this case is https://centos:8084/

After clicking 'OK' on the initial security warning you will be presented with the login screen. Enter the root user and root password of that system to start the interface.

Now click 'Add cluster', enter the first node cnode1 along with its ricci password, and click 'OK'. Conga will detect the other two nodes as well; add their ricci passwords and the cluster will be added to the cluster management interface, from which it can be managed and configured. Take care, though: the cluster.conf file sometimes does not get synced to all cluster nodes, and out-of-sync nodes can get fenced because of a configuration version mismatch. In that case copy the cluster.conf file from cnode1 to all the other nodes, as sketched below. If all the nodes are in sync, their uptime is shown in the cluster nodes list.
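
One way to resync manually, assuming cnode1 holds the correct copy, is to push the file with scp and then ask cman to activate the newer configuration version; adjust hostnames to your environment:

# on cnode1 - copy the authoritative config to the other nodes
scp /etc/cluster/cluster.conf cnode2:/etc/cluster/cluster.conf
scp /etc/cluster/cluster.conf cnode3:/etc/cluster/cluster.conf
# activate the config_version found in the local cluster.conf across the cluster
cman_tool version -r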

With that, your cluster is up, live and configured in no time, and other clusters can later be added to this management interface for easy maintenance.

Docker is perfectly doing well with the containerization. Since docker uses the Server/Client architecture to run the containers. So, even if I am a client or developer who just wants to create a docker image from Dockerfile I need to start the docker daemon which of course generates some extra overhead on the machine.  Also, a daemon that needs to run on your system, and it needs to run with root privileges which might have certain security implications. Here now the solution is available where we do not need to start the daemon to create the containers. We can create the images and push them any of the repositories and images are fully compatible to run on any of the environment.  Podman is an open-source Linux tool for working with containers. That includes containers in registries such as docker.io and quay.io. let's start with the podman to manage the containers.  Install the package  [root@rhel8 ~] # dnf install podman -y  OR [root@rhel8 ~] # yum