
Configuring the Gluster File System on CentOS 6.4


#################GlusterFS Demonstration#############################

Introduction:- Whether you are managing a small or medium network or an enterprise network for a large company, data storage is always a concern, whether in terms of a lack of disk space or an inefficient backup solution. In such cases GlusterFS is a very useful tool.

What is GlusterFS:- GlusterFS is an open source, clustered file system capable of scaling to several petabytes and handling thousands of clients. GlusterFS can be flexibly combined with commodity physical, virtual, and cloud resources to deliver highly available and performant enterprise storage at a fraction of the cost of traditional solutions.

Setup for GlusterFS:- GlusterFS can be installed and used on any Linux distribution; here I am going to use CentOS 6.4. The process is the same for other distributions, only the package installation step differs.
My setup uses the following configuration.

Storage servers:-
node1.linux.vs (192.168.160.1)
node2.linux.vs (192.168.160.2)
node3.linux.vs (192.168.160.3)
node4.linux.vs (192.168.160.4)
Client using the gluster file system:-
client.linux.vs (192.168.160.10)
On each server create an LVM partition and format it with the XFS filesystem (another filesystem such as ext3 or ext4 can also be used), and make sure name resolution is working between all nodes.
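
If DNS is not available for these names, one simple option is to add matching entries to /etc/hosts on every node and on the client, for example:

192.168.160.1   node1.linux.vs   node1
192.168.160.2   node2.linux.vs   node2
192.168.160.3   node3.linux.vs   node3
192.168.160.4   node4.linux.vs   node4
192.168.160.10  client.linux.vs  client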


How to create the LVM volume:-

# fdisk /dev/sdb
# pvcreate /dev/sdb1
# vgcreate vg_storage /dev/sdb1
# lvcreate -L 10G -n lv_home vg_storage
# mkfs.xfs -i size=512 /dev/vg_storage/lv_home
# mkdir /mnt/data/node1
# mount /dev/vg_storage/lv_home /mnt/data/node1
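
To keep the brick mounted after a reboot, an /etc/fstab entry along these lines can be added (shown for node1; adjust the mount point on the other nodes):

/dev/vg_storage/lv_home  /mnt/data/node1  xfs  defaults  0 0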

Note:- Follow the same process on the other storage nodes, changing the mount point to /mnt/data/node2, /mnt/data/node3, and /mnt/data/node4 respectively.

Installation of GlusterFS


# yum install glusterfs*

Below is the list of installed packages:
glusterfs-geo-replication-3.2.7-1.el6.x86_64
glusterfs-vim-3.2.7-1.el6.x86_64
glusterfs-fuse-3.2.7-1.el6.x86_64
glusterfs-rdma-3.2.7-1.el6.x86_64
glusterfs-3.2.7-1.el6.x86_64
glusterfs-devel-3.2.7-1.el6.x86_64
glusterfs-server-3.2.7-1.el6.x86_64
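
The list above can be reproduced at any time by querying the RPM database:

# rpm -qa | grep glusterfs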

Note:- Install the gluster packages on each storage node.

After installation, it's time to start the glusterd service on each storage node.

# /etc/init.d/glusterd status
# /etc/init.d/glusterd start          
# chkconfig glusterd on
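
Optionally, we can confirm that glusterd is listening on its management port (24007/tcp by default; the port may differ in other builds):

# netstat -tlnp | grep glusterd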

Setting up trusted storage server pools

# gluster peer status

This command shows the status of the trusted storage pool. Since we have not yet added any storage servers, the pool will be empty.

# gluster peer probe node2.linux.vs
# gluster peer probe node3.linux.vs
# gluster peer probe node4.linux.vs

Note:- We don't need to probe node1 because we are running these commands from it; it automatically becomes part of the trusted storage pool.

# gluster peer status

Number of Peers: 3

Hostname: node2
Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7
State: Peer in Cluster (Connected)

Hostname: node4
Uuid: 3e0caba-9df7-4f66-8e5d-cbc348f29ff7
State: Peer in Cluster (Connected)


Setting up GlusterFS server volumes:-
Storage volumes of the following types can be created in your storage environment:


  • Distributed - Distributed volumes distribute files across the bricks in the volume. You can use distributed volumes where the requirement is to scale storage and redundancy is either not important or is provided by other hardware/software layers.
  • Replicated - Replicated volumes replicate files across bricks in the volume. You can use replicated volumes in environments where high availability and high reliability are critical.
  • Striped - Striped volumes stripe data across bricks in the volume. For best results, you should use striped volumes only in high-concurrency environments accessing very large files.
  • Distributed Striped - Distributed striped volumes stripe data across two or more nodes in the cluster. You should use distributed striped volumes where the requirement is to scale storage and high-concurrency access to very large files is critical.
  • Distributed Replicated - Distributed replicated volumes distribute files across replicated bricks in the volume. You can use distributed replicated volumes in environments where the requirement is to scale storage and high reliability is critical. Distributed replicated volumes also offer improved read performance in most environments (a sketch of the create syntax follows this list).
  • Distributed Striped Replicated - Distributed striped replicated volumes distribute striped data across replicated bricks in the cluster. For best results, you should use distributed striped replicated volumes in highly concurrent environments where parallel access to very large files and performance are critical. In this release, configuration of this volume type is supported only for MapReduce workloads.
  • Striped Replicated - Striped replicated volumes stripe data across replicated bricks in the cluster. For best results, you should use striped replicated volumes in highly concurrent environments where there is parallel access to very large files and performance is critical. In this release, configuration of this volume type is supported only for MapReduce workloads.
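
For example, the same four bricks could instead be combined into a distributed-replicated volume. The command below is only a sketch of the syntax (the volume name linux-vs-rep is illustrative and the volume is not created in this demonstration); with replica 2, bricks are paired in the order they are listed, so node1/node2 form one replica set and node3/node4 the other:

# gluster volume create linux-vs-rep replica 2 node1:/mnt/data/node1 node2:/mnt/data/node2 node3:/mnt/data/node3 node4:/mnt/data/node4

For this demonstration we create a plain distributed volume across all four bricks: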

# gluster volume create linux-vs node1:/mnt/data/node1 node2:/mnt/data/node2 node3:/mnt/data/node3 node4:/mnt/data/node4

# gluster volume info

Volume Name: linux-vs
Type: Distribute
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: node1:/mnt/data/node1
Brick2: node2:/mnt/data/node2
Brick3: node3:/mnt/data/node3
Brick4: node4:/mnt/data/node4

# gluster volume start linux-vs

Now our gluster storage volume is ready and we can mount it on the client side.
Information about peers, nodes, and volumes is available under /var/lib/glusterd:

# cd /var/lib/glusterd

On the client side:

# showmount -e node1

It will show the volume name which we shared from the gluster storage servers.

# mkdir /storage

# mount.glusterfs node1:/linux-vs  /storage 
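
To have the client mount persist across reboots, a glusterfs entry can be added to /etc/fstab on the client (a sketch; the _netdev option delays the mount until the network is up):

node1:/linux-vs  /storage  glusterfs  defaults,_netdev  0 0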

# touch /storage/file{1,2,3,4}

Now the files will be available across node1, node2, node3, and node4, because this is a distributed storage volume: each file is placed on exactly one of the bricks.
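
To verify the distribution, list the brick directory on each storage node (run from node1, assuming SSH access to the other nodes); each file should appear on exactly one brick, with placement decided by GlusterFS's file-name hashing:

# ls /mnt/data/node1
# ssh node2 ls /mnt/data/node2
# ssh node3 ls /mnt/data/node3
# ssh node4 ls /mnt/data/node4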

Soon we will publish a post on migrating and managing storage volumes.
Thanks.



Docker is perfectly doing well with the containerization. Since docker uses the Server/Client architecture to run the containers. So, even if I am a client or developer who just wants to create a docker image from Dockerfile I need to start the docker daemon which of course generates some extra overhead on the machine.  Also, a daemon that needs to run on your system, and it needs to run with root privileges which might have certain security implications. Here now the solution is available where we do not need to start the daemon to create the containers. We can create the images and push them any of the repositories and images are fully compatible to run on any of the environment.  Podman is an open-source Linux tool for working with containers. That includes containers in registries such as docker.io and quay.io. let's start with the podman to manage the containers.  Install the package  [root@rhel8 ~] # dnf install podman -y  OR [root@rhel8 ~] # yum