Sunday, April 21, 2013

Gluster File System Configuring on CentOS 6.4


#################GlusterFS Demonstration#############################

Introduction:- Whether you are managing a small or medium network or an enterprise network for a large company, data storage is always a concern, be it a lack of disk space or an inefficient backup solution. In such cases GlusterFS is a very useful tool.

What is GlusterFS:- GlusterFS is an open source, clustered file system capable of scaling to several petabytes and handling thousands of clients. GlusterFS can be flexibly combined with commodity physical, virtual, and cloud resources to deliver highly available and performant enterprise storage at a fraction of the cost of traditional solutions.

Setup for GlusterFS:- GlusterFS can be installed and used on any Linux distribution; here I am going to use CentOS 6.4. The process is the same for other distributions, only the package installation differs.
In my setup I am using the configuration below.

Storage server:-
Node1.linux.vs (192.168.160.1)
Node2.linux.vs (192.168.160.2)
Node3.linux.vs (192.168.160.3)
Node4.linux.vs (192.168.160.4)
Client to use gluster file system:-
Client.linux.vs (192.168.160.10)
On each server create an LVM partition and format it with the XFS filesystem (another filesystem such as ext3/ext4 can also be used), and make sure name resolution is working between all of the nodes.
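
If you do not have a DNS server for this lab, name resolution can be handled with /etc/hosts. Below is a minimal sketch, assuming the hostnames and IP addresses from the setup above; add the same entries on every storage node and on the client:

# cat >> /etc/hosts << EOF
192.168.160.1   node1.linux.vs   node1
192.168.160.2   node2.linux.vs   node2
192.168.160.3   node3.linux.vs   node3
192.168.160.4   node4.linux.vs   node4
192.168.160.10  client.linux.vs  client
EOF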


How to create Lvm:-

# fdisk /dev/sdb          (create a single partition, /dev/sdb1)
# pvcreate /dev/sdb1
# vgcreate vg_storage /dev/sdb1
# lvcreate -L 10G -n lv_home vg_storage
# mkfs.xfs -i size=512 /dev/vg_storage/lv_home
# mkdir /mnt/data/node1
# mount /dev/vg_storage/lv_home /mnt/data/node1
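
To make the brick mount persistent across reboots, an entry can be added to /etc/fstab. A minimal sketch, assuming the logical volume and mount point created above:

# echo "/dev/vg_storage/lv_home  /mnt/data/node1  xfs  defaults  0 0" >> /etc/fstab
# mount -a          (re-reads fstab and verifies the new entry)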

Note:- Follow the same process on the other storage nodes (using /mnt/data/node2, /mnt/data/node3 and /mnt/data/node4 respectively).

Installation of glusterFs


# yum install glusterfs*

Below is the list of installed packages:
glusterfs-geo-replication-3.2.7-1.el6.x86_64
glusterfs-vim-3.2.7-1.el6.x86_64
glusterfs-fuse-3.2.7-1.el6.x86_64
glusterfs-rdma-3.2.7-1.el6.x86_64
glusterfs-3.2.7-1.el6.x86_64
glusterfs-devel-3.2.7-1.el6.x86_64
glusterfs-server-3.2.7-1.el6.x86_64

Note:- Install the gluster packages on each storage node.
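
To confirm that the same GlusterFS version is installed everywhere, the version can be checked on each node:

# glusterfs --version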

After installation, it's time to start the glusterd service on each storage node.

# /etc/init.d/glusterd status
# /etc/init.d/glusterd start          
# chkconfig glusterd on
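
If iptables is enabled on the storage nodes, the Gluster ports also have to be opened. A rough sketch for GlusterFS 3.2 (TCP 24007 is the glusterd management port and each brick listens on its own port starting at 24009; widen the range to match your number of bricks):

# iptables -I INPUT -p tcp --dport 24007:24010 -j ACCEPT
# service iptables save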

Setting up trusted storage server pools

# gluster peer status

This command shows the status of the trusted storage pool.
Since we have not added any storage servers yet, the list will be empty.

# gluster peer probe node2.linux.vs
# gluster peer probe node3.linux.vs
# gluster peer probe node4.linux.vs

Note:- We do not need to probe node1 because we are running the commands on this node and it is automatically part of the storage cluster.

# gluster peer status

Number of Peers: 3

Hostname: node2
Uuid: 5e987bda-16dd-43c2-835b-08b7d55e94e5
State: Peer in Cluster (Connected)

Hostname: node3
Uuid: 1e0ca3aa-9ef7-4f66-8f15-cbc348f29ff7
State: Peer in Cluster (Connected)

Hostname: node4
Uuid: 3e0caba-9df7-4f66-8e5d-cbc348f29ff7
State: Peer in Cluster (Connected)


Setting up GlusterFS server volumes:-
Storage volumes of the following types can be created in your storage environment (a syntax sketch for some of these types follows the list):


  • Distributed - Distributed volumes distribute files throughout the bricks in the volume. You can use distributed volumes where the requirement is to scale storage and redundancy is either not important or is provided by other hardware/software layers.
  • Replicated - Replicated volumes replicate files across bricks in the volume. You can use replicated volumes in environments where high availability and high reliability are critical.
  • Striped - Striped volumes stripe data across bricks in the volume. For best results, you should use striped volumes only in high-concurrency environments accessing very large files.
  • Distributed Striped - Distributed striped volumes stripe data across two or more nodes in the cluster. You should use distributed striped volumes where the requirement is to scale storage and access to very large files in high-concurrency environments is critical.
  • Distributed Replicated - Distributed replicated volumes distribute files across replicated bricks in the volume. You can use distributed replicated volumes in environments where the requirement is to scale storage and high reliability is critical. Distributed replicated volumes also offer improved read performance in most environments.
  • Distributed Striped Replicated - Distributed striped replicated volumes distribute striped data across replicated bricks in the cluster. For best results, you should use distributed striped replicated volumes in highly concurrent environments where parallel access to very large files and performance are critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.
  • Striped Replicated - Striped replicated volumes stripe data across replicated bricks in the cluster. For best results, you should use striped replicated volumes in highly concurrent environments where there is parallel access to very large files and performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.
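
The volume type is selected on the gluster volume create command line. Below is a syntax sketch only, assuming a hypothetical volume name test-vol and the brick directories used in this setup (a brick can belong to only one volume, so this is not an extra step to run in this walkthrough). With replica 2 every file is kept on two bricks, so four bricks give a distributed replicated volume:

# gluster volume create test-vol replica 2 transport tcp node1:/mnt/data/node1 node2:/mnt/data/node2 node3:/mnt/data/node3 node4:/mnt/data/node4

In this article, however, we create a plain distributed volume: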

# gluster volume create linux-vs node1:/mnt/data/node1 node2:/mnt/data/node2 node3:/mnt/data/node3 node4:/mnt/data/node4

# gluster volume info

Volume Name: linux-vs
Type: Distribute
Status: Created
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: node1:/mnt/data/node1
Brick2: node2:/mnt/data/node2
Brick3: node3:/mnt/data/node3
Brick4: node4:/mnt/data/node4

# gluster volume start linux-vs

Now our Gluster storage server is ready and we can mount the volume on the client side.
Information about peers, nodes, and volumes is available under:

# cd /var/lib/glusterd

Client side.

# showmount -e node1

It will show the volume name that we shared from the Gluster storage servers (GlusterFS also exports every volume over its built-in NFS server, which is why showmount works here).

# mkdir /storage

# mount.glusterfs node1:/linux-vs  /storage 

# touch /storage/file{1,2,3,4}

Now the files will be spread across node1, node2, node3, and node4,
because it is a distributed storage volume (each file is stored in full on exactly one brick).
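
To mount the volume automatically at boot on the client, an /etc/fstab entry can be used. A minimal sketch, assuming the node1:/linux-vs volume and the /storage mount point from above (_netdev delays the mount until the network is up):

# echo "node1:/linux-vs  /storage  glusterfs  defaults,_netdev  0 0" >> /etc/fstab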

We will soon publish a post on migrating and managing storage volumes.
Thanks.



Saturday, April 20, 2013

Nested Enterprise Virtualization with OVirt 3.2

In the virtualization field, it's time to add another virtualization layer and run VMs inside VMs.
This was a long-awaited feature, and it has finally been delivered by oVirt 3.2. It's a great achievement for open source on the way to building a great cloud. Here are the steps to enable nested enterprise virtualization with oVirt 3.2.

Steps to enable Nested Virtualization in ovirt 3.2:- 
1. Obviously the first step is to install oVirt Engine 3.2 on Fedora 18 through the official oVirt repositories: http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm
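
A rough sketch of this step, assuming the release RPM above is still the correct repository package:

# rpm -ivh http://ovirt.org/releases/ovirt-release-fedora.noarch.rpm
# yum install ovirt-engine
# engine-setup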

2. Then we add a Fedora 18 node. On this node, install the vdsm daemon together with the vdsm-hook-nestedvt.noarch package, which is actually the one that makes this little black magic trick work.
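
A minimal sketch of the package installation on the node, assuming the same oVirt repository is enabled there as well:

# yum install vdsm vdsm-hook-nestedvt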


3. Don't forget to enable the KVM nested virtualization.

4. How to enable nested virtualization:

# echo "options kvm-intel nested=1" > /etc/modprobe.d/kvm-intel.conf
# modprobe -r kvm-intel
# modprobe kvm-intel nested=1
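
To verify that nesting is actually enabled on an Intel host, check the module parameter (it should report Y or 1):

# cat /sys/module/kvm_intel/parameters/nested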

Here is the list of  packages involved in this experiment: 

Kernel: 3.7.9-201.fc18.x86_64

Libvirt: libvirt-0.10.2.3-1.fc18
Vdsm: vdsm-4.10.3-8.fc18
Vdsm-hook-nestedvt.noarch-4.10.3-8.fc18



Image of Nested Virtualization..