Tuesday, January 24, 2012

RedHat Cluster Management

RedHat Cluster Suite And Conga - Linux Clustering

This how-to describes an easy step-by-step installation of the Red Hat Cluster Suite on three CentOS nodes and prepares them as members of a cluster. You will also install the web-based management suite, known as Conga.

You will use three nodes to form the cluster and one additional node for cluster management; the management node does not take part in the cluster itself. All the nodes and the management node should be resolvable, either through hosts file entries or through DNS.

Cluster Nodes:

cnode1:
eth0-192.168.2.151/24 - external-lan
eth1-192.168.1.200/26 - internal-lan cluster
cnode2:
eth0-192.168.2.152/24 - external-lan
eth1-192.168.1.201/26 - internal-lan cluster
cnode3:
eth0-192.168.2.153/24 - external-lan
eth1-192.168.1.202/26 - internal-lan cluster

Cluster Management Node:

centos:
eth0-192.168.2.150/24

As the cluster, its management interface and the service daemons use TCP, you can disable the firewalls on these nodes for the purpose of this article.

OS - All Nodes:
CentOS 6 Minimal

Cluster Nodes - Software Installation:

yum groupinstall "High Availability"
yum install ricci

Cluster Management Node - Software Installation:

yum groupinstall "High Availability Management"
yum install ricci

Copy this initial sample cluster config file to /etc/cluster/cluster.conf on all the nodes cnode1, cnode2 and cnode3.

This initial file states that the cluster name is cl1 and defines the cluster nodes.
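A minimal sketch of what that file can look like, assuming no fencing or failover domains are configured yet (the nodeid values and config_version are illustrative):

```xml
<?xml version="1.0"?>
<cluster name="cl1" config_version="1">
        <clusternodes>
                <clusternode name="cnode1" nodeid="1"/>
                <clusternode name="cnode2" nodeid="2"/>
                <clusternode name="cnode3" nodeid="3"/>
        </clusternodes>
</cluster>
```

Fence devices and services can be added later from the Conga interface.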

Now some services have to be configured and started, first on the nodes and then on the management node, as shown below.

Cluster Nodes:

chkconfig iptables off
chkconfig ip6tables off
chkconfig ricci on
chkconfig cman on
chkconfig rgmanager on
chkconfig modclusterd on

Create a password for the ricci service user with

passwd ricci

service iptables stop
service ip6tables stop
service ricci start
service cman start
service rgmanager start
service modclusterd start

Cluster Management Node:

chkconfig iptables off
chkconfig ip6tables off
chkconfig luci on
chkconfig ricci on

service iptables stop
service ip6tables stop
service luci start
service ricci start

The luci service is the management service that presents the web-based cluster interface over HTTPS on port 8084. It can be accessed in any browser at
https://<management-node>:8084/

The ricci service is the underlying daemon that handles cluster configuration sync, file copying, service start/stop, etc., and uses TCP port 11111.

cman, rgmanager and modclusterd are the actual cluster services, which in turn start other services that make the clustering happen and keep it alive.

Open a browser and enter the Conga node URL, which in this case is https://centos:8084/

After clicking 'OK' past the initial certificate warning you will be presented with the login screen. Enter the root user and root password of that system to log in to the interface.

Now click 'Add cluster', enter the first node cnode1 and its ricci password, and click 'OK'; the other two nodes will be detected as well. Add their ricci passwords and the cluster will be added to the cluster management interface, from which the cluster can be managed and configured. Take care: the cluster.conf file sometimes does not get synced to all cluster nodes, and out-of-sync nodes will get fenced due to a configuration version mismatch. In that case, copy the cluster.conf file from cnode1 to all the other nodes. If all the nodes are in sync, the uptime is shown in the cluster node list.
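If you need to push the file by hand, a sketch like the following shows the idea (run on cnode1 as root; it only prints the scp commands so you can review them before executing):

```shell
# Dry run: print the commands that would copy cluster.conf from this
# node to the remaining cluster nodes used in this article.
for n in cnode2 cnode3; do
    echo scp /etc/cluster/cluster.conf root@$n:/etc/cluster/cluster.conf
done
```

Drop the echo to actually copy the file, then restart the cluster services on any node that was fenced.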

That is it for getting a cluster configured and managed: your cluster is up, live and configured in no time, and other clusters can later be added to this management interface for easy maintenance.

Sunday, January 22, 2012

Squid: block websites

Squid content filtering: block download of music / MP3, mpg, mpeg, exe files


Q. For security reasons and to save bandwidth, I would like to configure the Squid proxy server so that my users cannot download any of the following file types:
MP3
MPEG
MPG
AVG
AVI
EXE

How do I configure squid content filtering?

A. You can use a squid ACL (access control list) to block all these files easily.

How do I block music files using squid content filtering ACL?

First, open the squid.conf file, /etc/squid/squid.conf:

# vi /etc/squid/squid.conf
Now add the following lines to your squid ACL section:

acl blockfiles urlpath_regex "/etc/squid/blocks.files.acl"
To display a custom error message when a file is blocked:
# Deny all blocked extension
deny_info ERR_BLOCKED_FILES blockfiles
http_access deny blockfiles

Save and close the file.

Create a custom error message HTML file called ERR_BLOCKED_FILES in the /etc/squid/error/ directory or the /usr/share/squid/errors/English directory.
# vi ERR_BLOCKED_FILES
Append the following content:



ERROR: Blocked file content


File is blocked due to new IT policy


Please contact helpdesk for more information:
Phone: 555-12435 (ext 44)

Email: helpdesk@yourcorp.com

Caution: Do not include HTML close tags, as the page will be closed by Squid.
Now create /etc/squid/blocks.files.acl file:
# vi /etc/squid/blocks.files.acl
Append the following text:
\.[Ee][Xx][Ee]$
\.[Aa][Vv][Ii]$
\.[Mm][Pp][Gg]$
\.[Mm][Pp][Ee][Gg]$
\.[Mm][Pp]3$

Save and close the file. Restart Squid:
# /etc/init.d/squid restart
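The extension patterns can be sanity-checked outside Squid with grep; a case-insensitive combined version of the same regexes should match the blocked names and let everything else through (the sample file names below are made up):

```shell
# grep prints the matching (blocked) names: song.MP3 and movie.avi.
# report.pdf does not match and is filtered out of the output.
printf 'song.MP3\nmovie.avi\nreport.pdf\n' | grep -iE '\.(exe|avi|mpg|mpeg|mp3)$'
```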

Squid in action: [screenshot: Squid content filtering howto]

Thursday, January 12, 2012

ESX and ESXi Compared with Other Vendors

ESXi and ESX Architectures Compared

VMware ESX Architecture. In the original ESX architecture, the virtualization kernel (referred to as the vmkernel) is augmented with a management partition known as the console operating system (also known as COS or service console). The primary purpose of the Console OS is to provide a management interface into the host. Various VMware management agents are deployed in the Console OS, along with other infrastructure service agents (e.g. name service, time service, logging, etc). In this architecture, many customers deploy other agents from 3rd parties to provide particular functionality, such as hardware monitoring and system management. Furthermore, individual admin users log into the Console OS to run configuration and diagnostic commands and scripts.

VMware ESXi Architecture. In the ESXi architecture, the Console OS has been removed and all of the VMware agents run directly on the vmkernel. Infrastructure services are provided natively through modules included with the vmkernel. Other authorized 3rd party modules, such as hardware drivers and hardware monitoring components, can run in the vmkernel as well. Only modules that have been digitally signed by VMware are allowed on the system, creating a tightly locked-down architecture. Preventing arbitrary code from running on the ESXi host greatly improves the security of the system.

Architectures Compared

VMware ESX [~2 GB]:

  • VMware agents run in Console OS
  • Nearly all other management functionality provided by agents running in the Console OS
  • Users must log into Console OS in order to run commands for configuration and diagnostics

VMware ESXi [< 150 MB]:

  • VMware agents ported to run directly on VMkernel
  • Authorized 3rd party modules can also run in VMkernel to provide specific functionality:
    • Hardware monitoring
    • Hardware drivers
  • VMware components and third party components can be updated independently
  • The “dual-image” approach lets you revert to the prior image if desired
  • Other capabilities necessary for integration into an enterprise datacenter are provided natively
  • No other arbitrary code is allowed on the system


Understand the Difference between ESX and ESXi

VMware ESXi is VMware’s most advanced hypervisor architecture. Learn about the differences with the previous generation architecture, VMware ESX:

Capability | ESX 4.1 | ESXi 4.1 | ESXi 5.0
Service Console | Present | Removed | Removed
Admin/config CLIs | COS + vCLI | PowerCLI + vCLI | PowerCLI + vCLI (enhanced)
Advanced Troubleshooting | COS | Tech Support Mode | ESXi Shell
Scripted Installation | Supported | Supported | Supported
Boot from SAN | Supported | Supported | Supported
SNMP | Supported | Supported (limited) | Supported
Active Directory | Integrated | Integrated | Integrated
HW Monitoring | 3rd party agents in COS | CIM providers | CIM providers
Serial Port Connectivity | Supported | Not Supported | Supported
Jumbo Frames | Supported | Supported | Supported
Rapid deployment and central management of hosts via Auto Deploy | Not Supported | Not Supported | Supported
Custom image creation and management | Not Supported | Not Supported | Supported
Secure syslog | Not Supported | Not Supported | Supported
Management interface firewall | Supported | Not Supported | Supported



Compare ESXi to Other Vendors' Offerings

Hypervisor attributes compared: VMware ESXi 5.0 vs. Windows Server 2008 R2 SP1 with Hyper-V vs. Citrix XenServer 5.6 FP1

Small Disk Footprint

  • VMware ESXi 5.0: 144 MB disk footprint
  • Hyper-V: >3 GB with Server Core installation, ~10 GB with full Windows Server installation
  • XenServer: 1 GB

OS Independence

  • VMware ESXi 5.0: no reliance on a general purpose operating system
  • Hyper-V: relies on Windows 2008 in the Parent Partition
  • XenServer: relies on Linux in the Dom0 management partition

Hardened Drivers

  • VMware ESXi 5.0: optimized with hardware vendors
  • Hyper-V: generic Windows drivers
  • XenServer: generic Linux drivers

Advanced Memory Management

  • VMware ESXi 5.0: ability to reclaim unused memory, de-duplicate memory pages, compress memory pages
  • Hyper-V: only uses ballooning; no ability to de-duplicate or compress pages
  • XenServer: only uses ballooning; no ability to de-duplicate or compress pages; does not adjust memory allocation based on VM usage

Advanced Storage Management

  • Hyper-V: lacks an integrated cluster file system, no live storage migration
  • XenServer: lacks an integrated cluster file system, no live storage migration, storage features support very few arrays

High I/O Scalability

  • VMware ESXi 5.0: direct driver model
  • Hyper-V: I/O bottleneck in the parent OS
  • XenServer: I/O bottleneck in the Dom0 management OS

Host Resource Management

  • VMware ESXi 5.0: network traffic shaping, per-VM resource shares, quality of service priorities for storage and network I/O
  • Hyper-V: lacks similar capabilities
  • XenServer: lacks similar capabilities

Performance Enhancements

  • VMware ESXi 5.0: AMD RVI, Intel EPT, large memory pages, universal 32-way vSMP, VMI paravirtualization, VMDirectPath I/O, PV guest SCSI driver
  • Hyper-V: large memory pages; 4-way vSMP on Windows 2008 and Windows 7 VMs only
  • XenServer: no large memory pages, no paravirt guest SCSI device, requires inflexible SR-IOV

Virtual Security Technology

  • VMware ESXi 5.0: VMware VMsafe™ enables hypervisor-level security introspection
  • Hyper-V: nothing comparable
  • XenServer: nothing comparable

Flexible Resource Allocation

  • VMware ESXi 5.0: hot add VM vCPUs and memory, VMFS volume grow, hot extend virtual disks, hot add virtual disks
  • Hyper-V: only hot add of virtual disks
  • XenServer: nothing comparable

Custom Image Creation and Management

  • VMware ESXi 5.0: VMware Image Builder allows administrators to create custom ESXi images for different types of deployment, such as ISO-based installation, PXE-based installation, and Auto Deploy
  • Hyper-V: nothing comparable
  • XenServer: nothing comparable

Auto Deploy

  • VMware ESXi 5.0: vSphere Auto Deploy enables faster provisioning of multiple hosts; new hosts are automatically provisioned based on rules defined by the user
  • Hyper-V: requires in-depth setup in Systems Center Configuration Manager
  • XenServer: nothing comparable

Management Interface Firewall

  • VMware ESXi 5.0: the ESXi Firewall is a service-oriented, stateless firewall that protects the ESXi 5.0 management interface; configured using the vSphere Client or at the command line with esxcli interfaces
  • Hyper-V: nothing comparable
  • XenServer: nothing comparable

Enhanced Virtual Hardware

  • VMware ESXi 5.0: 32-way virtual SMP, 1 TB virtual machine RAM, non-hardware accelerated 3D graphics, USB 3.0 device support, Unified Extensible Firmware Interface (UEFI)
  • Hyper-V: 4-way virtual SMP only, 64 GB RAM per virtual machine
  • XenServer: 8-way virtual SMP only, 32 GB RAM per virtual machine