
Red Hat RHEV vs VMware ESX

In 2009, Red Hat launched Red Hat Enterprise Virtualization (RHEV) to compete in the commercial virtualization market dominated by VMware. RHEV has two components: the Red Hat Enterprise Virtualization Manager (RHEV-M) and a managed hypervisor, which can be either RHEV-H (RHEV Hypervisor, a trimmed-down version of RHEL) or a full-blown RHEL 5.5 (64-bit) or newer.
On paper, RHEV's feature list does not look bad. But what is revealed when we dig into the technical details and compare it with VMware?

                           RHEV 2.2                            ESX 4

Manager
  Name                     RHEV-M                              vCenter
  Compatible OS            Windows 2003, Windows 2008 R2       Windows XP, Windows 2003,
                                                               Windows 2008, Windows 2008 R2
  Backend DB               Microsoft SQL Server                Microsoft SQL Server, Oracle
  Application type         Web application (WPF .xbap)         Windows native application
  User interface           Web UI                              Web UI, Windows native application
  CLI [1]                  PowerShell                          PowerShell (PowerCLI), vCLI
  SDK & API                PowerShell                          PowerShell, Perl, C#, Java

Hypervisor
  Type                     Linux kernel (KVM)                  Proprietary
  Manager agent            Python script                       Binary daemon
  HA/Migration [3]         Yes                                 Yes
  Manager independent [2]  No                                  Yes
  CLI [4]                  No                                  esxcfg-*/vimsh commands
  SDK & API                No                                  PowerShell, Perl, C#, Java
  Storage type [5]         NFS/iSCSI/FC                        Local disk, NFS, iSCSI, FC

Guest OS
  Supported OS [6]         Red Hat Enterprise Linux, Windows   All major Linux distributions,
                                                               Windows, Solaris, Mac OS/BSD
  Clone [7]                Supported                           Supported
  Snapshot [8]             Limited support                     Supported
  Supported hard disk [9]  IDE, VirtIO                         IDE, SCSI

Cost                       ~2/3 of VMware's cost               Expensive


NOTES:
[1] Manager CLI: RHEV-M's PowerShell interface offers far fewer cmdlets than VMware's PowerCLI.

[2] Manager independent: In my opinion, this is RHEV's biggest design mistake. RHEV-M is the central brain and the hypervisor is a dumb host, which means you are NOT supposed to log in to the hypervisor to do configuration or VM operations, e.g. add a virtual network or start/stop VMs. Everything must be done through RHEV-M. Each VMware ESX host, on the other hand, is intelligent by design: you can perform almost anything with esxcfg-*/vimsh commands, and the host relies on the manager only for HA and Distributed Resource Scheduling. (If RHEV-M fails, VMs on RHEV-H will not be interrupted, but don't touch them, because you can't restart them without RHEV-M.)
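
For illustration, this is the kind of host-local operation ESX allows without the manager. The command names are standard ESX 4 service-console tools; the vSwitch name and VM ID below are made up for the example:

```shell
# Run directly on an ESX 4 host, no vCenter needed:
esxcfg-vswitch -l             # list virtual switches
esxcfg-vswitch -a vSwitch1    # add a new virtual switch
esxcfg-nics -l                # list physical NICs
vim-cmd vmsvc/getallvms       # list registered VMs with their IDs
vim-cmd vmsvc/power.on 16     # power on the VM with ID 16
```

None of this has a RHEV-H equivalent: without RHEV-M, the RHEV hypervisor exposes no supported interface for these operations.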

[3] Hypervisor HA: RHEV requires some form of fencing method for HA, e.g. a smart power switch or a LOM card, to "shoot the hypervisor in the head."
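
As a sketch, the fencing step looks like what the RHEL fence agents do; the address and credentials below are placeholders:

```shell
# Power-cycle a hung hypervisor through its IPMI/LOM interface
# so its VMs can be safely restarted on another host.
fence_ipmilan -a 192.168.1.50 -l admin -p secret -o reboot
```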

[4] Hypervisor CLI: KVM itself supports the libvirt CLI tools, but RHEV does not use libvirt.
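
On a plain KVM host (not RHEV-H), libvirt's virsh gives exactly the kind of local control RHEV-H lacks; the guest name is a placeholder:

```shell
virsh list --all         # show all defined guests and their state
virsh start myguest      # boot a guest
virsh shutdown myguest   # gracefully shut a guest down
```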

[5] Storage type: You can't use RHEV-H local storage; it is not visible in the manager. A RHEV datacenter has a "storage type" attribute (NFS/iSCSI/FC), and only storage domains of that same type can be attached to the datacenter.

[6] Supported guest OS: On paper, RHEL and Windows are the only supported guest OSes, but you can install almost any x86 OS, because RHEV-H is based on KVM (full virtualization), not para-virtualization.

[7] Clone: RHEV doesn't call it cloning; you have to choose a template when creating a new VM. VMware supports cloning from either a template or an existing VM.

[8] Snapshot: You have to shut down a RHEV VM before taking a snapshot of it.

[9] VirtIO: RHEL 5.x has a built-in VirtIO driver, and other Linux distributions should have one as well. For Windows, RHEV provides virtual floppy files (virtio*.vfd) to load the driver during installation. Any other OS without a VirtIO driver has to use IDE (SCSI is not supported; VirtIO is supposed to deliver better performance than SCSI anyway).
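
RHEV attaches VirtIO disks through its own management stack, but the underlying idea can be sketched with a plain qemu-kvm invocation (the image paths and ISO names below are placeholders):

```shell
# Boot a Windows installer with the system disk attached as a
# VirtIO block device; the virtio*.vfd virtual floppy supplies
# the storage driver during Windows setup.
qemu-kvm -m 2048 \
  -drive file=/var/lib/images/win2008.img,if=virtio \
  -fda /usr/share/virtio-win/virtio-drivers.vfd \
  -cdrom /isos/win2008.iso \
  -boot d
```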

Conclusion:
In my opinion, RHEV Server is not yet enterprise-ready due to limitations [3], [4], and [8]. RHEV Server loses to VMware ESX in almost every feature compared here. However, RHEV does a better job in desktop virtualization thanks to Qumranet, whose roots were in desktop virtualization. (In 2008, Red Hat acquired Qumranet, from which RHEV-M originated.)

It is reported that Red Hat is developing RHEV 3, which will be based on JBoss (Java) running on Linux with a PostgreSQL database backend. Hopefully RHEV 3 will also redesign RHEV-H to make it "intelligent", for example by integrating libvirt to give the hypervisor its own CLI.
