Posts

Showing posts from 2015

VMware Workstation 12 Working with Fedora 23 !!

Following these instructions we can get VMware Workstation 12/12.1 running on Fedora 23. Log in as root first: $ sudo su - Force a rebuild of the VMware kernel modules: # vmware-modconfig --console --install-all Replace VMware's bundled glib libraries with the Fedora versions: # cd /usr/lib/vmware/lib # for mylib in $(ls /usr/lib64/*4600*); do /bin/cp -afv $mylib $(basename $mylib .4600.1)/$(basename $mylib .4600.1); done This effectively does the following: # pwd /usr/lib/vmware/lib # /bin/cp -afv /usr/lib64/libgio-2.0.so.0.4600.1 libgio-2.0.so.0/libgio-2.0.so.0 # /bin/cp -afv /usr/lib64/libglib-2.0.so.0.4600.1 libglib-2.0.so.0/libglib-2.0.so.0 # /bin/cp -afv /usr/lib64/libgmodule-2.0.so.0.4600.1 libgmodule-2.0.so.0/libgmodule-2.0.so.0 # /bin/cp -afv /usr/lib64/libgobject-2.0.so.0.4600.1 libgobject-2.0.so.0/libgobject-2.0.so.0 # /bin/cp -afv /usr/lib64/libgthread-2.0.so.0.4600.1 libgthread-2.0.so.0/libgthread-2.0.so.0 Start VMware Workstation or VMware Player: $ VMWARE_USE_SHIPPED_LIBS=force vmware or: $ V...
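The suffix handling is the subtle part of that loop, so here is a commented sketch of the same replacement. It assumes Fedora 23's glib build is what gives the libraries their ".4600.1" suffix, and guards the copy step so it only runs where VMware is actually installed:

```shell
# Fedora 23 ships glib 2.46, so its libraries carry a ".4600.1"
# version suffix; basename with a suffix argument strips it,
# recovering the soname directory VMware uses:
mylib=/usr/lib64/libglib-2.0.so.0.4600.1
soname=$(basename "$mylib" .4600.1)
echo "$soname"    # libglib-2.0.so.0

# The full replacement loop (run as root; skipped when VMware
# is not installed):
if [ -d /usr/lib/vmware/lib ]; then
    cd /usr/lib/vmware/lib || exit 1
    for lib in /usr/lib64/*4600*; do
        target=$(basename "$lib" .4600.1)
        /bin/cp -afv "$lib" "$target/$target"
    done
fi
```

Note the space between the path and the suffix in `basename "$lib" .4600.1`; without it, basename gets a single (nonexistent) argument and the copy targets come out wrong.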

What is a Package Collection

What is a Package Collection in Puppet version 4? The Puppet ecosystem contains many tightly related and dependent packages. Puppet, Facter, MCollective, and the Ruby interpreter are all tightly related dependencies. The Puppet agent, Puppet server, and PuppetDB are self-standing but interdependent applications. Production Puppet environments have been struggling with two conflicting needs: • It is important to stay up to date with the latest improvements and security fixes. • Improvements and upgrades in one application would sometimes introduce problems for interdependent components of the Puppet ecosystem. Puppet Labs has chosen to address these concerns with two related changes. Puppet and all core dependencies are now shipped together in a single package. This change reduces the need to ensure compatibility across a wide variety of versions of dependencies. It also ensures that modern versions of Ruby are available on every supported op...

Installing Puppet 4 On RHEL/CentOS-7

Installing Puppet Open Source 4 Pre-install checks: 1. Decide on a deployment type: Puppet can run in master/agent (server/client) mode or in stand-alone mode, so we need to decide which type of installation we want and choose the packages accordingly. 2. Hardware requirements: I. The Puppet agent service has no particular hardware requirements and can run on nearly anything. II. At minimum, the Puppet master server should have two processor cores and at least 1 GB of RAM. III. To comfortably serve at least 1,000 nodes, it should have 2-4 processor cores and at least 4 GB of RAM. 3. Network configuration: I. In an agent/master deployment, we must prepare our network for Puppet's traffic. Firewall: open port 8140. Name resolution: every node must have a unique hostname, and forward and reverse DNS must both be configured correctly. Or we can create entries in the /etc/hosts file. ...
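As a sketch of the name-resolution and firewall steps above, assuming example addresses and hostnames (192.168.56.x and example.com are placeholders for your own nodes), the /etc/hosts entries can be staged in a scratch file and then appended as root; the root-only commands are shown commented:

```shell
# Stage static name-resolution entries for an agent/master setup.
# Addresses and hostnames below are placeholders, not real nodes.
cat > /tmp/puppet-hosts.txt <<'EOF'
192.168.56.10  puppetmaster.example.com  puppetmaster
192.168.56.11  agent1.example.com        agent1
EOF
# As root: append the entries and open the agent->master port 8140.
#   cat /tmp/puppet-hosts.txt >> /etc/hosts
#   firewall-cmd --permanent --add-port=8140/tcp
#   firewall-cmd --reload
```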

Remote Docker Host Using Docker Client

Connecting to a remote Docker host using the Docker client In a previous post ( here ) we saw that, by default, the Docker daemon listens on a local UNIX socket, unix:///var/run/docker.sock. In this mode we can only manage Docker from the local machine; if we want to accept connection requests from a remote client, we need to start the Docker daemon on a network port. So here we are going to set up that remote port. First we stop the existing service: # systemctl stop docker Sometimes the socket does not get closed by stopping the service, so we may need to remove it manually as well: # rm -r /var/run/docker.sock Now we can start the daemon on a specific port: # docker -H tcp://0.0.0.0:5050 -H unix:///var/run/docker.sock -d & And verify it: # netstat -tupnl | grep 5050...
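On the client side, the connection can be sketched like this. The address 192.0.2.10 is a placeholder for the daemon machine, and port 5050 matches the -H tcp://0.0.0.0:5050 flag used above; the actual docker commands are shown commented since they need a reachable daemon:

```shell
# Point the docker client at the remote daemon's TCP endpoint.
# 192.0.2.10 is a placeholder address; port 5050 matches the
# -H tcp://0.0.0.0:5050 flag used when starting the daemon.
export DOCKER_HOST=tcp://192.0.2.10:5050
echo "$DOCKER_HOST"
# Every docker command in this shell now talks to the remote host:
#   docker info
#   docker ps
# One-off form without the environment variable:
#   docker -H tcp://192.0.2.10:5050 info
```

Exporting DOCKER_HOST saves repeating the -H flag on every invocation.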

LINUX CONTAINERS

LINUX CONTAINERS What is a Linux container: Linux containers take a different approach from traditional virtualization technology. Simply put, this is OS-level virtualization, which means all containers run on top of one Linux operating system. We can start containers on a bare-metal machine or inside a running virtual machine. Each container runs as a fully isolated operating system. In container virtualization, rather than running an entire guest operating system, containers isolate the guest but do not virtualize the hardware. Running containers requires a patched kernel and user tools; the kernel provides process isolation and performs resource management. Thus all containers run under the same kernel, but they still have their own file system, processes, memory, etc. Linux containers mainly involve two concepts: 1. Namespaces 2. Cgroups (Control Groups) There are six types of namespaces in total: 1. PID Na...
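The namespace idea above can be seen directly on any modern Linux box: each process's namespace memberships are exposed under /proc/<pid>/ns, which is one way to check what isolation a process actually has:

```shell
# Namespace membership is visible under /proc/<pid>/ns: each entry
# is a symlink whose target encodes the namespace type and an inode
# ID; processes showing the same ID share that namespace.
readlink /proc/self/ns/pid
readlink /proc/self/ns/net
ls -l /proc/self/ns/
# Entering a fresh PID namespace needs root, e.g. with unshare(1):
#   unshare --fork --pid --mount-proc bash -c 'echo "inside, PID $$"'
```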

Configuring The TigerVNC Server In Fedora 21/RHEL7/CentOS7

Introduction: TigerVNC (Tiger Virtual Network Computing) is a system for graphical desktop sharing which allows you to remotely control other computers. TigerVNC works on a client-server model: a server (vncserver) shares its output and a client (vncviewer) connects to the server. 1. Installing the VNC server ~]# yum install tigervnc-server Now we need to copy the service configuration file as follows: ~]# cp /lib/systemd/system/vncserver@.service /etc/systemd/system/vncserver@.service Next we edit the file, where we can specify the name of the user we want to allow for desktop sharing: ~]# vim /etc/systemd/system/vncserver@.service Replace USER with the actual user name and leave the remaining lines of the file unmodified. These are the relevant lines in the file: ExecStart=/sbin/runuser -l USER -c "/usr/bin/vncserver %i -geometry 1280x1024" PIDFile=/home/USER/.vnc/%H%i.pid Here we need t...
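The USER substitution can also be done with sed instead of editing by hand. A sketch, staging the two placeholder lines in a scratch copy so the result can be checked before installing it ("alice" is a hypothetical account name):

```shell
# The unit file's placeholder lines look like this; sed swaps in
# the real account name ("alice" here is hypothetical).
cat > /tmp/vncserver@.service <<'EOF'
ExecStart=/sbin/runuser -l USER -c "/usr/bin/vncserver %i -geometry 1280x1024"
PIDFile=/home/USER/.vnc/%H%i.pid
EOF
sed -i 's/USER/alice/g' /tmp/vncserver@.service
grep runuser /tmp/vncserver@.service
# As root, install the edited unit and start display :1 (VNC display
# :N listens on TCP port 5900+N, so :1 is 5901):
#   cp /tmp/vncserver@.service /etc/systemd/system/
#   systemctl daemon-reload
#   systemctl start vncserver@:1.service
```

sed's matching is case-sensitive, so the lowercase "user" inside "runuser" is left alone and only the USER placeholder is replaced.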