Kubernetes Configuration on Centos or RHEL 7

To set up Kubernetes we need at least two servers: one acting as the master/controller and at least one hosting containers. In my setup I am going to use three servers, as follows.

master.example.com ( kubernetes master/controller )
node1.example.com ( kubernetes minion/client or docker host )
node2.example.com ( kubernetes minion/client or docker host )

For the Kubernetes cluster we will be using the details below.

1. Infrastructure private subnet IP range: 172.25.0.0/16

2. Flannel subnet IP range: 172.30.0.0/16 (you can choose any IP range, just make sure it does not overlap with any other IP range)

3. Service cluster IP range for Kubernetes: 10.254.0.0/16 (you can choose any IP range, just make sure it does not overlap with any other IP range)

4. Kubernetes service IP: 10.254.0.1 (the first IP from the service cluster IP range is always allocated to the Kubernetes service)

5. DNS service IP: 10.254.3.100 (you can use any IP from the service cluster IP range, just make sure it is not already allocated to another service)

It's better to communicate with all machines by name. Since I am not using DNS here, I am mapping the names locally in /etc/hosts on every machine.


[root@node-XX ~]# vim /etc/hosts
x.x.x.x master.example.com master  # My master IP is 10.10.1.128
y.y.y.y node1.example.com node1
z.z.z.z node2.example.com node2
:wq
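
Once the entries are in place on every machine, a quick check that the three names resolve and are reachable is worth doing:

[root@node-XX ~]# for h in master node1 node2; do ping -c 1 $h; done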

Now we need to set up the yum repository, and it should be replicated to all hosts in the cluster.


[root@node-XX ~]# vim /etc/yum.repos.d/virt7-docker-common-release.repo

[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
:wq
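
With the repo file in place (it is enabled by default, since no enabled=0 line is set), it should show up in the repo list:

[root@node-XX ~]# yum repolist | grep virt7-docker-common-release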

Now we can install the required packages on all the machines.


[root@node-XX ~]# yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel
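
A quick way to confirm what was installed on each host:

[root@node-XX ~]# rpm -qa | egrep "kubernetes|etcd|flannel"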

Below is the common configuration for all the nodes.


[root@XX ~]# vim /etc/kubernetes/config
# Comma separated list of nodes running the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://10.10.1.128:2379"

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://10.10.1.128:8080"
:wq
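
Since this file is identical on every machine, one option (assuming root SSH access from the master to node1 and node2) is to edit it once on the master and push it out:

[root@master ~]# for n in node1 node2; do scp /etc/kubernetes/config $n:/etc/kubernetes/config; done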

################# Configuring etcd server #########################
In this case I am configuring the etcd server on the master machine; in a production environment you may already have an etcd cluster configured.

Configuration on master machine only.


[root@master ~]# vim /etc/etcd/etcd.conf
Make sure the following lines are uncommented and set as shown.
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
:wq

################ API Server Configuration (On Master) #############
The API server handles the REST operations and acts as the front end to the cluster's shared state. Its configuration is stored in /etc/kubernetes/apiserver. Kubernetes uses certificates to authenticate API requests, so before configuring the API server we need to generate certificates that can be used for authentication.
Kubernetes provides a ready-made script for generating these certificates.
First of all, we clone the repository containing the script and run it.



[root@master ~]# git clone https://github.com/vchauhan1/kubernetes.git


[root@master ~]# cd kubernetes/


[root@master ~]# bash make-ca-cert.sh "10.10.1.128" "IP:10.10.1.128,IP:10.254.0.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local"

All the certs will be generated in the /srv/kubernetes directory.
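
A quick sanity check: list the directory and confirm at least ca.crt, server.cert and server.key are present, since those files are referenced again in the API server and controller-manager flags below.

[root@master ~]# ls -l /srv/kubernetes/

Now we can configure the API server.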


[root@master ~]# vim /etc/kubernetes/apiserver
We need to uncomment or add the following lines to work with TLS connections.

KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://10.10.1.128:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS="--client-ca-file=/srv/kubernetes/ca.crt --tls-cert-file=/srv/kubernetes/server.cert --tls-private-key-file=/srv/kubernetes/server.key"
:wq
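
Optionally, you can confirm that the serving certificate carries the SANs we passed to make-ca-cert.sh (the service IP 10.254.0.1 and the kubernetes.default.* names), since clients validate against them:

[root@master ~]# openssl x509 -in /srv/kubernetes/server.cert -noout -text | grep -A1 "Subject Alternative Name"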

###################### Controller Manager Configuration #####################


[root@master ~]# vim /etc/kubernetes/controller-manager

KUBE_CONTROLLER_MANAGER_ARGS="--root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key"

:wq

Let's start the ETCD server and make the flanneld network entries.


[root@master ~]# systemctl start etcd.service
[root@master ~]# systemctl enable etcd.service
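
With etcd up, a quick health check (using the same v2 etcdctl syntax as the commands below) should report the single member as healthy:

[root@master ~]# etcdctl cluster-health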


Create a new key in etcd to store Flannel configuration using the following command:
[root@master ~]# etcdctl mkdir /kube-centos/network

We need to define the flanneld network.
[root@master ~]# etcdctl mk /kube-centos/network/config "{ \"Network\": \"172.30.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"

Verify the key which we have created.
[root@master ~]# etcdctl get /kube-centos/network/config
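
If the key was stored correctly, this should print back the JSON we wrote, along the lines of:
{ "Network": "172.30.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }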

Now we can start the services on the master machine.


[root@master ~]# systemctl enable kube-apiserver
[root@master ~]# systemctl start kube-apiserver
[root@master ~]# systemctl enable kube-controller-manager
[root@master ~]# systemctl start kube-controller-manager
[root@master ~]# systemctl enable kube-scheduler
[root@master ~]# systemctl start kube-scheduler
[root@master ~]# systemctl enable flanneld
[root@master ~]# systemctl start flanneld
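
At this point the control plane should be up. Two quick checks from the master (exact output will vary with the package version):

[root@master ~]# kubectl cluster-info
[root@master ~]# kubectl get componentstatuses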

################## Kubelet Configuration (On Minions) ##################
The kubelet is the node/minion agent that runs pods and makes sure they are healthy. It also communicates pod details to the Kubernetes master. The kubelet configuration is stored in /etc/kubernetes/kubelet.


[root@node1 ~]# vim /etc/kubernetes/kubelet
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=10.10.1.129"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://10.10.1.128:8080"

# pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""
:wq

################## Node2 Configuration ##################

[root@node2 ~]# vim /etc/kubernetes/kubelet
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=10.10.1.130"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://10.10.1.128:8080"

# pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""
:wq

Now we can configure flanneld on all the machines, the master included (restart flanneld on the master after editing this file, since we already started it there), so that it reads the network configuration from the etcd key we created above.


[root@node-XX ~]# vim /etc/sysconfig/flanneld
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://10.10.1.128:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"
:wq

Start the services on all the minions. 


[root@nodeXX ~]# systemctl enable kube-proxy
[root@nodeXX ~]# systemctl start kube-proxy
[root@nodeXX ~]# systemctl enable kubelet
[root@nodeXX ~]# systemctl start kubelet
[root@nodeXX ~]# systemctl enable flanneld
[root@nodeXX ~]# systemctl start flanneld
[root@nodeXX ~]# systemctl enable docker
[root@nodeXX ~]# systemctl start docker
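
Once flanneld is running, it writes the per-host subnet it was allocated to /run/flannel/subnet.env; the subnet should fall inside the 172.30.0.0/16 range we stored in etcd. If docker was already running before flanneld came up, restart docker so it picks the subnet up.

[root@nodeXX ~]# cat /run/flannel/subnet.env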

Now we can verify the nodes from the master. Both minions must be listed; since we have not started the kubelet service on the master, the master itself will not appear in the list.

[root@master ~]# kubectl get nodes
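
With the hostname overrides used above, both minions should be listed by IP, with output roughly along these lines (it can take a few seconds for the status to become Ready):

NAME          STATUS    AGE
10.10.1.129   Ready     2m
10.10.1.130   Ready     2m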

One more verification we can do: the flannel network interface must be present on all the machines, as shown below.
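
With the vxlan backend we configured in etcd, the interface is typically named flannel.1 (it would be flannel0 with the udp backend), and its address should come from the 172.30.0.0/16 range:

[root@nodeXX ~]# ip -4 addr show flannel.1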


Further, we can configure Kubernetes add-ons as well, which we will see in the next post.

Source of information: tothenew.com


