To set up Kubernetes we need at least two servers: one or more to host containers and one to act as the master. In my setup I am going to use three servers, as follows.
master.example.com (Kubernetes master/controller)
node1.example.com (Kubernetes minion/client or Docker host)
node2.example.com (Kubernetes minion/client or Docker host)
For the Kubernetes cluster we will be using the details below.
1. Infrastructure private subnet IP range: 172.25.0.0/16
2. Flannel subnet IP range: 172.30.0.0/16 (you can choose any IP range; just make sure it does not overlap with any other range in use. See the quick check after this list.)
3. Service cluster IP range for Kubernetes: 10.254.0.0/16 (again, any IP range that does not overlap with another range in use)
4. Kubernetes service IP: 10.254.0.1 (the first IP from the service cluster IP range is always allocated to the Kubernetes service)
5. DNS service IP: 10.254.3.100 (any IP from the service cluster IP range that is not already allocated to another service)
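To avoid an overlap, you can first check which subnets are already routed on your machines before choosing the Flannel and service cluster ranges. A quick check (standard iproute2 commands, run on any of the machines) looks like this:
[root@node-XX ~]# ip route
[root@node-XX ~]# ip -4 addr show
Any range that does not appear in this output (and is not used elsewhere in your infrastructure) is a safe choice.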
It's better to communicate with all machines by name. Since I am not using DNS here, I am adding local mappings in /etc/hosts on every machine.
[root@node-XX ~]# vim /etc/hosts
x.x.x.x master.example.com master # My master IP is 10.10.1.128
y.y.y.y node1.example.com node1
z.z.z.z node2.example.com node2
:wq
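To confirm that the name mapping works, you can ping each host by name from every machine (replace the names with your own if they differ):
[root@node-XX ~]# ping -c 1 master.example.com
[root@node-XX ~]# ping -c 1 node1.example.com
[root@node-XX ~]# ping -c 1 node2.example.com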
[root@node-XX ~]# vim /etc/yum.repos.d/virt7-docker-common-release.repo
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
:wq
Now we can install the required packages on all the machines.
[root@node-XX ~]# yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel
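To confirm the packages landed from the repo, you can query rpm; the exact versions will depend on what the repository currently ships:
[root@node-XX ~]# rpm -q kubernetes etcd flannel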
Below is the common configuration for all the nodes.
[root@XX ~]# vim /etc/kubernetes/config
# Comma-separated list of nodes running the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://10.10.1.128:2379"
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://10.10.1.128:8080"
:wq
################# Configuring etcd server #########################
In this case I am configuring the etcd server on the master machine. In a production environment you might already have an etcd cluster configured.
The following configuration is done on the master machine only.
[root@master ~]# vim /etc/etcd/etcd.conf
Make sure the following lines are uncommented:
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
:wq
################ API Server Configuration (On Master) #############
The API server handles the REST operations and acts as a front end to the cluster's shared state. The API server configuration is stored in /etc/kubernetes/apiserver. Kubernetes uses certificates to authenticate API requests, so before configuring the API server we need to generate the certificates that will be used for authentication.
Kubernetes provides ready-made scripts for generating these certificates.
First of all, we fetch these scripts and create the certs.
[root@master ~]# git clone https://github.com/vchauhan1/kubernetes.git
[root@master ~]# cd kubernetes/
[root@master ~]# bash make-ca-cert.sh "10.10.1.128" "IP:10.10.1.128,IP:10.254.0.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local"
All the certs will be generated in the /srv/kubernetes directory. Now we can configure the API server.
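Before editing the API server configuration, it is worth verifying that the certificates exist and that the server certificate carries the IPs and DNS names we passed to the script (this assumes the script produced ca.crt, server.cert and server.key, the filenames referenced in the configuration below):
[root@master ~]# ls -l /srv/kubernetes/
[root@master ~]# openssl x509 -in /srv/kubernetes/server.cert -noout -text | grep -A1 "Subject Alternative Name"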
[root@master ~]# vim /etc/kubernetes/apiserver
We need to uncomment or add the following lines to enable TLS connections:
KUBE_API_ADDRESS="--address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://10.10.1.128:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
# Add your own!
KUBE_API_ARGS="--client-ca-file=/srv/kubernetes/ca.crt --tls-cert-file=/srv/kubernetes/server.cert --tls-private-key-file=/srv/kubernetes/server.key"
:wq
###################### Controller Manager Configuration #####################
[root@master ~]# vim /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key"
:wq
Let's start the etcd server and create the flanneld network entries.
[root@master ~]# systemctl start etcd.service
[root@master ~]# systemctl enable etcd.service
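Once etcd is up, a quick health check with the etcdctl v2 syntax (the same syntax used for the mkdir/mk commands below) looks like this:
[root@master ~]# etcdctl cluster-health
[root@master ~]# etcdctl ls /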
Create a new key in etcd to store Flannel configuration using the following command:
[root@master ~]# etcdctl mkdir /kube-centos/network
We need to define the flanneld network.
[root@master ~]# etcdctl mk /kube-centos/network/config "{ \"Network\": \"172.30.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"
Verify the key which we have created.
[root@master ~]# etcdctl get /kube-centos/network/config
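If the key was stored correctly, the command should print back the JSON document we just wrote, roughly:
{ "Network": "172.30.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }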
Now we can start the services on the master machine.
[root@master ~]# systemctl enable kube-apiserver
[root@master ~]# systemctl start kube-apiserver
[root@master ~]# systemctl enable kube-controller-manager
[root@master ~]# systemctl start kube-controller-manager
[root@master ~]# systemctl enable kube-scheduler
[root@master ~]# systemctl start kube-scheduler
[root@master ~]# systemctl enable flanneld
[root@master ~]# systemctl start flanneld
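To verify that the master components came up, you can check the service states and query the API server on its insecure port (8080, as configured above); the reported version depends on the packages installed:
[root@master ~]# systemctl status kube-apiserver kube-controller-manager kube-scheduler flanneld
[root@master ~]# curl http://10.10.1.128:8080/version
[root@master ~]# kubectl get componentstatuses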
##################### Kubelet Configuration (On Minions) #####################
The kubelet is the node/minion agent that runs pods and makes sure they are healthy. It also reports pod details to the Kubernetes master. The kubelet configuration is stored in /etc/kubernetes/kubelet.
[root@node1 ~]# vim /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=10.10.1.129"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://10.10.1.128:8080"
# pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
:wq
##################### Node2 Configuration #####################
[root@node2 ~]# vim /etc/kubernetes/kubelet
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=10.10.1.130"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://10.10.1.128:8080"
# pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
:wq
Now we can configure flanneld on all the nodes.
[root@node-XX ~]# vim /etc/sysconfig/flanneld
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://10.10.1.128:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"
:wq
Start the services on all the minions.
[root@nodeXX ~]# systemctl enable kube-proxy
[root@nodeXX ~]# systemctl start kube-proxy
[root@nodeXX ~]# systemctl enable kubelet
[root@nodeXX ~]# systemctl start kubelet
[root@nodeXX ~]# systemctl enable flanneld
[root@nodeXX ~]# systemctl start flanneld
[root@nodeXX ~]# systemctl enable docker
[root@nodeXX ~]# systemctl start docker
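As on the master, you can confirm that the node services started cleanly before checking the cluster state:
[root@nodeXX ~]# systemctl status kube-proxy kubelet flanneld docker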
Now we can verify the nodes. Both minions must be listed; since we have not started the kubelet service on the master, the master will not appear in the list.
[root@master ~]# kubectl get nodes
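With the hostname overrides used above, the output should look roughly like this (the status may show NotReady for a short time while the nodes register):
NAME          STATUS    AGE
10.10.1.129   Ready     1m
10.10.1.130   Ready     1m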
One more verification we can do: the flanneld network interface must be present on all the nodes.
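With the vxlan backend configured above, flanneld creates a flannel.1 interface holding an address from the 172.30.0.0/16 range (the per-node subnet is assigned by flannel, and the subnet file location may vary with the flannel package):
[root@nodeXX ~]# ip -4 addr show flannel.1
[root@nodeXX ~]# cat /run/flannel/subnet.env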
Further, we can configure Kubernetes add-ons as well, which we will see in the next post.
Source of information: tothenew.com