Friday, April 14, 2017

Setting up the DNS Service Add-On in Kubernetes



What things get DNS names?
Every Service defined in the cluster (including the DNS server itself) is assigned a DNS name. By default, a client Pod’s DNS search list will include the Pod’s own namespace and the cluster’s default domain. This is best illustrated by example:
Assume a Service named “my-service” in the Kubernetes namespace “dev”. A Pod running in namespace dev can look up this service by simply doing a DNS query for my-service. A Pod running in a different namespace can look up this service by doing a DNS query for my-service.dev.
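Once the add-on described below is running, the lookups can be exercised directly with kubectl exec. A small sketch, assuming a pod named client-pod exists in namespace dev and another in a second namespace qa (both names are hypothetical):

[root@kube-master ~]# kubectl exec client-pod --namespace=dev -- nslookup my-service
[root@kube-master ~]# kubectl exec client-pod --namespace=qa -- nslookup my-service.dev

The fully qualified form my-service.dev.svc.<cluster domain> also resolves from anywhere in the cluster.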

Kubernetes offers a cluster add-on for DNS service discovery, which most environments enable by default. “SkyDNS” is the usual DNS server of choice, since it was designed to work on top of etcd. The “kube-dns” add-on is composed of a Kubernetes Service which, like all Services, is allocated an arbitrary VIP within the preconfigured subnet (this is the IP that every pod in the cluster will use for DNS), and a replication controller that manages pods with the following containers inside them:

  1. A local etcd instance
  2. The SkyDNS server
  3. A process called kube2sky which binds SkyDNS to the kubernetes cluster
  4. A health check called healthz that monitors how DNS is being resolved


DNS IP: 10.254.0.10 (an IP chosen from the cluster service range that is not already allocated to any other service)
Domain Name: kubernetes.local (the cluster domain name we will use)
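On package-based installs like this one, the service IP range is normally set on the API server via --service-cluster-ip-range, so it is easy to double-check which subnet the DNS IP must fall into. The file path below assumes the CentOS/RHEL kubernetes packages; on this setup it should show something like:

[root@kube-master ~]# grep service-cluster-ip-range /etc/kubernetes/apiserver
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"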

In order to set everything up, we need to retrieve the definition files for the service and replication controller, like the following:
Note: Change the highlighted settings below (domain name, master URL and cluster IP) according to your setup.

[root@kube-master ~]# wget https://gist.githubusercontent.com/jamiehannaford/850900e2d721a973bc6d/raw/710eade5b8d5a382cdc6d605d6cd2d43fb0c20fb/skydns-rc.yml -O skydns-rc.yaml

[root@kube-master ~]# wget https://gist.githubusercontent.com/jamiehannaford/b80465bf7d427b949542/raw/75e7c0ff3fc740ea0f4eb54e5d10753cccf1267b/skydns-svc.yml -O skydns-svc.yaml
Now we need to set the master URL and the domain name in the skydns-rc.yaml file.
[root@kube-master ~]# vim skydns-rc.yaml
Line no. 51
- -domain=kubernetes.local 
- -kube_master_url=http://10.10.1.136:8080
Line No. 62
- -domain=kubernetes.local → Your Domain Name
- -cmd=nslookup kubernetes.default.svc.kubernetes.local localhost >/dev/null

Next we need to change the DNS server ip in skydns-svc.yaml file as follows.
[root@kube-master ~]# vim skydns-svc.yaml
Line No. 31
clusterIP: 10.254.0.10
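Before creating anything, it is worth confirming that the chosen clusterIP is not already allocated to another service; the following should return no output:

[root@kube-master ~]# kubectl get services --all-namespaces | grep 10.254.0.10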

Now we can define the service and replication controller.
[root@kube-master ~]# kubectl create -f skydns-rc.yaml

[root@kube-master ~]# kubectl create -f skydns-svc.yaml

This will create a replication controller and service under the kube-system namespace. To check their status, run:
[root@kube-master ~]# kubectl get pods --namespace=kube-system

 
[root@kube-master ~]# kubectl get services --namespace=kube-system
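The DNS pod should eventually report 4/4 containers ready, one per container listed earlier. If it stays in a non-Running state, the per-container logs are the first place to look. A rough example (substitute the real pod name; the container names etcd, kube2sky, skydns and healthz are the usual ones but may differ slightly in your definition file):

[root@kube-master ~]# kubectl get pods --namespace=kube-system -o wide
[root@kube-master ~]# kubectl logs <dns-pod-name> -c kube2sky --namespace=kube-system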
Once our pod is completely up and running, we need to pass the DNS server IP
and domain to all of the kubelet agents running on our minion hosts.

To do this, we change the kubelet config file on each minion and add the following flags.

[root@minion1 ~]# vim /etc/kubernetes/kubelet
KUBELET_ARGS="--cluster_dns=10.254.0.10 --cluster_domain=kubernetes.local"
[root@minion1 ~]# systemctl restart kubelet 
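A quick sanity check that the restarted kubelet actually picked up the new flags:

[root@minion1 ~]# ps -ef | grep [k]ubelet | grep cluster_dns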

We are done with the DNS settings. To test DNS resolution we can start a small
pod based on the busybox image as follows.

[root@kube-master ~]# vim /root/busybox.yaml

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always

Now we can create pod using the above yaml file.
[root@kube-master ~]# kubectl create -f busybox.yaml
[root@kube-master ~]# kubectl exec busybox -- nslookup kubernetes
We can substitute any running Service name for kubernetes, and the query will
resolve to that Service's cluster IP.
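It can also help to look at the resolver configuration that the kubelet injected into the pod; with the settings above it should point at 10.254.0.10 and carry search domains derived from kubernetes.local, roughly like this:

[root@kube-master ~]# kubectl exec busybox -- cat /etc/resolv.conf
nameserver 10.254.0.10
search default.svc.kubernetes.local svc.kubernetes.local kubernetes.local
[root@kube-master ~]# kubectl exec busybox -- nslookup kubernetes.default.svc.kubernetes.local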
That’s all about the DNS Add-on in Kubernetes. 



Monday, March 20, 2017

Docker Private Registry with docker-distribution


Docker uses the Docker Hub registry by default, or a registry provided by your Linux vendor. If you do not want to use Docker Hub, or you run a Linux version that is not officially supported by a vendor registry, you can create your own Docker registry, push images to it, and thus have more control over them.
Another reason for a private Docker registry is that you may have private or classified Docker images (for example, images for a banking system, a web server, a database server, etc.) that you want to keep "in house" without exposing them to third-party locations.
The main advantage of the v2 Docker registry over v1 is its better API feature set, and it is worth investing the time to learn how to deploy it. This post is too short to cover all of the registry v2 APIs, so I recommend reading about the API features in Docker Registry HTTP API V2.
In order to use a local Docker registry, we have to install and configure it, and afterwards we will be able to push images to it.
In the process below I am going to describe the registry setup, using CentOS 7 as the operating system.

[root@host1 ~]# rpm -qi docker-distribution
Name        : docker-distribution
Version     : 2.6.0
Release     : 1.el7
Architecture: x86_64
Install Date: Mon 20 Mar 2017 03:37:00 PM IST
Group       : Unspecified
Size        : 12796719
License     : ASL 2.0
Signature   : RSA/SHA256, Tue 07 Mar 2017 04:56:39 PM IST, Key ID 24c6a8a7f4a80eb5
Source RPM  : docker-distribution-2.6.0-1.el7.src.rpm
Build Date  : Tue 07 Mar 2017 05:39:16 AM IST
Build Host  : c1bm.rdu2.centos.org
Relocations : (not relocatable)
Packager    : CentOS BuildSystem 
Vendor      : CentOS
URL         : https://github.com/docker/distribution
Summary     : Docker toolset to pack, ship, store, and deliver content
Description :
Docker toolset to pack, ship, store, and deliver content
[root@host1 ~]# rpm -ql docker-distribution
/etc/docker-distribution/registry/config.yml
/usr/bin/registry
/usr/lib/systemd/system/docker-distribution.service
Here, specifically, we need to have a look at the systemd unit file. The unit file starts the service based on a configuration file; the following is the configuration file of docker-distribution, which we can edit according to our requirements.
[root@host1 ~]# cat /etc/docker-distribution/registry/config.yml 
version: 0.1
log:
  fields:
    service: registry
storage:
    cache:
        layerinfo: inmemory
    filesystem:
        rootdirectory: /var/lib/registry
http:
    addr: 10.10.1.131:5000                      --> My Docker host ip
    net: tcp
    host: https://host1.example.com:5000        --> My Docker hosts hostname
    secret: techvalb
    tls:
        certificate: /etc/certs/host1.crt
        key: /etc/certs/host1.key
auth: 
    htpasswd:
        realm: example.com
        path: /etc/certs/.dockerpasswd
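The config above stores image data under /var/lib/registry and references TLS and htpasswd files that we create next. It also does no harm to make sure the storage directory exists and that the service comes up on boot:

[root@host1 ~]# mkdir -p /var/lib/registry
[root@host1 ~]# systemctl enable docker-distribution.service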
[root@host1 ~]# mkdir -p /etc/certs
[root@host1 ~]# cd /etc/certs/
[root@host1 certs]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout host1.key -x509 -days 365 -out host1.crt
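One detail to watch: the certificate's Common Name (or a subjectAltName) must match the hostname clients will use in docker login, here host1.example.com, otherwise the TLS handshake fails with a name-mismatch error. A non-interactive variant of the same command that sets the CN explicitly would look something like:

[root@host1 certs]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout host1.key -x509 -days 365 \
    -subj "/CN=host1.example.com" -out host1.crt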
[root@host1 certs]# htpasswd  -c -B .dockerpasswd techvlab
[root@host1 certs]# systemctl restart docker.service 
[root@host1 certs]# systemctl restart docker-distribution.service 
[root@host2 ~]# docker login host1.example.com:5000
Username: techvlab
Password: 
Error response from daemon: Get https://host1.example.com:5000/v1/users/: x509: certificate is valid for host1.example.com, not host1.example.com
[root@host2 ~]# 

Because our certificate is a self-signed certificate, we need to explicitly trust it at the OS layer on the client host. We can simply copy the certificate to the following location.
[root@host1 certs]# scp /etc/certs/host1.crt host2:/etc/pki/ca-trust/source/anchors/host1.crt 
root@host2's password: 
host1.crt                         100% 2171     2.1KB/s   00:00    
[root@host1 certs]# 
Now switch to the Docker client host, refresh the CA trust store, and log in to the registry again.
[root@host2 ~]# update-ca-trust enable
[root@host2 ~]# docker login host1.example.com:5000
Username: techvlab 
Password: 
Login Succeeded
[root@host2 ~]# docker tag 6b914bbcb89e host1.example.com:5000/mynewimage 
[root@host2 ~]# docker push host1.example.com:5000/mynewimage
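To confirm the image actually landed in the registry, we can query the v2 API directly (curl will prompt for the htpasswd password; the JSON shown is roughly what to expect):

[root@host2 ~]# curl -u techvlab https://host1.example.com:5000/v2/_catalog
{"repositories":["mynewimage"]}
[root@host2 ~]# curl -u techvlab https://host1.example.com:5000/v2/mynewimage/tags/list
{"name":"mynewimage","tags":["latest"]}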
That's how we can set up our own registry. In the next post we will see how to set up Docker Swarm mode.




Thursday, March 16, 2017

NetApp Plugin for Docker Volume Management

A data volume is a specially designated directory that bypasses the storage driver. Data volumes persist data independently of a container's life cycle: when you delete a container, the Docker daemon does not delete any data volumes. You can share volumes across multiple containers, and you can also share data volumes with other computing resources in your system.

We can connect an enterprise-grade storage system to the Docker host, so that volumes can be created directly from the Docker host and attached to containers.

Here we are going to see how we can integrate NetApp Data ONTAP with Docker using NFS.


NetApp Administration:
1. Create an SVM (Storage Virtual Machine) with management and data LIFs, and enable the NFS protocol.




Plugin Installation on Docker Host:
1. We need to install the NetApp Docker Volume Plugin on the Docker host, which we can do with the following steps.

[root@server1 ~]# mkdir /etc/netappdvp
[root@server1 ~]# vim /etc/netappdvp/config.json
{
    "version": 1,
    "storageDriverName": "ontap-nas",
    "managementLIF": "192.168.0.191",
    "dataLIF": "192.168.0.192",
    "svm": "svm_nfs",
    "username": "vsadmin",
    "password": "techvlab@123",
    "aggregate": "aggr1"
}
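Since config.json contains the SVM admin credentials in clear text, it is worth restricting its permissions (optional, but cheap):

[root@server1 ~]# chmod 600 /etc/netappdvp/config.json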
[root@server1 ~]# docker plugin install store/netapp/ndvp-plugin:1.4.0
[root@server1 ~]# docker plugin ls  => Netapp plugin will be listed here
ID                  NAME                             DESCRIPTION                          ENABLED
08d918a5f547        store/netapp/ndvp-plugin:1.4.0   nDVP - NetApp Docker Volume Plugin   true
[root@server1 ~]# docker volume create -d 08d918a5f547 --name ndvp_1
ndvp_1
[root@server1 ~]# docker volume ls
DRIVER                           VOLUME NAME
store/netapp/ndvp-plugin:1.4.0   ndvp_1
store/netapp/ndvp-plugin:1.4.0   test

We can now verify the volumes on the ONTAP side.
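We can also mount the new volume into a throwaway container to confirm it is usable end to end; the image and mount path here are arbitrary choices:

[root@server1 ~]# docker run --rm -it -v ndvp_1:/mnt/ndvp busybox df -h /mnt/ndvp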