Monday, March 20, 2017

Docker Private Registry with docker-distribution


Docker pulls images from Docker Hub by default, or from a registry provided by your Linux vendor. If you do not want to use Docker Hub, or you run a Linux version that has no officially supported vendor registry, you can create your own Docker registry, push images to it, and keep more control over them.
Another reason for a private registry is that you have private or classified Docker images (for example, images for a banking system, web server, or database server) that you want to keep in house without exposing them to third-party locations.
The main advantage of Docker registry v2 over v1 is its richer API feature set, and it is worth investing time to learn how to deploy it. This post is too short to cover all of the registry v2 APIs, so I recommend reading the Docker Registry HTTP API V2 documentation.
To use a local Docker registry, we have to install and configure it; afterwards we can push images to it.
In the process below I describe the registry setup, using CentOS 7 as the operating system.
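If the packages are not installed yet, both docker and docker-distribution can be installed from the standard CentOS 7 repositories (a minimal sketch, assuming the extras repository is enabled; package names may differ on other distributions):
[root@host1 ~]# yum install -y docker docker-distribution
Once installed, we can inspect what the docker-distribution package provides: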

[root@host1 ~]# rpm -qi docker-distribution
Name        : docker-distribution
Version     : 2.6.0
Release     : 1.el7
Architecture: x86_64
Install Date: Mon 20 Mar 2017 03:37:00 PM IST
Group       : Unspecified
Size        : 12796719
License     : ASL 2.0
Signature   : RSA/SHA256, Tue 07 Mar 2017 04:56:39 PM IST, Key ID 24c6a8a7f4a80eb5
Source RPM  : docker-distribution-2.6.0-1.el7.src.rpm
Build Date  : Tue 07 Mar 2017 05:39:16 AM IST
Build Host  : c1bm.rdu2.centos.org
Relocations : (not relocatable)
Packager    : CentOS BuildSystem 
Vendor      : CentOS
URL         : https://github.com/docker/distribution
Summary     : Docker toolset to pack, ship, store, and deliver content
Description :
Docker toolset to pack, ship, store, and deliver content
[root@host1 ~]# rpm -ql docker-distribution
/etc/docker-distribution/registry/config.yml
/usr/bin/registry
/usr/lib/systemd/system/docker-distribution.service
Here we specifically need to look at the systemd unit file: the unit starts the registry service with a configuration file, which we can edit according to our requirements.
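To confirm which configuration file the unit actually points at, we can dump the unit first (a quick check with standard systemd tooling; the path should match the file shipped by the package):
[root@host1 ~]# systemctl cat docker-distribution.service
The shipped configuration looks like this: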
[root@host1 ~]# cat /etc/docker-distribution/registry/config.yml 
version: 0.1
log:
  fields:
    service: registry
storage:
    cache:
        layerinfo: inmemory
    filesystem:
        rootdirectory: /var/lib/registry
http:
    addr: 10.10.1.131:5000                      # my Docker host IP
    net: tcp
    host: https://host1.example.com:5000        # my Docker host's hostname
    secret: techvlab
    tls:
        certificate: /etc/certs/host1.crt
        key: /etc/certs/host1.key
auth: 
    htpasswd:
        realm: example.com
        path: /etc/certs/.dockerpasswd
[root@host1 ~]# mkdir -p /etc/certs
[root@host1 ~]# cd /etc/certs/
[root@host1 certs]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout host1.key -x509 -days 365 -out host1.crt
[root@host1 certs]# htpasswd  -c -B .dockerpasswd techvlab
[root@host1 certs]# systemctl restart docker.service 
[root@host1 certs]# systemctl restart docker-distribution.service 
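Note that the certificate's subject must match the name clients will use to reach the registry. To skip the interactive prompts and pin the common name explicitly, the certificate can also be generated non-interactively (a sketch, assuming host1.example.com is the registry hostname):
[root@host1 certs]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout host1.key -x509 -days 365 \
    -out host1.crt -subj "/CN=host1.example.com"
With the registry restarted, we can try logging in from a Docker client on another host: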
[root@host2 ~]# docker login host1.example.com:5000
Username: techvlab
Password: 
Error response from daemon: Get https://host1.example.com:5000/v1/users/: x509: certificate is valid for host1.example.com, not host1.example.com
[root@host2 ~]# 

Because our certificate is self-signed, we need to explicitly trust it at the OS layer on the client. We can simply copy the certificate to the following location.
[root@host1 certs]# scp /etc/certs/host1.crt host2:/etc/pki/ca-trust/source/anchors/host1.crt 
root@host2's password: 
host1.crt                         100% 2171     2.1KB/s   00:00    
[root@host1 certs]# 
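Alternatively, instead of trusting the certificate system-wide, the Docker daemon will also trust a registry-specific CA placed under /etc/docker/certs.d/ on the client (a sketch, assuming the same hostname and port as above):
[root@host2 ~]# mkdir -p /etc/docker/certs.d/host1.example.com:5000
[root@host2 ~]# cp /etc/pki/ca-trust/source/anchors/host1.crt /etc/docker/certs.d/host1.example.com:5000/ca.crt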
Now switch back to the Docker client host and communicate with the registry.
[root@host2 ~]# update-ca-trust enable
[root@host2 ~]# docker login host1.example.com:5000
Username: techvlab 
Password: 
Login Succeeded
[root@host2 ~]# docker tag 6b914bbcb89e host1.example.com:5000/mynewimage 
[root@host2 ~]# docker push host1.example.com:5000/mynewimage
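To confirm the push landed, we can query the registry's v2 API directly with the same credentials (the catalog and tag-list endpoints are part of the Docker Registry HTTP API V2 mentioned earlier):
[root@host2 ~]# curl -u techvlab https://host1.example.com:5000/v2/_catalog
[root@host2 ~]# curl -u techvlab https://host1.example.com:5000/v2/mynewimage/tags/list
The first call should return a JSON document listing mynewimage under "repositories", and the second should list its tags (latest in this case, since no tag was given when tagging).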
That's how we can set up our own registry. In the next post we will see how to set up Docker Swarm mode.




Thursday, March 16, 2017

NetApp Plugin for Docker Volume

NetApp Plugin for Docker Volume Management

A data volume is a specially designated directory that bypasses storage driver management. Data volumes persist data independently of a container's life cycle. When you delete a container, the Docker daemon does not delete any data volumes. You can share volumes across multiple containers. Moreover, you can share data volumes with other computing resources in your system.
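For example, a named volume survives the container that used it and can be attached to a new one (a minimal sketch using the default local driver; the volume name is just an illustration):
[root@server1 ~]# docker volume create mydata
[root@server1 ~]# docker run --rm -v mydata:/data busybox sh -c 'echo hello > /data/file'
[root@server1 ~]# docker run --rm -v mydata:/data busybox cat /data/file
hello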

We can connect an enterprise-grade storage system to the Docker host, so we can create volumes directly from the Docker host and attach them to containers.

Here we are going to see how to integrate NetApp Data ONTAP with Docker using NFS.


NetApp Administration:
1. Create one SVM (Storage Virtual Machine) with management and data LIFs, and enable the NFS protocol, as sketched below.
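If you prefer the command line over System Manager, the equivalent steps look roughly like this (a rough sketch from the cluster shell, assuming ONTAP 9, a node named cluster1-01, port e0c and a /24 network; names, ports and exact parameters will differ per environment and ONTAP version):
cluster1::> vserver create -vserver svm_nfs -rootvolume svm_nfs_root -aggregate aggr1 -rootvolume-security-style unix
cluster1::> network interface create -vserver svm_nfs -lif svm_nfs_mgmt -role data -data-protocol none -firewall-policy mgmt -home-node cluster1-01 -home-port e0c -address 192.168.0.191 -netmask 255.255.255.0
cluster1::> network interface create -vserver svm_nfs -lif svm_nfs_data -role data -data-protocol nfs -home-node cluster1-01 -home-port e0c -address 192.168.0.192 -netmask 255.255.255.0
cluster1::> vserver nfs create -vserver svm_nfs -v3 enabled
cluster1::> vserver export-policy rule create -vserver svm_nfs -policyname default -clientmatch 192.168.0.0/24 -rorule sys -rwrule sys -superuser sys
cluster1::> security login password -vserver svm_nfs -username vsadmin
cluster1::> security login unlock -vserver svm_nfs -username vsadmin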




Plugin Installation on the Docker Host:
1. Install the NetApp Docker volume plugin (nDVP) on the Docker host using the following steps.

[root@server1 ~]# mkdir /etc/netappdvp
[root@server1 ~]# vim /etc/netappdvp/config.json
{
    "version": 1,
    "storageDriverName": "ontap-nas",
    "managementLIF": "192.168.0.191",
    "dataLIF": "192.168.0.192",
    "svm": "svm_nfs",
    "username": "vsadmin",
    "password": "techvlab@123",
    "aggregate": "aggr1"
}
[root@server1 ~]# docker plugin install store/netapp/ndvp-plugin:1.4.0
[root@server1 ~]# docker plugin ls      # the NetApp plugin should now be listed
ID                  NAME                             DESCRIPTION                          ENABLED
08d918a5f547        store/netapp/ndvp-plugin:1.4.0   nDVP - NetApp Docker Volume Plugin   true
[root@server1 ~]# docker volume create -d 08d918a5f547 --name ndvp_1
ndvp_1
[root@server1 ~]# docker volume ls
DRIVER                           VOLUME NAME
store/netapp/ndvp-plugin:1.4.0   ndvp_1
store/netapp/ndvp-plugin:1.4.0   test

We can now verify the volumes on ONTAP and start using them in containers.
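On the cluster shell, the backing FlexVols should be visible under the SVM (the plugin normally prefixes the names, for example netappdvp_ndvp_1, depending on its storagePrefix setting), and the new volume can be mounted into a container like any other Docker volume:
cluster1::> volume show -vserver svm_nfs
[root@server1 ~]# docker run --rm -it -v ndvp_1:/data alpine sh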