Tuesday, October 17, 2017

SaltStack Pillar Encryption

Encrypting your Pillar data is recommended because it contains your most valuable information, such as passwords and keys used in your infrastructure. Pillar data is held by the Salt master and only sent over an encrypted bus to minions when it is used in a state file.

Encrypting your Pillar data can be done with GPG. This means that you encrypt the values with a public GPG key. This single public key is used by all users within your organization to encrypt sensitive information. The private key is only available on the Salt master (not the minions!). Without the private key the encrypted data cannot be decrypted.



My Pillar Path: /opt/salt/pillar/prod/
My Environment Path: /opt/salt/environments/prod/


[root@master ~]# mkdir -p /etc/salt/gpgkeys


[root@master ~]# chmod 0700 /etc/salt/gpgkeys
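
The Salt master reads its keyring from the directory configured by the gpg_keydir option, which defaults to /etc/salt/gpgkeys. The snippet below is only a sketch and is needed only if you keep the keyring somewhere other than the default; adjust the path to your setup.

[root@master ~]# vim /etc/salt/master
# Optional: only required when the keyring is not in the default location
gpg_keydir: /etc/salt/gpgkeys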

[root@master ~]# gpg --gen-key
gpg (GnuPG) 2.0.22; Copyright (C) 2013 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

gpg: keyring `/root/.gnupg/secring.gpg' created
gpg: keyring `/root/.gnupg/pubring.gpg' created
Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 
Key does not expire at all
Is this correct? (y/N) 
Key is valid for? (0) 
Key does not expire at all
Is this correct? (y/N) y

GnuPG needs to construct a user ID to identify your key.

Real name: vishvendra
Email address: vish@mylab.com
Comment: test keys
You selected this USER-ID:
    "vishvendra (test keys) "

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

You don't want a passphrase - this is probably a *bad* idea!
I will do it anyway.  You can change your passphrase at any time,
using this program with the option "--edit-key".

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key 61E46376 marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
pub   2048R/61E46376 2017-10-17
      Key fingerprint = 3F3B D495 89CC 3EAB ABDE  8BDC 1343 926E 61E4 6376
uid                  vishvendra (test keys) <vish@mylab.com>
sub   2048R/EA0D4B69 2017-10-17

If you want to encrypt data from another machine, you can export the public key:


[root@master ~]# gpg --export -a "61E46376" > /root/salt-gpg-pub.key


[root@master ~]# cp -vrf .gnupg/* /etc/salt/gpgkeys/
‘.gnupg/private-keys-v1.d’ -> ‘/etc/salt/gpgkeys/private-keys-v1.d’
‘.gnupg/pubring.gpg’ -> ‘/etc/salt/gpgkeys/pubring.gpg’
‘.gnupg/pubring.gpg~’ -> ‘/etc/salt/gpgkeys/pubring.gpg~’
‘.gnupg/random_seed’ -> ‘/etc/salt/gpgkeys/random_seed’
‘.gnupg/secring.gpg’ -> ‘/etc/salt/gpgkeys/secring.gpg’
‘.gnupg/S.gpg-agent’ -> ‘/etc/salt/gpgkeys/S.gpg-agent’
‘.gnupg/trustdb.gpg’ -> ‘/etc/salt/gpgkeys/trustdb.gpg’
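
Note: the keyring copied above must be readable by the user the salt-master process runs as. In this lab the master runs as root, so nothing more is needed; if your master runs as a dedicated user (an assumption, adjust the user name to your setup), something like the following would be required:

[root@master ~]# chown -R salt:salt /etc/salt/gpgkeys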


[root@master ~]# echo -n "httpd" | gpg --armor --batch --trust-model always \ 
--encrypt -r 61E46376
-----BEGIN PGP MESSAGE-----
Version: GnuPG v2.0.22 (GNU/Linux)

hQEMAwl2kBXqDUtpAQf6AgM+X2q8EshZU+NiWP8Fjr8DGGqoh4XdKASWDKLQv+fG
9q4dtQp1o0+AXcKuwaYRG/+Q058zZC0xzHVpJ2h8d0tOWbYXUhEE4OWRmwOkF5nH
G+iYsOV24vv/6MHnkLjmJcyLlK/UyKifJi46gE/ZoN3uAlGE2C6Lt/pz6fEf3nBB
Ehjsju2Fz7IwC/w+0L0rq+pCr/svldqrQ5nruzFXktGrsA615G/Dqh+oJS/fdz8b
uzLOCH1jrhPqpp/mkvNQmQL0qS40th+qJ6ezSk814fvTEVWmKxkTGxzN3ccuDz8T
BqF9bIW1v2fxUYGWHXiObAI7L95xFJQQf4P0I0TattJAAULYcMwsVtG4/1mVR0yf
75lFkDTW6oE1e5Gx9lbzyBoc00v0s85fpjNSzlaESTkfRXxdY664832/L1ipI733
gA==
=PfTL
-----END PGP MESSAGE-----
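
As a quick sanity check that the master's keyring can decrypt the value, the ciphertext can be piped back through gpg. This is only a sketch, assuming the keyring was copied to /etc/salt/gpgkeys as above and the PGP block was saved to a file such as /tmp/pkg.asc (a hypothetical path):

[root@master ~]# gpg --homedir /etc/salt/gpgkeys --decrypt /tmp/pkg.asc

This should print the original plaintext (httpd) along with gpg's usual status messages.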


[root@master ~]# vim /opt/salt/pillar/prod/httpd.sls
#!yaml|gpg
pkg: |
  -----BEGIN PGP MESSAGE-----
  Version: GnuPG v2.0.22 (GNU/Linux)

  hQEMAwl2kBXqDUtpAQf6AgM+X2q8EshZU+NiWP8Fjr8DGGqoh4XdKASWDKLQv+fG
  9q4dtQp1o0+AXcKuwaYRG/+Q058zZC0xzHVpJ2h8d0tOWbYXUhEE4OWRmwOkF5nH
  G+iYsOV24vv/6MHnkLjmJcyLlK/UyKifJi46gE/ZoN3uAlGE2C6Lt/pz6fEf3nBB
  Ehjsju2Fz7IwC/w+0L0rq+pCr/svldqrQ5nruzFXktGrsA615G/Dqh+oJS/fdz8b
  uzLOCH1jrhPqpp/mkvNQmQL0qS40th+qJ6ezSk814fvTEVWmKxkTGxzN3ccuDz8T
  BqF9bIW1v2fxUYGWHXiObAI7L95xFJQQf4P0I0TattJAAULYcMwsVtG4/1mVR0yf
  75lFkDTW6oE1e5Gx9lbzyBoc00v0s85fpjNSzlaESTkfRXxdY664832/L1ipI733
  gA==
  =PfTL
  -----END PGP MESSAGE-----
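
The pillar SLS above still has to be assigned to minions through a pillar top file. A minimal sketch, assuming the pillar root for the prod environment is /opt/salt/pillar/prod/ as noted at the top of this post:

[root@master ~]# cat /opt/salt/pillar/prod/top.sls
prod:
  '*':
    - httpd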


[root@master ~]# cat /opt/salt/environments/prod/top.sls 
prod:
  '*':
    - httpd



[root@master ~]# cat /opt/salt/environments/prod/httpd/init.sls 
pkg_installation_gpg:
  pkg.installed:
    - name: {{ pillar['pkg'] }}
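
Before running the highstate, we can confirm that the minion actually receives the decrypted value. pillar.item is a standard Salt execution function; if the GPG renderer works, this returns pkg: httpd in plain text:

[root@master ~]# salt "centos-01.mylab.com" pillar.item pkg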


[root@master ~]# salt "centos-01.mylab.com" state.highstate saltenv=prod
centos-01.mylab.com:
----------
          ID: pkg_installation_gpg
    Function: pkg.installed
        Name: httpd
      Result: True
     Comment: All specified packages are already installed
     Started: 04:40:03.764644
    Duration: 1553.839 ms
     Changes:   

Summary for centos-01.mylab.com
------------
Succeeded: 1
Failed:    0
------------
Total states run:     1
Total run time:   1.554 s

Done. The package whose name we stored as an encrypted Pillar value has been installed on the minion.

Wednesday, July 5, 2017

Chef integration with Jenkins

Chef Continuous integration with Jenkins 

We are going to set up the workflow shown in the diagram below.




From the workstation we write cookbooks and upload them to the Chef server. Currently, every change to a cookbook has to be uploaded to the Chef server manually.

If we want to automate this, we can use CI/CD tools like Jenkins or Bamboo. Here we are going to look at Jenkins integration with the Chef server.

We are going to install Jenkins on the workstation. Jenkins will check Git for new code, upload it to the Chef server, and after that we can execute chef-client on the Chef client nodes.

We can follow the Jenkins server installation steps at the following URL: https://tinyurl.com/y8znswrn

Log in to the Jenkins console and start creating a project.

Step 1: Create a new freestyle project.



Step 2: Choose Git as the SCM and paste the URL of the master repository from which we want to pull the code.


Step 3: Specify the Poll SCM schedule.


Step 4: Write the shell commands to execute when the poll detects a change (see the sketch below).
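
The exact commands depend on your repository layout; the following is only a sketch of such a build step, assuming the cookbook is called mycookbook, knife is already configured on the workstation, and the target node is named node1 (all hypothetical names):

# Jenkins "Execute shell" build step (sketch)
cd $WORKSPACE
knife cookbook upload mycookbook                       # push the changed cookbook to the Chef server
knife ssh 'name:node1' 'sudo chef-client' -x jenkins   # converge the node over SSH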


Step 5: Check the console output by clicking Console Output.


Step 6: Finally, chef-client is executed successfully on the remote machine.


Step 7: We can verify the Git contents on the workstation machine.


Conclusion: once we commit changes to the Git repository, Jenkins pulls them from the master repo to the workstation machine and executes the configured commands.


Friday, April 14, 2017

Setting up DNS service Add-On in kubernetes

Setting up DNS service Add-On in kubernetes


What things get DNS names?
Every Service defined in the cluster (including the DNS server itself) is assigned a DNS name. By default, a client Pod’s DNS search list will include the Pod’s own namespace and the cluster’s default domain. This is best illustrated by example:
Assume a Service named "my-service" in the Kubernetes namespace "dev". A Pod running in namespace dev can look up this service by simply doing a DNS query for my-service. A Pod running in a different namespace can look up this service by doing a DNS query for my-service.dev.
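
As a concrete illustration (assuming the kubernetes.local cluster domain that is configured later in this post), these are the name forms a Pod could use for that Service:

my-service                            # from inside namespace dev
my-service.dev                        # from any other namespace
my-service.dev.svc.kubernetes.local   # fully qualified name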

Kubernetes offers a cluster addon for DNS service discovery, which most environments enable by default. “SkyDNS” seems to be the standard DNS server of choice, since it was designed to work on top of etcd. The “kube-dns” addon is composed of a kubernetes service which, like all services, is allocated an arbitrary VIP within the preconfigured subnet (this is the IP that every other service will use for DNS); and a replication controller that will manage pods with the following containers inside them:

  1. A local etcd instance
  2. The SkyDNS server
  3. A process called kube2sky which binds SkyDNS to the kubernetes cluster
  4. A health check called healthz that monitors how DNS is being resolved


DNS IP: 10.254.0.10 (choose an IP from the cluster service range that is not allocated to any other service)
Domain Name: kubernetes.local (the cluster domain name we have chosen to use)

In order to set everything up, we need to retrieve the definition files for the service and the replication controller, as follows.
Note: change the highlighted settings (the master IP, DNS IP and domain name) according to your setup.

[root@kube-master ~]# wget https://gist.githubusercontent.com/jamiehannaford/850900e2d721a973bc6d/raw/710eade5b8d5a382cdc6d605d6cd2d43fb0c20fb/skydns-rc.yml

[root@kube-master ~]# wget https://gist.githubusercontent.com/jamiehannaford/b80465bf7d427b949542/raw/75e7c0ff3fc740ea0f4eb54e5d10753cccf1267b/skydns-svc.yml
Now we need to set the master IP and the domain name in the skydns-rc.yml file.
[root@kube-master ~]# vim skydns-rc.yml
Line no. 51
- -domain=kubernetes.local 
- -kube_master_url=http://10.10.1.136:8080
Line No. 62
- -domain=kubernetes.local → Your Domain Name
- -cmd=nslookup kubernetes.default.svc.kubernetes.local localhost >/dev/null

Next we need to change the DNS server IP in the skydns-svc.yml file as follows.
[root@kube-master ~]# vim skydns-svc.yml
Line No. 31
clusterIP: 10.254.0.10

Now we can define the service and replication controller.
[root@kube-master ~]# kubectl create -f skydns-rc.yml

[root@kube-master ~]# kubectl create -f skydns-svc.yml

This will create a replication controller and service under the kube-system namespace. To check their status, run:
[root@kube-master ~]# kubectl get pods --namespace=kube-system

 
[root@kube-master ~]# kubectl get services --namespace=kube-system
Once our pod is completely up and running, we need to pass the DNS server IP
and the domain to all of the kubelet agents running on our minion hosts.

To do this, we need to change the kubelet config files on our minions
and add the following flags.

[root@minion1 ~]# vim /etc/kubernetes/kubelet
KUBELET_ARGS="--cluster_dns=10.254.0.10 --cluster_domain=kubernetes.local"
[root@minion1 ~]# systemctl restart kubelet 

We are now done with the DNS settings. To test DNS resolution we can start a small
pod based on the busybox image as follows.

[root@kube-master ~]# vim /root/busybox.yaml

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always

Now we can create pod using the above yaml file.
[root@kube-master ~]# kubectl create -f busybox.yaml
[root@kube-master ~]# kubectl exec busybox -- nslookup kubernetes
We can substitute any service name that is currently running for kubernetes, and it will
resolve to that service's cluster IP.
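
The output should look roughly like the following; the addresses are only an illustration and will differ in your cluster:

Server:    10.254.0.10
Address 1: 10.254.0.10

Name:      kubernetes
Address 1: 10.254.0.1
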
That’s all about the DNS Add-on in Kubernetes. 



Monday, March 20, 2017

Docker Private Registry with docker-distribution

Docker Private Registry with docker-distribution 

Docker uses the Docker Hub registry, or another registry provided by your Linux vendor. If you do not want to use Docker Hub, or you use a Linux version that is not officially vendor supported, you can create your own Docker registry, push images there, and thus have more control over them.
Another reason for your own/private Docker registry can be that you have private or classified Docker images (for example, private images for a banking system, web server, database server, etc.) which you want to keep "in house" without exposing them to third-party locations.
The main advantage of the v2 Docker registry over registry v1 is its better API feature set, and it is worth investing time to learn how to deploy it. This post is too short to cover all of the registry v2 APIs, so I recommend reading about the API features in the Docker Registry HTTP API V2 documentation.
In order to use a local Docker registry, we have to install and configure it, and afterwards we will be able to push images to it.
In the process below I am going to describe the registry setup, using CentOS 7 as the operating system.
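
The steps below assume the docker-distribution package is already installed; if it is not, it can usually be pulled in from the CentOS extras repository (an assumption, your repositories may differ). The htpasswd tool used later for registry authentication comes from httpd-tools:

[root@host1 ~]# yum install -y docker docker-distribution httpd-tools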

[root@host1 ~]# rpm -qi docker-distribution
Name        : docker-distribution
Version     : 2.6.0
Release     : 1.el7
Architecture: x86_64
Install Date: Mon 20 Mar 2017 03:37:00 PM IST
Group       : Unspecified
Size        : 12796719
License     : ASL 2.0
Signature   : RSA/SHA256, Tue 07 Mar 2017 04:56:39 PM IST, Key ID 24c6a8a7f4a80eb5
Source RPM  : docker-distribution-2.6.0-1.el7.src.rpm
Build Date  : Tue 07 Mar 2017 05:39:16 AM IST
Build Host  : c1bm.rdu2.centos.org
Relocations : (not relocatable)
Packager    : CentOS BuildSystem 
Vendor      : CentOS
URL         : https://github.com/docker/distribution
Summary     : Docker toolset to pack, ship, store, and deliver content
Description :
Docker toolset to pack, ship, store, and deliver content
[root@host1 ~]# rpm -ql docker-distribution
/etc/docker-distribution/registry/config.yml
/usr/bin/registry
/usr/lib/systemd/system/docker-distribution.service
Here we specifically need to look at the systemd unit file. The unit file starts the service based on a configuration file; the following is the configuration file of docker-distribution, which we can edit according to our specifications.
[root@host1 ~]# cat /etc/docker-distribution/registry/config.yml 
version: 0.1
log:
  fields:
    service: registry
storage:
    cache:
        layerinfo: inmemory
    filesystem:
        rootdirectory: /var/lib/registry
http:
    addr: 10.10.1.131:5000                      --> My Docker host ip
    net: tcp
    host: https://host1.example.com:5000        --> My Docker hosts hostname
    secret: techvalb
    tls:
        certificate: /etc/certs/host1.crt
        key: /etc/certs/host1.key
auth: 
    htpasswd:
        realm: example.com
        path: /etc/certs/.dockerpasswd
[root@host1 ~]# mkdir -p /etc/certs
[root@host1 ~]# cd /etc/certs/
[root@host1 certs]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout host1.key -x509 -days 365 -out host1.crt
[root@host1 certs]# htpasswd  -c -B .dockerpasswd techvlab
[root@host1 certs]# systemctl restart docker.service 
[root@host1 certs]# systemctl restart docker-distribution.service 
[root@host2 ~]# docker login host1.example.com:5000
Username: techvlab
Password: 
Error response from daemon: Get https://host1.example.com:5000/v1/users/: x509: certificate is valid for host1.example.com, not host1.example.com
[root@host2 ~]# 

Because our certificate is self-signed, we need to explicitly trust it at the OS layer. We can simply copy the certificate to the following location on the client.
[root@host1 certs]# scp /etc/certs/host1.crt host2:/etc/pki/ca-trust/source/anchors/host1.crt 
root@host2's password: 
host1.crt                         100% 2171     2.1KB/s   00:00    
[root@host1 certs]# 
Now switch to the Docker client, update the trust store, and communicate with the Docker registry.
[root@host2 ~]# update-ca-trust enable
[root@host2 ~]# docker login host1.example.com:5000
Username: techvlab 
Password: 
Login Succeeded
[root@host2 ~]# docker tag 6b914bbcb89e host1.example.com:5000/mynewimage 
[root@host2 ~]# docker push host1.example.com:5000/mynewimage
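
To confirm the image landed in the registry, we can query the catalog endpoint of the Docker Registry HTTP API V2 mentioned above, authenticating with the htpasswd user created earlier; it should return something like {"repositories":["mynewimage"]}:

[root@host2 ~]# curl -u techvlab https://host1.example.com:5000/v2/_catalog
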
That's how we can set up our own registry. In the next post we will see how we can set up Docker Swarm mode.




Thursday, March 16, 2017

NetApp Plugin for Docker Volume

NetApp Plugin for Docker Volume Management

A data volume is a specially designated directory that bypasses storage driver management. Data volumes persist data independently of a container's life cycle. When you delete a container, the Docker daemon does not delete any data volumes. You can share volumes across multiple containers. Moreover, you can share data volumes with other computing resources in your system.

We can connect an enterprise-grade storage system to the Docker host, so we can create volumes directly from the Docker host and attach them to containers.

Here we are going to see how we can integrate NetApp DATA ONTAP with docker using NFS.


NetApp Administration:
1. Create an SVM (Storage Virtual Machine) with management and data LIFs and enable the NFS protocol, as shown below.




Plugin Installation on Docker Host:
1. We need to install the NetApp plugin on the Docker host, which we can do with the following steps.

[root@server1 ~]# mkdir /etc/netappdvp
[root@server1 ~]# vim /etc/netappdvp/config.json
{
    "version": 1,
    "storageDriverName": "ontap-nas",
    "managementLIF": "192.168.0.191",
    "dataLIF": "192.168.0.192",
    "svm": "svm_nfs",
    "username": "vsadmin",
    "password": "techvlab@123",
    "aggregate": "aggr1"
}
[root@server1 ~]# docker plugin install store/netapp/ndvp-plugin:1.4.0
[root@server1 ~]# docker plugin ls  => Netapp plugin will be listed here
ID                  NAME                             DESCRIPTION                          ENABLED
08d918a5f547        store/netapp/ndvp-plugin:1.4.0   nDVP - NetApp Docker Volume Plugin   true
[root@server1 ~]# docker volume create -d 08d918a5f547 --name ndvp_1
ndvp_1
[root@server1 ~]# docker volume ls
DRIVER                           VOLUME NAME
store/netapp/ndvp-plugin:1.4.0   ndvp_1
store/netapp/ndvp-plugin:1.4.0   test

We can now verify the volumes on ONTAP.
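
To actually consume the volume from a container, it can be mounted like any other named Docker volume. This is only a sketch; the alpine image and the /data mount point are arbitrary examples:

[root@server1 ~]# docker run -it --rm -v ndvp_1:/data alpine sh
/ # df -h /data     # the NFS export backing the volume should be mounted here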