
Virtual Machine Provisioning on Microsoft Azure Using Terraform



What is Terraform:  Terraform provides a flexible abstraction of resources and providers. This model allows for representing everything from physical hardware, virtual machines, and containers, to email and DNS providers.



Terraform vs. Chef, Puppet, etc.

Configuration management tools install and manage software on a machine that already exists. Terraform is not a configuration management tool, and it allows existing tooling to focus on their strengths: bootstrapping and initializing resources.

Using provisioners, Terraform enables any configuration management tool to be used to set up a resource once it has been created. Terraform focuses on the higher-level abstraction of the datacenter and associated services, without sacrificing the ability to use configuration management tools to do what they do best. It also embraces the same codification that is responsible for the success of those tools, making entire infrastructure deployments easy and reliable.
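As an illustration of that hand-off, a provisioner can call an existing configuration management tool once Terraform has created a resource. The sketch below is purely hypothetical and not part of this post's template: the null_resource name, the Ansible playbook site.yml, and the inventory placeholder are all made up for illustration.

# Hypothetical sketch: hand a freshly created machine to a configuration
# management tool (here Ansible, invoked through a local-exec provisioner).
resource "null_resource" "configure_vm" {
    provisioner "local-exec" {
        command = "ansible-playbook -i 'VM_PUBLIC_IP,' site.yml"
    }
}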


Setting up Terraform on CentOS: 

Terraform is distributed as a single binary package for Linux distributions, which can be downloaded from https://www.terraform.io/downloads.html

Unzip the package and place the terraform binary somewhere in the machine's PATH, e.g. /usr/bin/terraform.
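A minimal sketch of those steps, assuming the v0.11.3 release used later in this post (adjust the URL for whichever version you download):

[root@node1 ~]# curl -LO https://releases.hashicorp.com/terraform/0.11.3/terraform_0.11.3_linux_amd64.zip
[root@node1 ~]# unzip terraform_0.11.3_linux_amd64.zip
[root@node1 ~]# mv terraform /usr/bin/terraform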



[root@node1 ~]# terraform --version
Terraform v0.11.3

[root@node1 ~]# which terraform
/usr/bin/terraform
[root@node1 ~]#

Terraform allows us to define and create complete infrastructure deployments in Azure. We build Terraform templates in a human-readable format that create and configure Azure resources in a consistent, reproducible manner.

First of all, we need to install the Azure CLI on the Terraform node so that Terraform can interact with Azure. We can do this using the following steps.


[root@node1 ~]# rpm --import https://packages.microsoft.com/keys/microsoft.asc
[root@node1 ~]# cat /etc/yum.repos.d/azure-cli.repo
[azure-cli]
name=Azure CLI
baseurl=https://packages.microsoft.com/yumrepos/azure-cli
enabled=1
gpgcheck=1
gpgkey=https://packages.microsoft.com/keys/microsoft.asc
[root@node1 ~]# yum install azure-cli
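Note that the azure-cli.repo file shown above has to exist before running yum install. One way to create it in a single step (a sketch, run as root) is:

[root@node1 ~]# sh -c 'echo -e "[azure-cli]\nname=Azure CLI\nbaseurl=https://packages.microsoft.com/yumrepos/azure-cli\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/azure-cli.repo'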


We can pass the Azure credentials to Terraform either through bash shell environment variables or by defining them directly in the Terraform template. First of all, we need to retrieve the details of our environment from Azure.



[root@node1 ~]# az login --username XXXXXXXXXX@hotmail.com --password XXXXXXXXXX
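The az login output already contains the subscription ID and tenant ID. If you need to look them up again later, a small sketch to print just those fields:

[root@node1 ~]# az account show --query "{subscriptionId:id, tenantId:tenantId}" -o json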

Then we need to create a service principal that Terraform will use to interact with the Microsoft Azure services. 



[root@node1 ~]#  az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/your_subscription_id_from_previous_command"
This command will give you output in the following format.
{
  "appId": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
  "displayName": "azure-cli-2017",
  "name": "http://azure-cli-2017-12-12-06-46-08",
  "password": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
  "tenant": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
}

Capture the appId, password, and tenant ID from the above output. 
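If you prefer the environment-variable route mentioned earlier instead of embedding credentials in the template, the azurerm provider can also read them from ARM_* variables; the values below are placeholders:

[root@node1 ~]# export ARM_SUBSCRIPTION_ID="your_subscription_id"
[root@node1 ~]# export ARM_CLIENT_ID="appId_from_above"
[root@node1 ~]# export ARM_CLIENT_SECRET="password_from_above"
[root@node1 ~]# export ARM_TENANT_ID="tenant_from_above"

With these exported, the provider "azurerm" block in the template can be left without the credential arguments.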

Now we can define the Terraform environment. The good thing about Terraform is that we can write the template in any directory and then initialize and apply it from there.


First of all, I am writing my template to create the virtual machine in the Azure environment. We need to define all the components and their details within the template.

You can download the template from here

Or you can write it as below. 



[root@node1 ~]# mkdir Terraform_Templates
[root@node1 ~]# vim Terraform_Templates/azure.tf
provider "azurerm" {
      subscription_id="Your_subscrption_from_az_login_cli"
      client_id="your_application_id_from_the_same_cli"
      client_secret="Your_password_"
      tenant_id="Tenant_id_from_the_same_cli"
}
resource "azurerm_resource_group" "rg" {
    name = "testResourceGroup"
    location = "westus2"
    tags {
          environment = "Terraform Demo"
        } 
}

# Creating a virtual network now. 
resource "azurerm_virtual_network" "myTestNetwork" {
    name            = "myVnet"
    address_space   = ["10.0.0.0/16"]
    location        = "West US 2"
    resource_group_name = "${azurerm_resource_group.rg.name}"

    tags {
        environment = "Terraform Demo"
        }
}

# Create a subnet.
resource "azurerm_subnet" "myTestSubnet" {
    name                 = "mySubnet"
    resource_group_name  = "${azurerm_resource_group.rg.name}"
    virtual_network_name = "${azurerm_virtual_network.myTestNetwork.name}"
    address_prefix       = "10.0.1.0/24"
}

# Creating the public IPs. 
resource "azurerm_public_ip" "myPublicIP" {
    name        = "myPublicIP"
    location    = "West US 2"
    resource_group_name = "${azurerm_resource_group.rg.name}"
    public_ip_address_allocation = "dynamic"

    tags {
        environment = "Terraform Demo"
        }
}

# Creating the network Security Group and rules. 
resource "azurerm_network_security_group" "mySecurityGroup" {
    name          = "myNetworkSecurityGroup"
    location      = "West US 2"
    resource_group_name = "${azurerm_resource_group.rg.name}"

    security_rule {
      name      = "SSH"
      priority  = 1001
      direction = "Inbound"
      access    = "Allow"
      protocol  = "Tcp"
      source_port_range = "*"
      destination_port_range = "22"
      source_address_prefix   = "*"
      destination_address_prefix = "*"
      }
    
    tags {
        environment = "Terraform Demo"
        }
}

#Creating network interface
resource "azurerm_network_interface" "myNic" {
    name  = "myNIC"
    location = "West US 2"
    resource_group_name = "${azurerm_resource_group.rg.name}"
    network_security_group_id = "${azurerm_network_security_group.mySecurityGroup.id}"

    ip_configuration {
      name      = "myNicConfiguration"
      subnet_id = "${azurerm_subnet.myTestSubnet.id}"
      private_ip_address_allocation = "dynamic"
      public_ip_address_id  = "${azurerm_public_ip.myPublicIP.id}"
      }

    tags {
      environment = "Terraform Demo"
     }
}
# Generate random text for a unique storage account name.
resource "random_id" "randomID" {
    keepers = {
        resource_group  = "${azurerm_resource_group.rg.name}"
        }
    byte_length = 8
}

# Create storage account for boot diagnostics
resource "azurerm_storage_account" "mystorageaccount" {
    name      = "diag${random_id.randomID.hex}"
    resource_group_name = "${azurerm_resource_group.rg.name}"
    location  = "West US 2"
    account_tier = "Standard"
    account_replication_type  = "LRS"

    tags {
      environment = "Terraform Demo"
    }
}

# create a virtual machine
resource "azurerm_virtual_machine" "myterraformvm" {
  name      = "myVM"
  location  = "West US 2"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  network_interface_ids = ["${azurerm_network_interface.myNic.id}"]
  vm_size   = "Standard_F2s_v2"

  storage_os_disk {
    name    = "myOsDisk"
    caching = "ReadWrite"
    create_option = "FromImage"
    managed_disk_type = "Standard_LRS"
   }
  storage_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "16.04.0-LTS"
    version   = "latest"
    }

  os_profile {
    computer_name = "myvm"
    admin_username  = "azureuser"
   }
  os_profile_linux_config {
    disable_password_authentication = true
    ssh_keys {
      path    = "/home/azureuser/.ssh/authorized_keys"
      key_data = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDAjdjirf79Se2O0BmiSSshhPyd
                         d0tKC8AVIjbUMDsLV3BaALrK7nzRHbctsVnftjmFkXP4X8bQvm6M/XoSGtu/dDqf
                         5qRreg0+dOJdHCLP1Yhg+M4VWpjLadxXgfXIZPum6pGrAPdm8ZWyQpVSHJrhx
                         Mlo+69Wol5lVx91oQ0yKptvQSy1HA0Y3ubUNcwfVEzSaQlLGK5CdS6vgVoe62R8
                         Z7UTn5SzDCvIBKxPPqvQzVD6IrtHmqP2LErL0TtCK4YOBrUygIk+9Bv2Y0xaHS2
                         Mz0wR4C+Tf+C2zxx8X6cIHl5JC7n7iqPgPiXvRHd/MSSxCIcvUWxhJZMy8okvgkj
                         H root@jenkins.example.com"
      }
   }
  boot_diagnostics {
    enabled = "true"
    storage_uri = "${azurerm_storage_account.mystorageaccount.primary_blob_endpoint}"
    }
    tags {
      environment = "Terraform Demo"
     }
}
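Optionally, before saving, an output block can be appended to the same file so Terraform prints the VM's public IP once apply finishes. This is a small extra sketch, not part of the original template; note that with a dynamic allocation the address may stay empty until the VM is running:

# Optional: expose the public IP as a Terraform output.
output "public_ip_address" {
    value = "${azurerm_public_ip.myPublicIP.ip_address}"
}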

Save and exit the file. Now we can initialize Terraform in the same directory using the following command.


[root@node1 Terraform_Templates]# terraform init

Before executing the template, if you want to check what and how many resources are going to be created, you can use the below command.


[root@node1 Terraform_Templates]# terraform plan
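If you want the apply step to execute exactly the plan you reviewed, the plan can also be saved to a file and applied later (the tfplan file name below is arbitrary):

[root@node1 Terraform_Templates]# terraform plan -out=tfplan
[root@node1 Terraform_Templates]# terraform apply tfplan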


Finally, we can apply that plan and wait; in around 10 minutes the machine will be provisioned based on the details we have given.
[root@node1 Terraform_Templates]# terraform apply


After the machine gets launched successfully, we can get its public IP and access it.
[root@node1 ~]# az vm show --resource-group testResourceGroup --name myVM -d --query publicIps -o tsv
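If you appended the optional output block shown earlier, the same address can also be read straight from Terraform's state:

[root@node1 Terraform_Templates]# terraform output public_ip_address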

Now you can use the listed IP to access the remote machine.
[root@node1 ~]# ssh Public_IP -l azureuser


In the next post, we will see how we can use variables in the Terraform template.

Till then, test it, review it, and give your input.
