IaC World-1: Terraform 101

Ozan Eren
17 min read · Feb 27, 2021

Hello,
I decided to start a new series on topics related to the IaC (Infrastructure as Code) approach, beginning with Terraform. Terraform is already well known to many people, but I hope you enjoy reading about it once more from my perspective. In this first article of the series, you will find general information about IaC, an overview of Terraform, and some basic example scenarios. I also plan to dedicate more than one article to each tool in this series.

About IaC

As the name suggests, Infrastructure as Code (IaC) is the practice of expressing infrastructure as code. The "code" in question covers everything our environments (infrastructures) may need: runtime environments, network settings, resource requirements and their parameters. Since this code changes over time, it should be kept and versioned in a VCS (Version Control System) such as Git. These predefined files (manifests) are later used to set up the infrastructure quickly and consistently with different tools. How the code is designed and written varies depending on the capabilities and rules of the tool used. In the past, "IaC" often meant IT staff running scripts or commands one after another; today it means developing and maintaining this code with dedicated tools, within the framework of each tool's rules and capabilities.

So why and how do we do this? What are the benefits and differences of the existing tools? Let's try to understand IaC better by answering these questions.

Problems… Why and How IaC?

One of the most important questions, in the past as well as today, is how existing software and services will be delivered; the success of the work is largely shaped by the answer. In the past, when companies mostly ran services on physical servers, their infrastructure was of the type we can call "static". To picture a static infrastructure, think of a data center: disks and RAM in the servers; network elements such as routers and switches. Consider the cost of setting up and configuring all of this from scratch for a large company providing large-scale services... Consider the approval processes and hardware deliveries required just to add capacity (horizontal or vertical scaling)... Consider integrating and configuring dozens of servers and routers... Anyway, let's stop there :) These are real time and money sinks.

Fortunately, today there are "dynamic" types of infrastructure that solve these problems. With cloud providers, virtualization technologies and configuration management tools, we can handle all of the above easily, sometimes with a single click. However, dynamic infrastructures bring problems of their own. To give a few examples:

  • Although we can easily create infrastructure on cloud providers or other virtualization platforms, the speed and reliability of this process still matter a great deal.
  • The infrastructure is ready, but not yet configured. Doing this manually in an architecture with many VMs is a huge waste of time.
  • What if we need to quickly reproduce the architecture we built on AWS on Azure or GCP, for example?
  • We prepared our infrastructure, but a problem occurred and we need to rebuild it entirely. How do we go back to the beginning and restore the original, immutable form of this infrastructure?

These problems answer the question of why we developed, and still use, the IaC concept. As noted above, IaC used to mean IT staff running scripts or commands one after another; what we are talking about today is developing and maintaining this work with dedicated tools, in a completely different framework.

IaC Concepts

Before going further, I want to explain a few IaC concepts so that we don't get confused later. If you start reading about this topic, you are very likely to run into them.

→ Mutable — Immutable Infrastructure

First, let's see what "mutable" and "immutable" mean.
"Mutable" means "changeable", and "immutable" means its opposite: "constant, unchangeable". By "mutable" infrastructure we mean traditional systems that remain open to configuration and other changes after they are set up. Think of how we traditionally operate: we set up a system, then SSH into it to apply updates, add software packages, adjust environments, and so on, one step at a time. Frankly, this is still how it works in many places. That is why we call these systems "mutable infrastructure". In such an environment, even though we perform the same operations on a group of servers, over time individual servers can end up completely different and unique. This subtle divergence in configuration, which is difficult to diagnose and reproduce, is called "configuration drift".

"Immutable infrastructure", on the other hand, is built from ready-made artifacts such as VM or container images, provisioned by IaC tools such as Terraform or CloudFormation. These pre-baked images, created with the help of tools such as Docker or VMware, contain all the configuration the infrastructure needs, so no major configuration changes are made on the infrastructure after installation. When major changes are needed, new images are created and, as a best practice, versioned in source control (e.g. Git). Servers are relaunched from the new image and the old ones are terminated. This eliminates "configuration drift" and makes changes more reliable, although the process itself is somewhat slower.

There is no single truth in the mutable-vs-immutable debate; the two types of infrastructure have different benefits and drawbacks. If you want more detail on the subject, I strongly recommend the article here.

→ Declarative-Imperative(Procedural) Code

Two other important concepts are "declarative" and "imperative" (also called "procedural"). Choosing the appropriate tool is very important in the IaC approach: you should pick it according to your competencies, your goals and the language the tool uses. Otherwise you may not achieve your goals, or you may struggle along the way. These two concepts will help with tool selection.

In short, in the "declarative" approach, we describe, in the language of the tool we use, what should exist after the code runs, down to the details we care about. We do not need to tell the tool how to get there. In a way, we answer the question "what is needed". In the "imperative" (sometimes called "procedural") approach, we instead code how the necessary resources should be created, step by step. Here we answer the question "how to do it".

A small example will make these two concepts clearer. Say you want a coffee at your table while reading this article. How would you express that "declaratively" and "imperatively"? With a "declarative" approach, it is enough to say:

"Can I have a flat white on my table while I read Ozan's Medium article titled 'IaC World-1: Terraform'?" The waiter will bring your coffee to your table shortly :)

With the "imperative (procedural)" approach, we would express the same request as follows:

"While reading Ozan's Medium article titled 'IaC World-1: Terraform', I want to drink a flat white. To do this, you must do the following: prepare 40 ml of double espresso, steam 80 ml of milk into microfoam, slowly pour the milk over the double espresso, and bring it to the table." The waiter will again bring your coffee to your table shortly :)

Finally, we can understand the declarative-imperative approach through the picture below:

Source: https://www.copebit.ch/how-declarative-and-imperative-styles-differ-in-infrastructure-as-code/

Again, as with "mutable-immutable", there is no single "right answer" here. Sometimes it is better to define your infrastructure "declaratively" and sometimes "imperatively (procedurally)". Different tools use each approach, and sometimes we even use them together. Decide according to your case; either way, the coffee arrives :)
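To ground the coffee analogy in IaC terms, here is a minimal declarative sketch in Terraform's HCL (the AMI ID and resource names are illustrative, not from the article's repository). We only state what should exist; Terraform works out the API calls needed to get there:

```hcl
# Declarative: "there must be 3 small web instances" — nothing about HOW.
resource "aws_instance" "web" {
  count         = 3
  ami           = "ami-0abcdef1234567890"   # illustrative AMI ID
  instance_type = "t2.micro"
}
```

An imperative equivalent would be a script that loops three times and explicitly calls something like `aws ec2 run-instances` on each iteration, spelling out every step of the process itself.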

→ Configuration Management Tools — Provisioning Tools

Finally, let's look at what these two terms mean. Tools used in the IaC world are generally divided into "Configuration Management Tools" and "Provisioning Tools".

"Configuration Management Tools" are designed to configure existing infrastructure. Typical tasks include defining users on the OS, changing the status of services (start-stop-restart), and installing or updating software packages. Examples include:

  • Chef
  • Puppet
  • Ansible
  • SaltStack

"Provisioning Tools", on the other hand, prepare the items that make up your planned infrastructure (VMs, database servers, load balancers, VPCs, subnets, firewalls, etc.). These tools usually call the APIs of providers (VMware, AWS, Kubernetes, etc.) to create the necessary infrastructure. Examples include:

  • Terraform
  • CloudFormation
  • OpenStack Heat

We can see a summary of all the titles so far and more in the image below:

As you can see in the table above, there are many alternative tools. Which to use is entirely up to you; evaluate them from every angle and decide accordingly. A common opinion is that this is the hardest and most important part of the job: choosing the right tool. There are other concepts worth knowing in the IaC world, but to keep this article from getting any longer, I will explain them in later articles.

Now, let's look at Terraform, which I have been working with recently...

Quick Introduction

Terraform's initial release was published by HashiCorp on July 28, 2014. Alongside the open-source tool, a "Terraform Enterprise" version is also available. It is written in Go. In its simplest form, Terraform lets you define your existing or new infrastructure as declarative code, change the infrastructure by changing that code, and vary the changes according to your environment type (test/prod, etc.). The code in question is written in "HCL (HashiCorp Configuration Language)", developed by HashiCorp, which is very easy to read, write and understand; HCL is also used in HashiCorp's other products. Of course, this is one of the simplest definitions I can give you.

Why Terraform in my opinion?

Terraform is a preferred tool for performing mission-critical tasks with high reliability and for eliminating unnecessary workload in many different areas, such as SDN (Software-Defined Networking) and building multi-cloud structures, and it integrates with many DevOps tools. For my part, no single tool in the IaC world has yet imposed itself on me through some indispensable feature, but I would like to explain why I started learning the IaC approach with a provisioning tool like Terraform. Briefly:

  • HCL, the language in which Terraform configurations are written, seemed understandable to me.
  • More importantly, Terraform can integrate with a wide variety of providers.
  • The documentation seemed very clear and simple to me. You can see the detailed documentation page here.
  • It enables immutable infrastructure by making infrastructure changes seamless.
  • It works with many providers. Here you can see the list of all the providers Terraform can work with.

Terraform Core Concepts

There are a few Terraform concepts we need to know; we write the code that creates our infrastructure according to them. While explaining this part, I'll be using the tag of my GitHub repository below. You can browse it if you wish:

  • Variables

Variables are the inputs we use in Terraform. They are key-value structures that let us customize the configuration however we like — for example, ext_port, storage name, container name, and so on — and we typically keep them in a file such as "variables.tf". Below you can see how the "int_port" and "ext_port" variables are used in "main.tf", the file where we define our main Terraform code:

Variables Usage
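Since the screenshot is not reproduced here, this is a minimal sketch of what such a definition might look like, in the 0.11-era syntax the article uses. The variable names follow the ones mentioned above; the values and the container resource are illustrative reconstructions, not the repository's literal contents:

```hcl
# variables.tf — input variables; ext_port is a map keyed by environment
variable "int_port" {
  default = 2368                 # internal (container) port, same for both environments
}

variable "ext_port" {
  type = "map"
  default = {
    dev  = 8081                  # external port for the dev environment
    prod = 80                    # external port for the prod environment
  }
}

# main.tf — referencing the variables with interpolation syntax
resource "docker_container" "container" {
  name  = "blog"
  image = "${docker_image.image.latest}"
  ports {
    internal = "${var.int_port}"
    external = "${lookup(var.ext_port, var.env)}"   # picks the value for the chosen env
  }
}
```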

Note that in the code example above, a variable can take one of two values, "dev" or "prod". We will examine how to choose between them later in the article.

A quick note: the "${...}" syntax is how we reference another value from within a string. This is called "interpolation syntax" and is used frequently in Terraform. More detailed information can be found here.

  • Provider

Providers are plugins that let Terraform make API calls to the services we use. Through these plugins we communicate with the target services and actually create our infrastructure. Examples of Terraform providers include Docker, VMware, and cloud services such as AWS. As mentioned before, you can see all the providers Terraform works with here.
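A provider block can be as small as a few lines. Here is a minimal sketch for the Docker provider used later in this article (the socket path is the default for a local Docker daemon; adjust it for your environment):

```hcl
# main.tf — telling Terraform which provider plugin to use.
# The Docker provider talks to the Docker daemon over its API.
provider "docker" {
  host = "unix:///var/run/docker.sock"   # local Docker socket
}
```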

  • Modules

Modules are the hierarchical folder structures in which the Terraform files defining the infrastructure are organized. They let you reuse configurations by calling them again and again instead of repeating code. Using modules is the best way to keep your Terraform code effective and testable and your infrastructure modular.

By default, every Terraform configuration actually lives in a module called the "root module". We can picture it as follows:

root-module/   (we don't have to name it this)
├── main.tf
├── outputs.tf
└── variables.tf

We can create modules for many different purposes in Terraform. For example, we can create separate modules for staging and production environments. As another example, when working with the Docker provider, we can define modules such as “image” and “container” and call them in a root module. We see this scenario in the example below:

Modules Usage
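In place of the screenshot, here is a sketch of how a root module might call "image" and "container" modules in 0.11-style syntax. The module paths, variable names and the `image_out` output are hypothetical illustrations, not the repository's literal contents:

```hcl
# root main.tf — calling two local modules and wiring them together
module "image" {
  source = "./image"                       # module that pulls the Docker image
}

module "container" {
  source   = "./container"                 # module that creates the container
  image    = "${module.image.image_out}"   # consume an output of the "image" module
  int_port = "${var.int_port}"
  ext_port = "${lookup(var.ext_port, var.env)}"
}
```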

You can better understand the file hierarchy from the link below:

  • State

State contains cached information about the infrastructure Terraform manages and its configuration. In the next section, where I explain how Terraform works, we will meet a component called "Terraform Core". Terraform Core keeps the latest known state of the infrastructure in the state file; the next time you run your code, it applies whatever has changed in your configuration (additions, deletions, updates, etc.) and otherwise leaves your infrastructure as it is. Likewise, if you change your infrastructure manually, Terraform will use the state as its reference on the next run and bring the infrastructure back in line with what is defined. This is how "immutable infrastructure" is achieved.

  • Resources

Resources define the objects that make up the infrastructure we configure (VMs, containers, DNS records, virtual networks, etc.). Resources are defined as blocks, and each block describes one or more objects. Each resource type has many configurable arguments, but only a small subset is needed to get started. For example, to create an instance on AWS, we can do the following:

Resources Usage
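Since the screenshot is missing, here is a minimal sketch of such a resource block (the AMI ID and names are illustrative; 0.11-era syntax):

```hcl
# A minimal AWS EC2 instance resource
resource "aws_instance" "example" {
  ami           = "ami-0abcdef1234567890"   # illustrative AMI ID
  instance_type = "t2.micro"

  tags {
    Name = "example-instance"
  }
}
```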
  • Outputs

Outputs are the return values of a module; they are printed after an apply and can be consumed by other Terraform modules. The example below makes this clearer:

Outputs Usage
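In place of the screenshot, a sketch of what such outputs might look like for the Docker scenario in this article (the attribute names follow the Docker provider of that era; treat the exact names as illustrative):

```hcl
# outputs.tf — values printed after "terraform apply" and
# consumable from a parent module as "${module.<name>.container_name}"
output "container_name" {
  value = "${docker_container.container.name}"
}

output "ip_address" {
  value = "${docker_container.container.ip_address}"
}
```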

Terraform Working Principle

When we create infrastructure with Terraform, our configuration files basically pass through four stages, at the end of which the infrastructure is ready:

Source: https://www.ovh.com/blog/private_cloud_and_hashicorp_terraform_part1/

Init → Initializes the working directory containing the Terraform files, preparing the workspace.

Plan → Creates an execution plan to reach the desired state of the infrastructure. When configuration files change, or when plans are created with different parameters for different environments (dev or prod), the "plan" step is repeated and different plans are produced.

Apply → Applies changes to the infrastructure's actual/current state to reach the desired state. The "apply" stage executes a Terraform plan; after it completes, we have the desired infrastructure in hand.

Destroy → Deletes the infrastructure resources managed by the configuration when they are no longer needed.

Looking at Terraform's architecture, all of the steps above are carried out by the component called "Terraform Core", driven from the CLI with commands such as "terraform init", "terraform plan", "terraform apply" and "terraform destroy". If you remember, we met Terraform Core earlier in the article as the component that keeps the "state".
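The four stages map directly onto four CLI commands (these assume Terraform is installed and that you run them inside a directory containing configuration files):

```shell
terraform init      # prepare the working directory and provider plugins
terraform plan      # compute an execution plan against the current state
terraform apply     # execute the plan: create/update/delete resources
terraform destroy   # tear down everything the configuration manages
```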

Let’s take a look at how Terraform works step by step with the following Terraform architecture diagram:

Source: https://geekflare.com/terraform-for-beginners/

First, "Terraform Core" processes the configuration files you created. In the second step, it updates the "Terraform State" so the desired form of the infrastructure can be determined. It then works out what needs to be done (update, add, delete) and applies it to the infrastructure, performing the four stages (init-plan-apply-destroy) we covered above as the situation requires. As a result, infrastructure objects are created on the provider we are working with.

Finally, you can watch the video below as a complement. HashiCorp co-founder Armon Dadgar describes Terraform in a very simple and understandable way, so you can fill in any remaining gaps.

Docker Container Provisioning with Terraform

In this section, I will show a basic example with Docker, one of the providers of Terraform.

Important note! → I worked with Terraform v0.11.13 while writing this article. This is an old Terraform version, so there may be syntax differences between the GitHub repository and current versions, and you may get errors if you run the examples on a different version. You can find the necessary explanations here and here, and a solution for "interpolation syntax deprecated" errors here. I will update the repository content from time to time. Thank you for your understanding...

First of all, if Terraform is not installed in your environment, you can easily install it by following the steps below:

  • Download the latest release for your preferred environment here. I used a Linux (CentOS) environment, but you can also install it on Windows, macOS, and others:
  • Then unzip the downloaded file and put the binary under "/usr/local/bin":
  • Terraform is now ready to use. You can test it as follows:
  • For the demo, I will be using the following GitHub repository (folder: docker → basics):
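The installation steps above can be sketched as the following commands (the version in the URL matches the one used in this article; substitute the latest release and your platform's archive name as appropriate):

```shell
# Download the release archive, unzip it, and put the binary on the PATH
wget https://releases.hashicorp.com/terraform/0.11.13/terraform_0.11.13_linux_amd64.zip
unzip terraform_0.11.13_linux_amd64.zip
sudo mv terraform /usr/local/bin/

# Verify the installation
terraform version
```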

Our scenario is as follows: we will create containers with different properties for two environments named "prod" and "dev". Along the way we will, of course, create and apply separate Terraform plans. I will give the details at each step.

Below you can see the files used for this purpose:

If we want to talk about the code above, we can say the following:

→ We will use different external ports for the two environments: port 8081 for "dev" and port 80 for "prod". The internal port will be the same for both.
→ Likewise, we will use the "ghost:latest" image for "dev" and the "ghost:alpine" image for "prod".
→ Variables such as "container_name" and "image_name" will also change with the environment type.

In fact, when you examine these properties, you will see that each is defined as a "map"-type variable in "variables.tf".
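A sketch of what those map variables might look like in "variables.tf" (0.11-era syntax; the ports and images follow the scenario described above, while the container names are hypothetical placeholders rather than the repository's literal contents):

```hcl
variable "env" {
  description = "environment type, passed in with -var env=dev|prod"
}

variable "int_port" {
  default = 2368                  # Ghost's internal port, same for both environments
}

variable "ext_port" {
  type = "map"
  default = {
    dev  = 8081
    prod = 80
  }
}

variable "image_name" {
  type = "map"
  default = {
    dev  = "ghost:latest"
    prod = "ghost:alpine"
  }
}

variable "container_name" {
  type = "map"
  default = {
    dev  = "blog_dev"             # hypothetical names for illustration
    prod = "blog_prod"
  }
}
```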

  • First, we go to the relevant folder and run our first command, "terraform init". Terraform now knows that we are working here.

Then we can create our plan for the "dev" environment. To do this, we run the command terraform plan -out=tfdev_plan -var env=dev . With "-var env=dev" we supply a variable that is not predefined in the Terraform configuration but is required as an input; we need to pass it every time we create a plan or destroy resources (in Terraform 0.11.13). As you can see, Terraform has determined the resources to be created for us:

  • As you can see, we have created a plan called "tfdev_plan". Now we can apply it. As the picture below shows, all the planned resources have been created for us by Terraform. At the bottom, under "Outputs", we can also see the values of the outputs we defined in the "outputs.tf" file:
  • When we check in Docker, we can see that the container has been created. As configured in Terraform, we used port "8081" as the external port for the "dev" environment, mapped internally to 2368. Once the container is up, the sample Ghost blog page can be reached at "IP:8081" for the "dev" environment (and at "IP:80" for "prod"):
  • Next, to build the other environment, we can either destroy the "dev" environment or directly create the new "prod" plan and let Terraform re-create the infrastructure accordingly.
    Here I prefer to destroy first, as it is more graceful. To do this, we run the following command:
  • At this stage we have a fresh environment again, with the previous "dev" resources deleted. Let's create a plan for the "prod" environment:
  • We apply this plan in the same way:
  • Our container is created, as seen below:
  • When we are done, we can remove this environment too. We again use the "terraform destroy" command, but this time we can pass the "-auto-approve" flag to delete everything directly, without waiting for confirmation.
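Since the screenshots are not reproduced here, the full command sequence for the walkthrough above can be sketched as follows (Terraform 0.11-era CLI; assumes you are in the repository's docker → basics folder):

```shell
# dev environment: plan, apply, inspect, destroy
terraform init
terraform plan -out=tfdev_plan -var env=dev
terraform apply tfdev_plan
docker ps                                        # container mapped to external port 8081
terraform destroy -var env=dev

# prod environment: same flow with a separate plan file
terraform plan -out=tfprod_plan -var env=prod
terraform apply tfprod_plan
terraform destroy -var env=prod -auto-approve    # skip the interactive confirmation
```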

We have come to the end of this article. In upcoming articles, I will show how you can use Terraform to create and manage Kubernetes, Docker, Jenkins and various cloud environments.

See you in the next article!
