Using Terraform to Deploy Your Application to Critical Stack

Getting Started

In this lab, we are going to walk you through how to use Terraform to create and deploy resources to your Critical Stack cluster. We will configure a Terraform Kubernetes provider, create a namespace to deploy our resources into, and deploy our application pod. Finally, we will expose the pod using a service.

Requirements: a Critical Stack cluster and a valid kubeconfig file.

Let’s get started!

Step 1: Configure Kubernetes Provider

Before we can use Terraform to deploy anything onto our cluster, we need to set up a Kubernetes provider. To do that, we’re going to use an existing kubeconfig configuration file (if you don’t have one, see the Critical Stack documentation for instructions on obtaining one). We will assume that the file is in its default location, ~/.kube/config, on your computer.

Provider Setup

The easiest way to configure the provider is to create or generate a config in the default location (~/.kube/config). That allows you to leave the provider block completely empty.

provider "kubernetes" {}

If you have more than one context in your config file, you can set a default context up front and avoid all provider configuration:

kubectl config set-context default-system \
  --cluster=chosen-cluster \
  --user=chosen-user

kubectl config use-context default-system

or set up your provider to use a specific context:

provider "kubernetes" {
  config_context_auth_info = "ops"
  config_context_cluster   = "mycluster"
}

For more information on kubeconfig contexts, check out the official Kubernetes documentation.

For more information on setting up a Kubernetes provider for Terraform, check out the official Terraform documentation.
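If your kubeconfig lives somewhere other than the default location, you can also point the provider at it explicitly. A minimal sketch (the path below is just an example; adjust it to wherever your file actually lives):

```hcl
provider "kubernetes" {
  # Hypothetical path - replace with the actual location of your kubeconfig
  config_path = "/path/to/my-kubeconfig"
}
```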

Now that the provider has been specified, you need to initialize Terraform, making it aware of which provider you intend to use and forcing it to download the latest version of the Kubernetes provider plugin. To do so, open a terminal window, type terraform init, and press Enter. You should see something similar to the output below.

$ terraform init

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "kubernetes" (hashicorp/kubernetes) 1.11.3...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.kubernetes: version = "~> 1.11"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
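Following the suggestion in the init output, you could pin the provider version in your configuration so a future major release doesn’t introduce breaking changes unexpectedly:

```hcl
provider "kubernetes" {
  # Accept any 1.x release at or above 1.11, but not 2.0
  version = "~> 1.11"
}
```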

Excellent! Our provider has been successfully configured, so let’s move on to step 2.

Step 2: Create a namespace and deploy our Pod to it

Terraform can be used to provision many Kubernetes resources. For this lab, we are going to use it to provision a namespace, a simple pod, and a LoadBalancer-type service to front our application pod. Go ahead and create a new file called main.tf and paste the code block below into it.

Note: This will create a namespace and a pod whose container listens on port 80.

resource "kubernetes_namespace" "my-ns" {
  metadata {
    name = "tf-deployments"
  }
}

resource "kubernetes_pod" "my-pod" {
  metadata {
    namespace = kubernetes_namespace.my-ns.metadata.0.name
    name      = "my-pod-test"
    labels    = {
      app = "my-pod"
    }
  }

  spec {
    container {
      image = "nginx:latest"
      name  = "my-pod-test"

      port {
        container_port = 80
      }
    }
  }
}

Pro-Tip: A pod contains one or more containers. The scheduler places pods on cluster nodes based on the resources (CPU and memory) the pod requests and the capacity each node has available.
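If you wanted the scheduler to take your container’s needs into account, you could add a resources block inside the container block. A sketch with arbitrary example values (size these for your own workload):

```hcl
container {
  image = "nginx:latest"
  name  = "my-pod-test"

  # Example values only - requests guide scheduling, limits cap usage
  resources {
    requests {
      cpu    = "250m"
      memory = "64Mi"
    }
    limits {
      cpu    = "500m"
      memory = "128Mi"
    }
  }

  port {
    container_port = 80
  }
}
```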

At this point, if you were to plan and apply this via Terraform, you would have a namespace created on the cluster, and a pod (running a single NGINX container) deployed into that namespace. However, as mentioned at the beginning, we also want to expose the application running in the pod via a Service.

Step 3: Expose our Pod via the use of a Service

To expose our pod we need to create a service, so that users can interact with our application. To make it available externally, we will use a service of type LoadBalancer. The service manages the relationship between the load balancer and the pod. Go ahead and append the code block below to your main.tf file.

resource "kubernetes_service" "my-service" {
  metadata {
    namespace = kubernetes_namespace.my-ns.metadata.0.name
    name      = "my-service-test"
  }
  spec {
    selector  = {
      app = kubernetes_pod.my-pod.metadata.0.labels.app
    }
    port {
      port        = 80
      target_port = 80
    }
    type      = "LoadBalancer"
  }
}

output "lb_hostname" {
  value = kubernetes_service.my-service.load_balancer_ingress[0].hostname
}

There are a few things to point out here. First, we are deploying our service to the same namespace we created in the previous step, by specifying the namespace value in the metadata section. Since we created that namespace via Terraform, and our Terraform state is aware of it, we can reference its name attribute directly instead of hardcoding a name value. Second, we do the same with the app label in the selector, which tells the service which pods to route traffic to: any pod in the namespace whose app label matches. Lastly, we define an output to print the fully qualified domain name of the load balancer, making it easy to find the address of our pod.
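To illustrate the difference, both of the metadata lines below point the service at the same namespace, but only the second lets Terraform track the dependency between the two resources:

```hcl
# Hardcoded - works, but Terraform can't see that the service depends on the namespace
namespace = "tf-deployments"

# Referenced - Terraform creates the namespace first and updates this value if its name changes
namespace = kubernetes_namespace.my-ns.metadata.0.name
```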

Step 4: Validate and Plan your deployment

If you have been following along, by now you should have Terraform configured to use a Kubernetes Provider, and a single main.tf file that contains code to deploy three resources: a namespace, a pod, and a service. Let’s go ahead and deploy these now.

Pro-Tip: Always validate your Terraform deployment before planning.

On the same terminal window, go ahead and type:

terraform validate

If there are no syntax errors in your main.tf file, you should receive the following output: Success! The configuration is valid. Let’s go ahead and plan your deployment.

The plan provides an overview of the planned changes; in this case we should see three resources (the namespace, pod, and service) being added. Go ahead and type the following in your terminal window:

terraform plan

Once you press Enter, you should see output similar to the one below.

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # kubernetes_namespace.my-ns will be created
  + resource "kubernetes_namespace" "my-ns" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "tf-deployments"
          + resource_version = (known after apply)
          + self_link        = (known after apply)
          + uid              = (known after apply)
        }
    }

  # kubernetes_pod.my-pod will be created
  + resource "kubernetes_pod" "my-pod" {
      + id = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "app" = "my-pod"
            }
          + name             = "my-pod-test"
          + namespace        = "tf-deployments"
          + resource_version = (known after apply)
          + self_link        = (known after apply)
          + uid              = (known after apply)
        }

      + spec {
          + automount_service_account_token  = false
          + dns_policy                       = "ClusterFirst"
          + host_ipc                         = false
          + host_network                     = false
          + host_pid                         = false
          + hostname                         = (known after apply)
          + node_name                        = (known after apply)
          + restart_policy                   = "Always"
          + service_account_name             = (known after apply)
          + share_process_namespace          = false
          + termination_grace_period_seconds = 30

          + container {
              + image                    = "nginx:latest"
              + image_pull_policy        = (known after apply)
              + name                     = "my-pod-test"
              + stdin                    = false
              + stdin_once               = false
              + termination_message_path = "/dev/termination-log"
              + tty                      = false

              + port {
                  + container_port = 80
                  + protocol       = "TCP"
                }

              + resources {
                  + limits {
                      + cpu    = (known after apply)
                      + memory = (known after apply)
                    }

                  + requests {
                      + cpu    = (known after apply)
                      + memory = (known after apply)
                    }
                }

              + volume_mount {
                  + mount_path        = (known after apply)
                  + mount_propagation = (known after apply)
                  + name              = (known after apply)
                  + read_only         = (known after apply)
                  + sub_path          = (known after apply)
                }
            }

          + image_pull_secrets {
              + name = (known after apply)
            }

          + volume {
              + name = (known after apply)

              + aws_elastic_block_store {
                  + fs_type   = (known after apply)
                  + partition = (known after apply)
                  + read_only = (known after apply)
                  + volume_id = (known after apply)
                }
                ....
            }
        }
    }

  # kubernetes_service.my-service will be created
  + resource "kubernetes_service" "my-service" {
      + id                    = (known after apply)
      + load_balancer_ingress = (known after apply)

      + metadata {
          + generation       = (known after apply)
          + name             = "my-service-test"
          + namespace        = "tf-deployments"
          + resource_version = (known after apply)
          + self_link        = (known after apply)
          + uid              = (known after apply)
        }

      + spec {
          + cluster_ip                  = (known after apply)
          + external_traffic_policy     = (known after apply)
          + publish_not_ready_addresses = false
          + selector                    = {
              + "app" = "my-pod"
            }
          + session_affinity            = "None"
          + type                        = "LoadBalancer"

          + port {
              + node_port   = (known after apply)
              + port        = 80
              + protocol    = "TCP"
              + target_port = "80"
            }
        }
    }

Plan: 3 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Pro-Tip: terraform plan becomes more useful as your infrastructure grows and more components depend on each other; it is especially helpful during updates.

Step 5: Apply the plan to Deploy the App

Almost there! Go ahead and apply your planned changes. In the same terminal window, type:

terraform apply

Pro-Tip: terraform apply takes on all the hard work: creating resources via the API in the right order, supplying defaults as necessary, and waiting for resources to finish provisioning.

You should now see output similar to the one below:

$ terraform apply

kubernetes_namespace.my-ns: Creating...
kubernetes_namespace.my-ns: Creation complete after 0s [id=tf-deployments]
kubernetes_pod.my-pod: Creating...
kubernetes_pod.my-pod: Creation complete after 7s [id=tf-deployments/my-pod-test]
kubernetes_service.my-service: Creating...
kubernetes_service.my-service: Creation complete after 3s [id=tf-deployments/my-service-test]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

Outputs:

lb_hostname = a9f1bdf477d3b45a7a865899ff77a069-275453380.us-east-1.elb.amazonaws.com

Step 6: Verify That The Application Is Working

Having done all the necessary configuration, let’s verify that everything actually works. We can check that the application is responding by using curl from the terminal:

$ curl -sI $(terraform output lb_hostname)

You should see a response similar to the one below:

HTTP/1.1 200 OK
Server: nginx/1.19.1
Date: Fri, 31 Jul 2020 18:02:06 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 07 Jul 2020 15:52:25 GMT
Connection: keep-alive
ETag: "5f049a39-264"
Accept-Ranges: bytes

Or if you prefer, you can open your favorite browser and enter the fully qualified domain name. If everything worked as it should, you should see your NGINX welcome page.

Cleaning Up

We are done with this lab. You can remove all of the resources we have deployed by simply running the following command:

terraform destroy

This is similar to terraform plan, but it provides an overview of all of the resources tracked in your terraform.tfstate file and prompts you to confirm before deleting them.

You should see output similar to the one below:

$ terraform destroy
kubernetes_namespace.my-ns: Refreshing state... [id=tf-deployments]
kubernetes_pod.my-pod: Refreshing state... [id=tf-deployments/my-pod-test]
kubernetes_service.my-service: Refreshing state... [id=tf-deployments/my-service-test]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # kubernetes_namespace.my-ns will be destroyed
  - resource "kubernetes_namespace" "my-ns" {
      - id = "tf-deployments" -> null

      - metadata {
          - annotations      = {} -> null
          - generation       = 0 -> null
          - labels           = {} -> null
          - name             = "tf-deployments" -> null
          - resource_version = "7283013" -> null
          - self_link        = "/api/v1/namespaces/tf-deployments" -> null
          - uid              = "c3682775-cd93-4930-914f-16fb92ff76d1" -> null
        }
    }

  # kubernetes_pod.my-pod will be destroyed
  - resource "kubernetes_pod" "my-pod" {
      - id = "tf-deployments/my-pod-test" -> null

      - metadata {
          - annotations      = {} -> null
          - generation       = 0 -> null
          - labels           = {
              - "app" = "my-pod"
            } -> null
          - name             = "my-pod-test" -> null
          - namespace        = "tf-deployments" -> null
          - resource_version = "7283319" -> null
          - self_link        = "/api/v1/namespaces/tf-deployments/pods/my-pod-test" -> null
          - uid              = "f2589df9-6816-43bf-90af-c4c0579abeb6" -> null
        }

      - spec {
          - active_deadline_seconds          = 0 -> null
          - automount_service_account_token  = false -> null
          - dns_policy                       = "ClusterFirst" -> null
          - host_ipc                         = false -> null
          - host_network                     = false -> null
          - host_pid                         = false -> null
          - node_name                        = "ip-10-0-4-101.ec2.internal" -> null
          - node_selector                    = {} -> null
          - restart_policy                   = "Always" -> null
          - service_account_name             = "default" -> null
          - share_process_namespace          = false -> null
          - termination_grace_period_seconds = 30 -> null

          - container {
              - args                     = [] -> null
              - command                  = [] -> null
              - image                    = "nginx:latest" -> null
              - image_pull_policy        = "IfNotPresent" -> null
              - name                     = "my-pod-test" -> null
              - stdin                    = false -> null
              - stdin_once               = false -> null
              - termination_message_path = "/dev/termination-log" -> null
              - tty                      = false -> null

              - port {
                  - container_port = 80 -> null
                  - host_port      = 0 -> null
                  - protocol       = "TCP" -> null
                }

              - resources {
                }
            }
        }
    }

  # kubernetes_service.my-service will be destroyed
  - resource "kubernetes_service" "my-service" {
      - id                    = "tf-deployments/my-service-test" -> null
      - load_balancer_ingress = [
          - {
              - hostname = "a9f1bdf477d3b45a7a865899ff77a069-275453380.us-east-1.elb.amazonaws.com"
              - ip       = ""
            },
        ] -> null

      - metadata {
          - annotations      = {} -> null
          - generation       = 0 -> null
          - labels           = {} -> null
          - name             = "my-service-test" -> null
          - namespace        = "tf-deployments" -> null
          - resource_version = "7283347" -> null
          - self_link        = "/api/v1/namespaces/tf-deployments/services/my-service-test" -> null
          - uid              = "9f1bdf47-7d3b-45a7-a865-899ff77a069c" -> null
        }

      - spec {
          - cluster_ip                  = "10.254.86.52" -> null
          - external_ips                = [] -> null
          - external_traffic_policy     = "Cluster" -> null
          - load_balancer_source_ranges = [] -> null
          - publish_not_ready_addresses = false -> null
          - selector                    = {
              - "app" = "my-pod"
            } -> null
          - session_affinity            = "None" -> null
          - type                        = "LoadBalancer" -> null

          - port {
              - node_port   = 30351 -> null
              - port        = 80 -> null
              - protocol    = "TCP" -> null
              - target_port = "80" -> null
            }
        }
    }

Plan: 0 to add, 0 to change, 3 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value:

Pro-Tip: you can also destroy individual resources with the -target flag, but for this exercise we will delete them all.

Now, go ahead and type yes and press Enter, and Terraform will delete all three resources. Your output should look similar to the one below:

kubernetes_service.my-service: Destroying... [id=tf-deployments/my-service-test]
kubernetes_service.my-service: Destruction complete after 1s
kubernetes_pod.my-pod: Destroying... [id=tf-deployments/my-pod-test]
kubernetes_pod.my-pod: Destruction complete after 3s
kubernetes_namespace.my-ns: Destroying... [id=tf-deployments]
kubernetes_namespace.my-ns: Still destroying... [id=tf-deployments, 10s elapsed]
kubernetes_namespace.my-ns: Still destroying... [id=tf-deployments, 20s elapsed]
kubernetes_namespace.my-ns: Still destroying... [id=tf-deployments, 30s elapsed]
kubernetes_namespace.my-ns: Still destroying... [id=tf-deployments, 40s elapsed]
kubernetes_namespace.my-ns: Still destroying... [id=tf-deployments, 50s elapsed]
kubernetes_namespace.my-ns: Still destroying... [id=tf-deployments, 1m0s elapsed]
kubernetes_namespace.my-ns: Destruction complete after 1m3s

Destroy complete! Resources: 3 destroyed.

Summary

Great work! In this lab you learned how to configure a Kubernetes provider for Terraform, and how to use Terraform to deploy resources to your Critical Stack cluster.