Managing DigitalOcean Infrastructure using Terraform

Managing DigitalOcean Infrastructure (Servers, Load Balancers, DNS) using Terraform


Table of contents

  • DigitalOcean Droplets (servers)
  • DigitalOcean Load Balancers
  • DigitalOcean DNS domains and records

Managing DigitalOcean Infrastructure using Terraform

The DigitalOcean provider lets Terraform interact with the DigitalOcean API to build out infrastructure. This provider supports creating various DigitalOcean resources, including the following:

  • digitalocean_droplet: Droplets (servers)
  • digitalocean_loadbalancer: Load Balancers
  • digitalocean_domain: DNS domain entries
  • digitalocean_record: DNS records

www-1.tf file

Provider Configuration (digitalocean)

hcl
# www-1.tf
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
  required_version = ">= 1.0"
}

# Provider
provider "digitalocean" {
  token = var.do_token
}

# Variables
variable "do_token" {
  type      = string
  sensitive = true
}

variable "pvt_key" {
  type      = string
  sensitive = true
}
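
Terraform will prompt for these variables on every run unless you supply values. One option (a sketch; the file name and placeholder values below are assumptions, not part of the original setup) is a terraform.tfvars file kept out of version control:

hcl
# terraform.tfvars (do not commit this file)
do_token = "dop_v1_xxxxxxxxxxxxxxxx"      # your DigitalOcean personal access token
pvt_key  = "/home/you/.ssh/terraform_ssh" # absolute path to the private key (Terraform's file() does not expand ~)

With the provider block in place, initialize the working directory so Terraform downloads the DigitalOcean provider:

sh
terraform init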

Finally, you’ll want to have Terraform automatically add your SSH key to any new Droplets you create. When you added your SSH key to DigitalOcean, you gave it a name. Terraform can use this name to retrieve the public key. Add these lines, replacing terraform_ssh with the name of the key you provided in your DigitalOcean account:

hcl
data "digitalocean_ssh_key" "terraform" {
  name = "terraform_ssh"
}

name = "terraform_ssh": The name of the SSH key you previously added to your DigitalOcean account. Terraform uses this name to locate the public key associated with it.

Create a web server

hcl
# www-1.tf
resource "digitalocean_droplet" "www-1" {
  name   = "www-1"
  region = "nyc3"
  size   = "s-1vcpu-1gb"
  image  = "ubuntu-22-04-x64"

  ssh_keys = [
    data.digitalocean_ssh_key.terraform.id
  ]
}

To let Terraform connect to the new server via SSH, add the following connection block inside the digitalocean_droplet resource (connection settings for provisioners live inside the resource they provision):

hcl
# www-1.tf
connection {
  host = self.ipv4_address
  user = "root"
  type = "ssh"
  private_key = file(var.pvt_key)
  timeout = "2m"
}

Still inside the resource block, configure the remote-exec provisioner to install Nginx on the new Droplet:

hcl
# www-1.tf
# install nginx
provisioner "remote-exec" {
  inline = [
    "export PATH=$PATH:/usr/bin",
    "sudo apt update",
    "sudo apt install -y nginx"
  ]
}
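
Assembled, the full Droplet resource looks roughly like this (a sketch that simply nests the fragments shown above; nothing new is added):

hcl
# www-1.tf
resource "digitalocean_droplet" "www-1" {
  name   = "www-1"
  region = "nyc3"
  size   = "s-1vcpu-1gb"
  image  = "ubuntu-22-04-x64"

  ssh_keys = [
    data.digitalocean_ssh_key.terraform.id
  ]

  # How Terraform connects to the Droplet while provisioning
  connection {
    host        = self.ipv4_address
    user        = "root"
    type        = "ssh"
    private_key = file(var.pvt_key)
    timeout     = "2m"
  }

  # Install nginx once the Droplet is reachable
  provisioner "remote-exec" {
    inline = [
      "export PATH=$PATH:/usr/bin",
      "sudo apt update",
      "sudo apt install -y nginx"
    ]
  }
}

Running terraform apply at this point creates and provisions the Droplet:
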
sh
Output
digitalocean_droplet.www-1: Creating...

You can scale out to as many web servers as you need by defining additional Droplet resources; the load balancer below assumes three of them: www-1, www-2, and www-3.
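
A sketch of the two additional Droplets, plus an ip_addresses output map (the output name matches the plan output shown later; its exact shape here is an assumption):

hcl
# www-2.tf / www-3.tf: same shape as www-1 (connection and provisioner blocks omitted here for brevity)
resource "digitalocean_droplet" "www-2" {
  name   = "www-2"
  region = "nyc3"
  size   = "s-1vcpu-1gb"
  image  = "ubuntu-22-04-x64"

  ssh_keys = [
    data.digitalocean_ssh_key.terraform.id
  ]
}

resource "digitalocean_droplet" "www-3" {
  name   = "www-3"
  region = "nyc3"
  size   = "s-1vcpu-1gb"
  image  = "ubuntu-22-04-x64"

  ssh_keys = [
    data.digitalocean_ssh_key.terraform.id
  ]
}

# Map of Droplet names to their public IPv4 addresses
output "ip_addresses" {
  value = {
    "www-1" = digitalocean_droplet.www-1.ipv4_address
    "www-2" = digitalocean_droplet.www-2.ipv4_address
    "www-3" = digitalocean_droplet.www-3.ipv4_address
  }
}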

Add a DigitalOcean Load Balancer

hcl
# loadbalancer.tf
resource "digitalocean_loadbalancer" "www_lb" {
  name   = "www-loadbalancer"
  region = "nyc3"

  forwarding_rule {
    entry_port      = 80
    entry_protocol  = "http"
    target_port     = 80
    target_protocol = "http"
  }

  healthcheck {
    port     = 80
    protocol = "http"
    path     = "/"
    check_interval_seconds   = 10
    response_timeout_seconds = 5
    unhealthy_threshold      = 3
    healthy_threshold        = 5
  }

  droplet_ids = [
    digitalocean_droplet.www-1.id,
    digitalocean_droplet.www-2.id,
    digitalocean_droplet.www-3.id
  ]
}

# Output Load Balancer IP
output "load_balancer_ip" {
  value = digitalocean_loadbalancer.www_lb.ip
}

The Load Balancer definition specifies its name, the region it will be created in, the ports and protocols it listens on and forwards to, the health check configuration, and the IDs of the Droplets it should balance traffic across (droplet_ids), which you reference directly from the Droplet resources.

resource "digitalocean_loadbalancer" "www_lb"

region: Specifies the DigitalOcean datacenter where the load balancer will be created (nyc3 in this case). It must match the region of your droplets: a DigitalOcean Load Balancer can only forward traffic to Droplets in its own region.

forwarding_rule

  • Purpose: Configures how the load balancer forwards incoming traffic to your droplets.
  • Key parameters:
      • entry_port: The port on the load balancer where traffic is received (port 80 for HTTP).
      • entry_protocol: Protocol for the incoming connection (http for web traffic).
      • target_port: The port on the droplets to which traffic is forwarded (port 80 for the web server).
      • target_protocol: Protocol for the forwarded traffic (http). An HTTPS variant is sketched just after this list.
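
If you later terminate TLS at the load balancer, a second forwarding rule can accept HTTPS traffic and forward plain HTTP to the Droplets. A minimal sketch, assuming a certificate already managed in your DigitalOcean account under the hypothetical name multividas-cert:

hcl
# Additional rule inside the digitalocean_loadbalancer resource
forwarding_rule {
  entry_port       = 443
  entry_protocol   = "https"
  target_port      = 80
  target_protocol  = "http"
  certificate_name = "multividas-cert" # hypothetical certificate name, managed separately
}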

healthcheck

  • Purpose: Defines how the load balancer checks the health of the backend droplets to ensure traffic is only routed to healthy instances.
  • Key parameters:
      • port: The port on the droplets that the load balancer uses for health checks (80).
      • protocol: The protocol for health checks (http).
      • path: The URL path used to check health (/, the root endpoint of the web server).
      • check_interval_seconds: Time in seconds between health checks (10 seconds in this case).
      • response_timeout_seconds: How long in seconds to wait for a health check response (5 seconds).
      • unhealthy_threshold: Number of consecutive failed checks before a droplet is marked unhealthy (3 checks).
      • healthy_threshold: Number of consecutive successful checks before a droplet is marked healthy (5 checks).

droplet_ids

IDs of the droplets that the load balancer should balance traffic between.
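
If you instead create the Droplets with count rather than as three named resources, the same list can be built with a splat expression. A sketch, assuming a hypothetical counted resource named www:

hcl
# Assumes: resource "digitalocean_droplet" "www" { count = 3 ... }
droplet_ids = digitalocean_droplet.www[*].id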

Plan and review changes

sh
terraform plan -out=planfile

Output;

Plan: 4 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + ip_addresses     = {
      + www-1 = (known after apply)
      + www-2 = (known after apply)
      + www-3 = (known after apply)
    }
  + load_balancer_ip = (known after apply)

Use terraform show terraform.tfstate to locate the IP address of your Load Balancer:

sh
terraform show terraform.tfstate

Output;

# digitalocean_droplet.www-1:
resource "digitalocean_droplet" "www-1" {
  ...
}

# digitalocean_droplet.www-2:
resource "digitalocean_droplet" "www-2" {
  ...
}

# digitalocean_droplet.www-3:
resource "digitalocean_droplet" "www-3" {
  ...
}

# digitalocean_loadbalancer.www_lb:
resource "digitalocean_loadbalancer" "www_lb" {
  ...

  forwarding_rule {
    ...
  }

  healthcheck {
    ...
  }

  sticky_sessions {
    ...
  }
}


Outputs:

ip_addresses = {
  "www-1" = "40.45.xx.x"
  "www-2" = "40.45.xx.x"
  "www-3" = "40.45.xx.x"
}
load_balancer_ip = "x.x.x.x"

Apply the changes

sh
terraform apply -var "pvt_key=$HOME/.ssh/terraform_ssh"

Alternatively, apply the saved plan directly with terraform apply planfile, or keep the pvt_key path in the terraform.tfvars file sketched earlier so you don't have to pass it on every run.

Outputs;

Apply complete! Resources: 4 added, 0 changed, 3 destroyed.

ip_addresses = {
  "www-1" = "40.45.xx.x"
  "www-2" = "40.45.xx.x"
  "www-3" = "40.45.xx.x"
}
load_balancer_ip = "x.x.x.x"

Creating DNS Domains and Records with Terraform

Terraform can manage DNS domains and records in addition to resources like Droplets and Load Balancers. For instance, to point a domain to a Load Balancer, you can define a configuration to establish this relationship.

Example: Pointing multividas.com to a Load Balancer

Create a file named domain_root.tf with the following configuration:

hcl
# domain_root.tf
resource "digitalocean_domain" "default" {
  name = "multividas.com"
  ip_address = digitalocean_loadbalancer.www_lb.ip
}

Adding a CNAME Record for www.multividas.com

To create a CNAME record that points www.multividas.com to multividas.com, define the following in a new file called domain_cname.tf:

hcl
# domain_cname.tf
resource "digitalocean_record" "CNAME-www" {
  domain = digitalocean_domain.default.name
  type   = "CNAME"
  name   = "www"
  value  = "@"
}

Applying the Configuration

After adding these DNS entries, run the following commands to plan and apply the changes:

sh
terraform plan
terraform apply
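
Once DigitalOcean is serving your domain's DNS, you can confirm the new records from a terminal (the hostnames are the ones configured above):

sh
# Apex A record should resolve to the load balancer IP
dig +short multividas.com A

# www should be a CNAME pointing at the apex
dig +short www.multividas.com CNAME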

When you navigate to your domain name, you'll initially see an Nginx welcome screen. This is because the domain is pointed to the Load Balancer, which directs traffic to one of the three Nginx servers. To deploy content, you can use an Ansible playbook.

Example: Deploying index.php to Nginx Servers

yaml
---

- name: Deploy index.php to Nginx servers
  hosts: webservers
  become: yes
  tasks:
    - name: Copy index.php to Nginx default directory
      copy:
        src: files/index.php
        dest: /var/www/html/index.php
        owner: www-data
        group: www-data
        mode: '0644'

Run the playbook:

sh
ansible-playbook --key-file ~/.ssh/id_ansible -i inventory ansible-playbooks/deploy_index.yml
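
The -i inventory flag points at an Ansible inventory file that isn't shown here. A minimal sketch of what it might contain, grouping the three Droplets under the webservers group used by the playbook (the IPs stand in for the values from the Terraform output):

ini
# inventory
[webservers]
www-1 ansible_host=40.45.xx.x
www-2 ansible_host=40.45.xx.x
www-3 ansible_host=40.45.xx.x

[webservers:vars]
ansible_user=root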

Repository

Find additional configurations and examples in the repository: Soulaimane Yahya - Ansible-DigitalOcean

Destroying Your Infrastructure with Terraform

In development environments where infrastructure is frequently created and destroyed, Terraform provides a way to tear down resources it manages. While this is less common in production, it can be useful for testing and development purposes.

Step 1: Create a Destruction Plan

sh
terraform plan -destroy -out=terraform.tfplan \
  -var "pvt_key=$HOME/.ssh/terraform_ssh"

Terraform will display a plan with resources marked in red and prefixed with a minus sign (-), indicating they will be removed.

Step 2: Apply the Destruction Plan

Execute the destruction plan with the following command:

sh
terraform apply terraform.tfplan

Terraform will then destroy the resources as outlined in the generated plan.

Additional Resources