Build a Private DNS Resolver with Pi-hole + Unbound on AWS using Terraform

Complete step-by-step guide to deploying a private DNS resolver and ad-blocker using Pi-hole and Unbound on AWS EC2 with Tailscale access - all managed through Terraform Infrastructure as Code.

Tech Stack

Terraform
AWS
Docker
Pi-hole

Goal: Deploy a private DNS resolver that blocks ads and trackers across all your devices, accessible via Tailscale, costing under $8/month with zero manual configuration.


Why This Architecture?

  • ISP DNS tracking & logging → Unbound provides recursive DNS resolution without third-party logging.
  • Ads and trackers on all devices → Pi-hole blocks ad, tracker, and malicious domains at the DNS level, network-wide.
  • Complex VPN setup for remote access → Tailscale provides zero-config mesh networking with WireGuard.
  • Manual server maintenance & updates → Terraform codifies everything; rebuild with the latest AMI in minutes.
  • Expensive always-on infrastructure → a t4g.nano ARM instance costs ~$4/month with minimal resource usage.

High‑Level Architecture

[Diagram: Private DNS Resolver Architecture]

Your devices (phone, laptop, tablet) send DNS queries over the Tailscale mesh network, a zero-config WireGuard VPN that handles device authentication and encrypted tunnels. The tunnel terminates in an AWS VPC (10.0.0.0/16) in us-east-1, where a t4g.nano ARM EC2 instance in a private subnet runs two Docker containers: Pi-hole (DNS filter, port 53), which forwards to Unbound (recursive resolver, port 5335). VPC endpoints (SSM, CloudWatch, S3) connect the instance to SSM Parameter Store for secrets and CloudWatch Logs for monitoring, with no internet-facing infrastructure. The whole stack is managed by the Terraform CLI with remote state.

Key Benefits:

  • Privacy-first: No DNS queries leave your control
  • Network-wide blocking: Ads blocked on all devices automatically
  • Zero-trust security: No open ports - Tailscale handles all access control
  • Cost-effective: ~$8/month for complete DNS infrastructure
  • Infrastructure as Code: Everything version-controlled and reproducible
  • Global access: Secure DNS from anywhere via Tailscale

All resources are created in us‑east‑1 for optimal cost and performance.


Prerequisites

  • AWS CLI configured with appropriate credentials
  • Terraform ≥ 1.0 installed locally
  • Tailscale account (free tier sufficient)
  • Basic familiarity with Docker and DNS concepts

1 Set Up Tailscale Authentication

  1. Create a Tailscale account at tailscale.com (free tier works perfectly).
  2. Generate an ephemeral auth key in the admin console:
    • Go to Settings → Keys
    • Click Generate auth key
    • Enable Ephemeral (key expires when device disconnects)
    • Copy the key (starts with tskey-auth-)
  3. Store the key in AWS SSM Parameter Store:
    
    aws ssm put-parameter \
      --name "/pihole/tailscale-auth-key" \
      --value "tskey-auth-your-key-here" \
      --type "SecureString" \
      --region us-east-1
    

Security Note: Ephemeral keys automatically clean up disconnected devices and are perfect for Infrastructure as Code deployments.
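If you would rather keep the parameter itself under Terraform management instead of creating it with the CLI command above, a sketch like the following works. Caveat: the key's value then lands in your Terraform state, so only do this with an encrypted, access-controlled remote backend. Resource and variable names here are illustrative, not from the repo.

```hcl
variable "tailscale_auth_key" {
  description = "Ephemeral Tailscale auth key (tskey-auth-...)"
  type        = string
  sensitive   = true
}

resource "aws_ssm_parameter" "tailscale_auth_key" {
  name  = "/pihole/tailscale-auth-key"
  type  = "SecureString"
  value = var.tailscale_auth_key
}
```

Pass the value at apply time (for example via `TF_VAR_tailscale_auth_key`) rather than committing it to a tfvars file.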


2 Infrastructure as Code

Here are the key Terraform files that create our private DNS infrastructure. You can grab the complete versions from my repo.

main.tf — Provider & Configuration
terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  # Optional: Configure remote state
  # backend "s3" {
  #   bucket = "your-terraform-state-bucket"
  #   key    = "pihole/terraform.tfstate"
  #   region = "us-east-1"
  # }
}

provider "aws" {
  region = var.aws_region

  default_tags {
    tags = {
      Project     = "pihole-dns"
      Environment = "prod"
      ManagedBy   = "terraform"
    }
  }
}

locals {
  name_prefix = "pihole"
  vpc_cidr    = "10.0.0.0/16"

  # Shared tags referenced by merge(local.common_tags, ...) in the other files
  common_tags = {
    Project   = "pihole-dns"
    ManagedBy = "terraform"
  }

  private_subnets = [
    {
      cidr = "10.0.1.0/24"
      az   = "${var.aws_region}a"
    },
    {
      cidr = "10.0.2.0/24"
      az   = "${var.aws_region}b"
    }
  ]
}
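As a quick sanity check (separate from the Terraform code), Python's standard ipaddress module can confirm that the subnet CIDRs in locals fit inside the VPC CIDR and don't overlap:

```python
import ipaddress

# VPC and subnet CIDRs copied from the locals block above
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = [ipaddress.ip_network(c) for c in ("10.0.1.0/24", "10.0.2.0/24")]

# Every subnet must sit inside the VPC CIDR...
assert all(s.subnet_of(vpc) for s in subnets)
# ...and no two subnets may overlap each other
assert not subnets[0].overlaps(subnets[1])
print("subnet layout OK")
```

The same check is worth rerunning whenever you carve out additional subnets.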
variables.tf — Input Variables
variable "aws_region" {
  description = "AWS region for deployment"
  type        = string
  default     = "us-east-1"
}

variable "instance_type" {
  description = "EC2 instance type for Pi-hole server"
  type        = string
  default     = "t4g.nano"

  validation {
    condition = contains([
      "t4g.nano", "t4g.micro", "t4g.small"
    ], var.instance_type)
    error_message = "Instance type must be a supported ARM instance."
  }
}

variable "tailscale_auth_key_ssm_path" {
  description = "SSM Parameter Store path for Tailscale auth key"
  type        = string
  default     = "/pihole/tailscale-auth-key"
}

variable "pihole_admin_password" {
  description = "Admin password for Pi-hole web interface"
  type        = string
  default     = "changeme123!"
  sensitive   = true
}

variable "log_retention_days" {
  description = "CloudWatch log retention period in days"
  type        = number
  default     = 7
}

# Referenced by ec2.tf and the cloud-init template below
variable "root_volume_size" {
  description = "Root EBS volume size in GiB"
  type        = number
  default     = 8
}

variable "timezone" {
  description = "Timezone for the instance and containers"
  type        = string
  default     = "UTC"
}
vpc.tf — Private Network & VPC Endpoints
# Create VPC for Pi-hole deployment
resource "aws_vpc" "pihole_vpc" {
  cidr_block           = local.vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = merge(local.common_tags, {
    Name = "${local.name_prefix}-vpc"
  })
}

# Create private subnets (no public IP assignment)
resource "aws_subnet" "private_subnets" {
  count = length(local.private_subnets)

  vpc_id            = aws_vpc.pihole_vpc.id
  cidr_block        = local.private_subnets[count.index].cidr
  availability_zone = local.private_subnets[count.index].az

  # Critical: No public IPs
  map_public_ip_on_launch = false

  tags = merge(local.common_tags, {
    Name = "${local.name_prefix}-private-subnet-${count.index + 1}"
    Type = "private"
  })
}

# VPC Endpoints for AWS services (avoid NAT Gateway costs)
resource "aws_vpc_endpoint" "ssm" {
  vpc_id              = aws_vpc.pihole_vpc.id
  service_name        = "com.amazonaws.${var.aws_region}.ssm"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = aws_subnet.private_subnets[*].id
  security_group_ids  = [aws_security_group.vpc_endpoints.id]
  private_dns_enabled = true

  tags = merge(local.common_tags, {
    Name = "${local.name_prefix}-ssm-endpoint"
  })
}

# S3 Gateway Endpoint (for Docker images, packages)
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = aws_vpc.pihole_vpc.id
  service_name      = "com.amazonaws.${var.aws_region}.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = [aws_route_table.private_rt.id]

  tags = merge(local.common_tags, {
    Name = "${local.name_prefix}-s3-endpoint"
  })
}
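The endpoint resources above reference a private route table and an endpoint security group that the excerpt doesn't show. A minimal sketch of what those might look like, with names assumed to match the references (the real versions live in the repo):

```hcl
# Route table for the private subnets. Note there is no 0.0.0.0/0 route:
# the S3 gateway endpoint injects its own prefix-list route automatically.
resource "aws_route_table" "private_rt" {
  vpc_id = aws_vpc.pihole_vpc.id
}

resource "aws_route_table_association" "private" {
  count          = length(aws_subnet.private_subnets)
  subnet_id      = aws_subnet.private_subnets[count.index].id
  route_table_id = aws_route_table.private_rt.id
}

# Interface endpoints only need HTTPS reachable from inside the VPC
resource "aws_security_group" "vpc_endpoints" {
  name_prefix = "${local.name_prefix}-endpoints-"
  vpc_id      = aws_vpc.pihole_vpc.id

  ingress {
    description = "HTTPS from VPC"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = [local.vpc_cidr]
  }
}
```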
security_groups.tf — Network Security
# Security group for Pi-hole EC2 instance
resource "aws_security_group" "pihole_instance" {
  name_prefix = "${local.name_prefix}-instance-"
  vpc_id      = aws_vpc.pihole_vpc.id
  description = "Security group for Pi-hole EC2 instance - Tailscale handles access control"

  # No inbound rules needed!
  # Tailscale creates encrypted tunnels and handles all access control
  # The Tailscale agent on the instance will accept connections through the mesh network

  # All outbound traffic allowed (needed for package updates, Docker pulls, etc.)
  egress {
    description = "All outbound traffic"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = merge(local.common_tags, {
    Name = "${local.name_prefix}-instance-sg"
  })
}

The heart of our deployment is the EC2 instance that runs Pi-hole and Unbound containers via cloud-init automation.

ec2.tf — Instance & Cloud-Init Configuration
# Get latest Amazon Linux 2023 ARM64 AMI
data "aws_ami" "amazon_linux_arm64" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-*-arm64"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

# EC2 instance for Pi-hole DNS resolver
resource "aws_instance" "pihole" {
  ami                     = data.aws_ami.amazon_linux_arm64.id
  instance_type           = var.instance_type
  subnet_id               = aws_subnet.private_subnets[0].id
  vpc_security_group_ids  = [aws_security_group.pihole_instance.id]
  iam_instance_profile    = aws_iam_instance_profile.pihole_instance_profile.name

  # Critical: No public IP
  associate_public_ip_address = false

  # Root volume configuration
  root_block_device {
    volume_type           = "gp3"
    volume_size           = var.root_volume_size
    delete_on_termination = true
    encrypted             = true
  }

  # IMDSv2 only for security
  metadata_options {
    http_endpoint               = "enabled"
    http_tokens                 = "required"
    http_put_response_hop_limit = 1
  }

  # Cloud-init handles all configuration
  user_data = base64encode(templatefile("${path.module}/templates/cloud-init.yaml", {
    aws_region                  = var.aws_region
    tailscale_auth_key_ssm_path = var.tailscale_auth_key_ssm_path
    pihole_admin_password       = var.pihole_admin_password
    timezone                    = var.timezone
    log_group_pihole            = aws_cloudwatch_log_group.pihole_logs.name
    log_group_unbound           = aws_cloudwatch_log_group.unbound_logs.name
  }))

  tags = merge(local.common_tags, {
    Name = "${local.name_prefix}-dns-resolver"
  })
}
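The instance profile referenced in ec2.tf isn't shown in the excerpt either. Here is a hedged sketch of the least-privilege role it likely wraps: Session Manager access plus read access to the single Tailscale auth key parameter (resource names assumed from the reference; log permissions omitted for brevity):

```hcl
resource "aws_iam_role" "pihole_instance_role" {
  name_prefix = "${local.name_prefix}-instance-"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# SSM Session Manager access without SSH keys
resource "aws_iam_role_policy_attachment" "ssm_core" {
  role       = aws_iam_role.pihole_instance_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}

# Least privilege: read only the Tailscale auth key parameter
resource "aws_iam_role_policy" "read_auth_key" {
  name = "read-tailscale-auth-key"
  role = aws_iam_role.pihole_instance_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["ssm:GetParameter"]
      Resource = "arn:aws:ssm:${var.aws_region}:*:parameter${var.tailscale_auth_key_ssm_path}"
    }]
  })
}

resource "aws_iam_instance_profile" "pihole_instance_profile" {
  name_prefix = "${local.name_prefix}-"
  role        = aws_iam_role.pihole_instance_role.name
}
```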
cloudwatch.tf — Logging & Monitoring
# CloudWatch Log Groups with cost-optimized retention
resource "aws_cloudwatch_log_group" "pihole_logs" {
  name              = "/aws/ec2/pihole"
  retention_in_days = var.log_retention_days

  tags = merge(local.common_tags, {
    Name = "${local.name_prefix}-logs"
  })
}

resource "aws_cloudwatch_log_group" "unbound_logs" {
  name              = "/aws/ec2/unbound"
  retention_in_days = var.log_retention_days

  tags = merge(local.common_tags, {
    Name = "${local.name_prefix}-unbound-logs"
  })
}

# Metric filter for blocked queries
resource "aws_cloudwatch_log_metric_filter" "pihole_blocked_queries" {
  name           = "${local.name_prefix}-blocked-queries"
  log_group_name = aws_cloudwatch_log_group.pihole_logs.name
  pattern        = "[timestamp, request_type=\"query\", client_ip, domain, query_type, reply_type=\"blocked\", ...]"

  metric_transformation {
    name      = "PiholeBlockedQueries"
    namespace = "PiHole/DNS"
    value     = "1"
  }
}

# CloudWatch Dashboard
resource "aws_cloudwatch_dashboard" "pihole_dashboard" {
  dashboard_name = "${local.name_prefix}-dns-monitoring"

  dashboard_body = jsonencode({
    widgets = [
      {
        type   = "metric"
        x      = 0
        y      = 0
        width  = 12
        height = 6

        properties = {
          metrics = [
            ["PiHole/DNS", "PiholeTotalQueries"],
            [".", "PiholeBlockedQueries"]
          ]
          view    = "timeSeries"
          region  = var.aws_region
          title   = "DNS Query Volume"
        }
      }
    ]
  })
}
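The deployment steps later in this post read the admin password from Terraform outputs, but the excerpt doesn't include an outputs.tf. A minimal sketch (output names assumed; check the repo for the real ones):

```hcl
output "pihole_private_ip" {
  description = "Private IP of the Pi-hole instance (VPC-internal)"
  value       = aws_instance.pihole.private_ip
}

output "pihole_admin_password" {
  description = "Pi-hole web UI password; view with: terraform output pihole_admin_password"
  value       = var.pihole_admin_password
  sensitive   = true
}

output "instance_id" {
  description = "Instance ID for SSM Session Manager access"
  value       = aws_instance.pihole.id
}
```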

Key cloud-init tasks:

  1. Install Docker & Tailscale - Latest versions from official sources
  2. Retrieve Tailscale auth key - Securely from SSM Parameter Store
  3. Configure containers - Pi-hole + Unbound via Docker Compose
  4. Set up logging - CloudWatch agent for centralized logs
  5. Create systemd services - Auto-start containers on boot
templates/cloud-init.yaml — Complete Automation (Excerpt)
#cloud-config
timezone: ${timezone}

package_update: true
packages:
  - docker
  - awscli

runcmd:
  # Install Docker Compose
  - curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  - chmod +x /usr/local/bin/docker-compose

  # Install Tailscale
  - curl -fsSL https://tailscale.com/install.sh | sh

  # Get auth key and connect
  - |
    TAILSCALE_KEY=$(aws ssm get-parameter --name "${tailscale_auth_key_ssm_path}" --with-decryption --region "${aws_region}" --query 'Parameter.Value' --output text)
    tailscale up --authkey="$TAILSCALE_KEY" --hostname="pihole-dns" --ssh --reset

  # Create directories
  - mkdir -p /opt/pihole /var/lib/pihole /var/lib/unbound /etc/unbound

write_files:
  # Docker Compose configuration
  - path: /opt/pihole/docker-compose.yml
    content: |
      version: '3.8'
      services:
        unbound:
          image: mvance/unbound:latest
          container_name: unbound
          restart: unless-stopped
          ports:
            # Host port 5335 -> container port 53 (optional, handy for debugging from the host)
            - "5335:53/tcp"
            - "5335:53/udp"
          networks:
            pihole_network:
              # Static IP so Pi-hole can reference this container as its upstream
              ipv4_address: 172.20.0.2

        pihole:
          image: pihole/pihole:latest
          container_name: pihole
          restart: unless-stopped
          ports:
            - "53:53/tcp"
            - "53:53/udp"
            - "80:80/tcp"
          environment:
            TZ: '${timezone}'
            WEBPASSWORD: '${pihole_admin_password}'
            PIHOLE_DNS_: '172.20.0.2#53' # Unbound's container port is 53; 5335 is only the host-side mapping
            DNSSEC: 'true'
          volumes:
            - /var/lib/pihole:/etc/pihole
          networks:
            pihole_network:
              ipv4_address: 172.20.0.3
          depends_on:
            - unbound

      networks:
        pihole_network:
          driver: bridge
          ipam:
            config:
              - subnet: 172.20.0.0/24
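The cloud-init task list above mentions a systemd unit that auto-starts the containers on boot, which the excerpt omits. A minimal sketch of what such a unit might look like (unit name assumed; paths taken from the compose setup above):

```ini
# /etc/systemd/system/pihole-stack.service (name assumed)
[Unit]
Description=Pi-hole + Unbound container stack
Requires=docker.service
After=docker.service network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/pihole
ExecStart=/usr/local/bin/docker-compose up -d
ExecStop=/usr/local/bin/docker-compose down

[Install]
WantedBy=multi-user.target
```

Enable it once with `systemctl enable --now pihole-stack`.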

3 Cost Snapshot

Private DNS for Under $8/Month

~$8/month
Complete DNS infrastructure with ad-blocking

Monthly cost breakdown:

  • EC2 t4g.nano (ARM): $3.80
  • EBS Storage (8GB gp3): $0.80
  • CloudWatch Logs (7-day retention): $0.50
  • VPC Endpoints (3x Interface): $2.70
  • Data Transfer: $0.20
  • Total: ~$8.00/month
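The line items above sum exactly to the headline figure. A throwaway script (prices copied from the list, illustrative only) makes it easy to recompute the estimate when you swap components:

```python
# Monthly cost estimate from the breakdown above (USD)
costs = {
    "EC2 t4g.nano (ARM)": 3.80,
    "EBS storage (8 GB gp3)": 0.80,
    "CloudWatch Logs (7-day retention)": 0.50,
    "VPC endpoints (3x interface)": 2.70,
    "Data transfer": 0.20,
}

total = sum(costs.values())
for item, usd in costs.items():
    print(f"{item:<35} ${usd:.2f}")
print(f"{'Total':<35} ${total:.2f}")

assert round(total, 2) == 8.00  # matches the ~$8/month headline
```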

Cost Note: This assumes light personal usage. VPC endpoints eliminate NAT Gateway costs ($45/month) while providing secure AWS service access.

Cost Optimization Tips:

  • Use t4g.nano for personal use (upgrade to t4g.micro if needed)
  • Keep log retention at 7 days for cost control
  • Monitor CloudWatch costs in billing dashboard
  • Consider EBS snapshots for additional backup ($0.40/month)

4 Security Hardening & Ops

Access Control:

  • 🔒 No public admin interface - Pi-hole web UI only via Tailscale
  • 🔒 SSM Session Manager - emergency access without SSH keys
  • 🔒 Least privilege IAM - minimal permissions for operation
  • 🔒 Private subnets only - no direct internet access

Operational Security:

# Lock resolv.conf to prevent overwrites
sudo chattr +i /etc/resolv.conf

# Monitor container health
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"

# Check Pi-hole logs via CloudWatch
aws logs tail /aws/ec2/pihole --follow

Backup Strategy:

  • Terraform state - store in S3 with versioning
  • Pi-hole configuration - persistent volumes on EBS
  • EBS snapshots - automated via AWS Backup (optional)

Patching Strategy:

# Refresh instance with the latest AMI
# (-replace supersedes the deprecated `terraform taint` workflow)
terraform apply -replace=aws_instance.pihole

Troubleshooting

Common Issues:

Tailscale Connected but DNS Failing:

# Check Tailscale status
sudo tailscale status

# Verify Pi-hole container health
docker logs pihole

# Test DNS resolution
dig @<tailscale-ip> google.com

SERVFAIL/Timeout Errors:

# Increase Unbound verbosity temporarily
docker exec unbound unbound-control verbosity 2

# Check root hints freshness
docker exec unbound ls -la /opt/unbound/etc/unbound/root.hints

# Verify egress connectivity
docker exec unbound dig @1.1.1.1 google.com

CloudWatch Log Costs:

  • Reduce Pi-hole log level in docker-compose.yml
  • Decrease CloudWatch retention period
  • Use log metric filters for alerting instead of full log retention

VPC Endpoints Not Reachable:

# Check route tables
aws ec2 describe-route-tables --filters "Name=vpc-id,Values=<vpc-id>"

# Verify endpoint DNS resolution
nslookup ssm.us-east-1.amazonaws.com

5 Deploy Your Private DNS Infrastructure

Step-by-Step Deployment:

  1. Clone the repository:
    git clone https://github.com/gpayne9/guydevops.com.git
    cd guydevops.com/terraform/post2
    
  2. Initialize Terraform:
    terraform init
    
  3. Review the deployment plan:
    terraform plan -var="aws_region=us-east-1"
    
  4. Deploy the infrastructure:
    terraform apply -var="aws_region=us-east-1"
    

Deployment time: ~5-10 minutes for complete infrastructure + container setup

Configure Tailscale DNS Settings:

  1. Go to your Tailscale admin console → DNS
  2. Add custom nameserver: Use your EC2 instance’s Tailscale IP
  3. Enable MagicDNS (optional but recommended)
  4. Test DNS resolution: nslookup google.com should show your Pi-hole

Access Pi-hole Admin Interface:

  1. Connect to Tailscale on your device
  2. Navigate to: http://<ec2-tailscale-ip>/admin
  3. Login with the password from Terraform outputs
  4. Customize blocklists and monitor DNS queries

Wrapping Up

That's a Wrap!

Your private DNS resolver is now running on AWS, blocking ads and trackers across all your devices for under $8/month.

What We Built:

You’ve deployed a complete private DNS infrastructure that provides:

  • Network-wide ad blocking — Pi-hole filters malicious domains for all devices
  • Privacy-first DNS — Unbound recursive resolver keeps queries private
  • Secure remote access — Tailscale mesh networking without complex VPN setup
  • Infrastructure as Code — Everything reproducible with terraform apply
  • Cost-optimized hosting — Under $8/month with ARM Graviton2 instances
  • Enterprise monitoring — CloudWatch logs and metrics for observability

Next Steps

  • Custom blocklists: Add domain-specific filters in Pi-hole admin
  • Monitoring alerts: Set up CloudWatch alarms for high query volumes
  • Backup strategy: Consider EBS snapshots for Pi-hole configuration
  • Performance tuning: Monitor resource usage and scale to t4g.micro if needed
  • Security hardening: Review Tailscale ACLs and enable MFA

Troubleshooting

  • DNS not working: Check Tailscale connection and Pi-hole upstream configuration
  • High costs: Review CloudWatch log retention and VPC endpoint usage
  • Container issues: Use SSM Session Manager to access instance directly
  • Performance problems: Monitor CloudWatch metrics and consider instance upgrade

Cleanup

# Remove all resources when no longer needed
terraform destroy

Questions or run into issues? Hit me up on GitHub or LinkedIn.

This post is licensed under CC BY 4.0 by the author.