AWS’s new free plan comes with up to $200 in credits ($100 base + $100 bonus) — but you only collect the bonus if you complete each credit-earning activity individually. Most people don’t. I almost didn’t.
The first time I spun up an ElastiCache cluster for a personal project, I picked cache.t4g.micro — the Graviton node type, which seemed like the obvious modern choice. It isn’t free-plan eligible. By the time I caught it, I’d burned a few days of charges on a cache I wasn’t even actively using. That’s the kind of mistake this module is designed to prevent.
I built a Terraform module that satisfies four of the five credit-earning activities automatically, provisions 20+ services, and makes it structurally difficult to misconfigure yourself into charges. It’s published on the Terraform Registry as cloudplz/free-tier/aws.
The Problem with “Just Use Free Tier”
Navigating AWS free tier without Terraform is a game of reading footnotes. A few examples of what trips people up:
- cache.t4g.micro is not free. cache.t3.micro is. The types look similar; the Graviton one charges you.
- DynamoDB on-demand billing eats the free tier. The 25 RCU/WCU Always Free allocation only applies to provisioned capacity mode. Switch to on-demand and you’re paying per request.
- Public IPv4 addresses cost $3.65/month since the 2024 pricing change. This is per IP, per resource. A single EC2 instance burns $3.65/month before you run a single process.
- KMS encryption isn’t free. SSE-S3 is. If you default to KMS on your S3 buckets (which many security guides recommend), you pay per API call.
The module encodes all of these as validated constraints. You can’t accidentally configure a non-free ElastiCache node type — the validation block rejects it at plan time before anything gets created.
What the Module Provisions
The module splits resources into two categories: core services that always deploy, and optional services you toggle with a features object.
Core services (always on):
| Resource | Configuration | Why This Specific Setting |
|---|---|---|
| VPC | /16 CIDR, public + private subnets | No NAT gateway — saves ~$32/mo |
| EC2 | t4g.micro, gp3 30GB | t-family enforced by validation |
| Public IPv4 | 1 address on EC2 | Unavoidable; $3.65/mo |
| Lambda | 128MB, Function URL + API Gateway | 128MB maximizes free GB-seconds |
| DynamoDB | PROVISIONED 25 RCU/WCU | On-demand would bypass Always Free |
| SQS | Standard queue + DLQ | FIFO burns requests faster |
| SNS | Standard topic | 1M publishes/mo free |
| CloudWatch | 2 alarms, 7-day retention | Stays under 10-alarm Always Free limit |
| EventBridge | Scheduler at rate(5 min) | 14M Scheduler invocations/mo free |
| Budgets | Zero-spend alert | Earns a $20 credit activity |
| Secrets Manager | Credentials for enabled databases | $0.40/secret/mo — intentionally core |
| IAM | Roles for EC2, Lambda, Step Functions | Required for least-privilege access |
Secrets Manager is a deliberate choice. Database credentials belong in a secrets store, not in Terraform state or environment variables. At $0.40/secret/month it’s the only non-free always-on cost that’s a deliberate tradeoff — unlike the IPv4 address, which is unavoidable.
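As a sketch of what that pattern looks like (resource and variable names here are illustrative, not the module’s actual identifiers), generated credentials can be written to Secrets Manager like this:

```hcl
# Illustrative sketch — names are hypothetical, not the module's real resources.
resource "random_password" "db" {
  length  = 32
  special = false
}

resource "aws_secretsmanager_secret" "db" {
  name = "${var.name}-db-credentials" # $0.40/secret/month
}

resource "aws_secretsmanager_secret_version" "db" {
  secret_id = aws_secretsmanager_secret.db.id
  secret_string = jsonencode({
    username = "app"
    password = random_password.db.result
  })
}
```

One caveat worth knowing: generated values still pass through Terraform state, so the state backend needs protecting regardless. The win is that applications read credentials from Secrets Manager at runtime instead of from environment variables or hardcoded config.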
Optional services (feature toggles):
| Feature | What It Creates | Monthly Cost |
|---|---|---|
| rds = true | RDS PostgreSQL db.t4g.micro, 20GB | ~$13.98 |
| aurora = true ⚠️ | Aurora Serverless v2, 0.5–4 ACUs | Always Free since March 2026 — requires Paid Plan |
| elasticache = true | ElastiCache Valkey cache.t3.micro | ~$12.41 |
| cloudfront = true | CloudFront PriceClass_100 + S3 origin | Always Free |
| cognito = true | Cognito User Pool | Always Free (10K MAU) |
| step_functions = true | Step Functions STANDARD state machine | Always Free (4K transitions/mo) |
| bedrock_logging = true | Bedrock invocation logging to CloudWatch | Always Free |
Aurora becoming Always Free in March 2026 was a meaningful change — a serverless PostgreSQL cluster at no cost as long as you stay under 4 ACUs and 1 GiB of storage. The module caps both by default. Aurora Serverless v2 scales to 0 ACUs when idle, so the real-world cost at low usage is effectively $0.
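A minimal sketch of the scaling cap, assuming the module uses aws_rds_cluster with a Serverless v2 scaling block (identifiers here are illustrative):

```hcl
# Illustrative sketch — resource names are hypothetical.
resource "aws_rds_cluster" "aurora" {
  cluster_identifier = "${var.name}-aurora"
  engine             = "aurora-postgresql"
  engine_mode        = "provisioned" # Serverless v2 uses provisioned mode

  serverlessv2_scaling_configuration {
    min_capacity = 0   # scale-to-zero when idle (supported since late 2024)
    max_capacity = 4.0 # the module caps this via variable validation
  }
}
```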
The Five Credit-Earning Activities
AWS pays out $20 for each of five activities, totaling $100 in bonus credits on top of your $100 base. Four require nothing beyond terraform apply. One needs a single console action:
| Activity | What Terraform Provisions | Manual Step |
|---|---|---|
| EC2 | aws_instance.web (t4g.micro) | None |
| RDS | aws_db_instance.postgres (db.t4g.micro) | None |
| Lambda | aws_lambda_function_url.handler | None — URL is the trigger |
| Budgets | aws_budgets_budget.zero_spend | None |
| Bedrock | Invocation logging config | Enable model access + 1 prompt in Console |
Run terraform apply, go to the Bedrock console, enable any model, send one prompt. You’ve earned $100 in bonus credits in under five minutes.
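The logging half is the Terraform-managed piece. A sketch using the AWS provider’s aws_bedrock_model_invocation_logging_configuration resource (log group name and IAM role reference are illustrative):

```hcl
# Illustrative sketch — names and the IAM role are hypothetical.
resource "aws_cloudwatch_log_group" "bedrock" {
  name              = "/aws/bedrock/invocations"
  retention_in_days = 7
}

resource "aws_bedrock_model_invocation_logging_configuration" "this" {
  logging_config {
    text_data_delivery_enabled = true

    cloudwatch_config {
      log_group_name = aws_cloudwatch_log_group.bedrock.name
      role_arn       = aws_iam_role.bedrock_logging.arn # role defined elsewhere
    }
  }
}
```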
Credit Budget Math
With $200 total ($100 base + $100 bonus), here’s how long it lasts under different configurations. Aurora is excluded from the monthly burn — at idle it consumes 0 ACUs, so its real cost is negligible until you’re actively querying it.
| Scenario | Monthly Burn | $200 Lasts |
|---|---|---|
| All defaults (RDS + ElastiCache + Secrets Manager) | ~$39.79 | ~5 months |
| Disable RDS after earning $20 credit | ~$25.41 | ~7.9 months |
| Disable RDS + ElastiCache | ~$12.60 | ~15.9 months |
The cost-optimized strategy: run with all defaults for the first month to earn all five $20 credits, then disable RDS and ElastiCache. You’d stretch $200 to nearly 16 months on the remaining compute and networking costs alone.
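The arithmetic behind the table, written out as Terraform locals (the figures mirror the table above, not live AWS pricing):

```hcl
# Illustrative only — static figures from the table, not a pricing API.
locals {
  defaults_burn = 39.79 # RDS + ElastiCache + secrets + IPv4 + the rest
  rds_cost      = 13.98
  cache_cost    = 12.41
  secret_cost   = 0.40 # disabling a database also drops its secret

  without_rds = local.defaults_burn - local.rds_cost - local.secret_cost  # 25.41
  minimal     = local.without_rds - local.cache_cost - local.secret_cost  # 12.60

  months_on_minimal = 200 / local.minimal # ≈ 15.9 months
}
```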
Design Decisions Worth Understanding
Validation as Cost Guards
This is the core value proposition of the module over a plain README.md of instance type recommendations. validation blocks make it structurally difficult to misconfigure yourself into charges:
```hcl
variable "ec2_instance_type" {
  default = "t4g.micro"

  validation {
    condition     = can(regex("^t[0-9]+[a-z]*\\.", var.ec2_instance_type))
    error_message = "ec2_instance_type must be a t-family instance type (e.g., t4g.micro, t3.micro)."
  }
}

variable "elasticache_node_type" {
  default = "cache.t3.micro"

  validation {
    condition     = var.elasticache_node_type == "cache.t3.micro"
    error_message = "elasticache_node_type must be cache.t3.micro (cache.t4g.micro is NOT free-plan eligible)."
  }
}

variable "aurora_max_capacity" {
  default = 4.0

  validation {
    condition     = var.aurora_max_capacity <= 4.0
    error_message = "aurora_max_capacity must be <= 4.0 to stay within the free plan cap."
  }
}
```
The ElastiCache message is deliberately explicit. cache.t4g.micro looks like the Graviton equivalent of cache.t3.micro. It isn’t free-plan eligible. Without the validation: $0.017/hr from the moment the cluster comes up, no warning.
Feature Toggles with Optional Object Type
The features variable uses Terraform’s optional() type constraint, which lets callers omit any combination of keys without triggering an error:
```hcl
variable "features" {
  type = object({
    rds             = optional(bool, true)
    aurora          = optional(bool, true)
    elasticache     = optional(bool, true)
    cloudfront      = optional(bool, true)
    cognito         = optional(bool, true)
    step_functions  = optional(bool, true)
    bedrock_logging = optional(bool, true)
  })
  default = {}
}
```
default = {} means the caller can omit the block entirely — all features default to on. This is friendlier than the common pattern of a map(bool) where you’re forced to list every key you want. With optional(), you only declare what you’re changing:
```hcl
# Disable only the expensive ones
features = {
  rds         = false
  elasticache = false
}
```
There’s one cross-variable constraint worth noting: ElastiCache needs a DB subnet group, which only exists when RDS or Aurora is also enabled. That’s enforced at plan time:
```hcl
validation {
  condition     = !var.features.elasticache || var.features.rds || var.features.aurora
  error_message = "ElastiCache requires a DB subnet group; enable features.rds or features.aurora."
}
```
No NAT Gateway
The VPC has public and private subnets but no NAT gateway. This saves ~$32/month ($0.045/hr). Resources in private subnets that need outbound internet access use VPC endpoints for AWS services (S3, Secrets Manager, SSM), or they don’t need it at all. EC2 is in the public subnet — fine for a personal learning environment where you can lock down SSH via security groups, or skip SSH entirely in favor of SSM Session Manager.
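A gateway endpoint for S3 costs nothing and keeps S3 traffic off the public internet; a sketch, with the VPC and route table references being illustrative (note that interface endpoints, as used for Secrets Manager and SSM, do bill hourly, unlike gateway endpoints):

```hcl
# Illustrative sketch — aws_vpc.main and aws_route_table.private are hypothetical.
data "aws_region" "current" {}

resource "aws_vpc_endpoint" "s3" {
  vpc_id            = aws_vpc.main.id
  service_name      = "com.amazonaws.${data.aws_region.current.name}.s3"
  vpc_endpoint_type = "Gateway" # gateway endpoints are free; interface endpoints are not
  route_table_ids   = [aws_route_table.private.id]
}
```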
DynamoDB PROVISIONED vs On-Demand
The Always Free DynamoDB allocation — 25 RCU, 25 WCU, 25 GB — only applies to provisioned capacity mode. On-demand pricing doesn’t participate in the Always Free tier. The module creates a provisioned table at exactly 25/25 by default. Changing to on-demand would immediately start burning credits on every read and write.
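A sketch of a table pinned to the Always Free allocation (the table name and key are illustrative):

```hcl
# Illustrative sketch — name and key schema are hypothetical.
resource "aws_dynamodb_table" "this" {
  name           = "${var.name}-table"
  billing_mode   = "PROVISIONED" # "PAY_PER_REQUEST" would bypass Always Free
  read_capacity  = 25            # exactly the Always Free RCU allocation
  write_capacity = 25            # exactly the Always Free WCU allocation
  hash_key       = "pk"

  attribute {
    name = "pk"
    type = "S"
  }
}
```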
Publishing to the Terraform Registry
Terraform Registry module names follow a strict convention: terraform-<PROVIDER>-<NAME>. The GitHub repo cloudplz/terraform-aws-free-tier automatically maps to cloudplz/free-tier/aws on the registry. There’s no application process — connect your GitHub account to the registry, select the repository, and the registry picks up releases tagged with semantic version numbers (v1.0.0, v1.1.0, etc.).
The CI pipeline runs on every push and PR:
```yaml
# .github/workflows/ci.yml (abbreviated)
- terraform fmt -check -recursive
- terraform validate
- tflint --recursive
- trivy config .
- terraform test
```
terraform test runs the unit tests in tests/. The tests validate variable defaults, feature toggle behavior, security constraints (no public S3 buckets, no unencrypted storage), and cross-variable validations — without creating any real infrastructure.
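For instance, a test asserting that the ElastiCache guard actually fires might look like this (the file name and variable values are illustrative, not the module’s actual test suite):

```hcl
# tests/validation.tftest.hcl (hypothetical file)
run "rejects_non_free_cache_node" {
  command = plan

  variables {
    name                  = "test"
    elasticache_node_type = "cache.t4g.micro" # the non-free Graviton type
  }

  # The plan must fail, and specifically on this variable's validation block.
  expect_failures = [
    var.elasticache_node_type,
  ]
}
```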
5 Lines of HCL
After publishing the module, using it is as minimal as Terraform gets. name is the only required input — everything else has a safe default. The contrast with inlining all 70+ resources is stark: 500+ lines of HCL you’d need to maintain and update whenever AWS adjusts pricing or service behavior. With the published module, a version = "~> 1.0" bump picks up fixes automatically.
```hcl
module "free_tier" {
  source  = "cloudplz/free-tier/aws"
  version = "~> 1.0"
  name    = "myproject"
}
```
- Registry: registry.terraform.io/modules/cloudplz/free-tier/aws
- Source: github.com/cloudplz/terraform-aws-free-tier
Found a service that should be Always Free but isn’t handled correctly? Open an issue — the cost tables drift as AWS adjusts limits.