Reading time: 16 minutes
Table of Contents
- A Day in the Life: Before and After Terragrunt
- What Is Terragrunt?
- Why Vanilla Terraform Gets Messy Fast
- How Terragrunt Solves Multi-Account AWS Management
- Real-World Example: Before and After
- When You Actually Need Terragrunt
- Common Gotchas and How to Avoid Them
- FAQ
- Sources
Managing Terraform across multiple AWS accounts feels like fighting the tool instead of using it. You’re copy-pasting backend configurations, manually creating S3 buckets, and praying you don’t accidentally deploy dev infrastructure to production.
After managing infrastructure for 50+ client environments across multiple AWS accounts, I’ve learned that vanilla Terraform wasn’t designed for this scale. It works fine for a single account with a couple of environments, but once you’re juggling 10+ accounts with dev/staging/prod in each, the copy-paste hell becomes untenable fast.
Terragrunt is a thin wrapper around Terraform that adds automatic state management, DRY configuration, and built-in multi-account support. It eliminates backend configuration duplication, automatically creates S3 buckets and DynamoDB tables for state locking, and handles AWS role assumption without manual credential juggling. For teams managing 3+ AWS accounts, it typically reduces deployment setup time from days to hours.
⚠️ CRITICAL COMPATIBILITY NOTE
If you’re using Terraform Cloud or Terraform Enterprise, stop here. Terragrunt is incompatible with TFC/TFE workflows. Terraform Cloud already provides its own remote state management, workspace handling, and backend configuration—Terragrunt’s core value propositions. Attempting to run Terragrunt locally and push to TFC defeats the purpose of both tools. This article is for teams using vanilla Terraform or OpenTofu with self-managed state backends.
A Day in the Life: Before and After Terragrunt
Monday morning without Terragrunt:
Your PM wants a new staging environment for ClientCorp in us-west-2. You spend 20 minutes copying backend.tf and providers.tf from the existing us-east-1 setup, manually editing bucket names, region strings, and account IDs. You fat-finger the DynamoDB table name—clientcorp-staging-locks instead of clientcorp-staging-terraform-locks—but don’t notice yet.
You run terraform init. It fails. The S3 bucket doesn’t exist.
Right. You open the AWS console, manually create clientcorp-staging-terraform-state-us-west-2, enable versioning, add encryption tags. Then create the DynamoDB table. Realize you typo’d the table name in the backend config. Fix it. Re-run init.
It works. You run terraform apply. It asks for AWS credentials.
You remember you need to assume the ClientCorp deployment role. You dig through your notes to find the role ARN, run aws sts assume-role, copy the credentials from the JSON output, export them as environment variables. The session expires in an hour, so you’d better work fast.
Two hours have passed. You haven’t deployed anything yet.
Monday morning with Terragrunt:
Your PM wants a new staging environment for ClientCorp in us-west-2. You cd into accounts/clientcorp, create a new us-west-2 directory, copy the existing region.hcl file, change one line: region = "us-west-2".
You run terragrunt run-all apply from the root.
Terragrunt creates the S3 bucket automatically. Creates the DynamoDB table. Assumes the IAM role. Generates the backend config. Deploys your VPC, security groups, and RDS instance in dependency order.
Fifteen minutes. Done.
You get coffee and answer Slack messages while it runs.
What Is Terragrunt?
Terragrunt is an open-source orchestration tool (Apache 2.0 license) that wraps Terraform to solve configuration repetition and state management complexity. Unlike Terraform’s workspaces—which HashiCorp explicitly doesn’t recommend for multi-environment management—Terragrunt provides environment isolation through separate state files while keeping your code DRY through inheritance.
Here’s what it’s not: a Terraform replacement.
You’re still writing .tf files. Terragrunt just adds a thin terragrunt.hcl layer that handles the boring, error-prone stuff like backend setup and variable passing.
The tool came from Gruntwork (infrastructure-as-code consultancy) after they spent years dealing with the same copy-paste problems across client projects. It’s been actively maintained since 2016, with version compatibility for Terraform 0.12+, 1.x, and OpenTofu.
Why Vanilla Terraform Gets Messy Fast
Let me show you the actual problems you hit when managing multiple AWS accounts with pure Terraform. You know these pain points if you’ve been there.
Backend Configuration Hell
Terraform’s backend configuration doesn’t support variables, expressions, or functions. This means for every environment in every account, you’re writing this:
```hcl
# accounts/customer-a/dev/main.tf
terraform {
  backend "s3" {
    bucket         = "customer-a-dev-terraform-state"
    key            = "vpc/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "customer-a-dev-terraform-locks"
    encrypt        = true
  }
}
```

Then copying it to:
```hcl
# accounts/customer-a/staging/main.tf
terraform {
  backend "s3" {
    bucket         = "customer-a-staging-terraform-state" # Changed
    key            = "vpc/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "customer-a-staging-terraform-locks" # Changed
    encrypt        = true
  }
}
```

And again for production. And again for customer B. And again for customer C.
Think about maintaining that across 50 accounts. I’ll wait.

Now you’ve got dozens of near-identical backend blocks with slightly different names. When you want to add a new S3 bucket parameter? You’re editing every one of those files. Miss one and you’ve got inconsistent state management across your infrastructure.
Manual State Storage Setup
Before you can run `terraform init`, you need to:

- Manually create the S3 bucket with the exact name from your backend config
- Enable versioning on that bucket
- Create a DynamoDB table for state locking with a primary key named `LockID`
- Set up appropriate IAM permissions
- Remember to do all of this for every. single. environment.

Every single one.
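For a sense of what that bootstrap actually involves, here’s a rough sketch of the state-backend resources written as Terraform itself (bucket and table names are illustrative). Note the chicken-and-egg problem: this code needs somewhere to store *its own* state.

```hcl
# Hypothetical bootstrap sketch -- names are illustrative.
# You have to apply this before any backend "s3" block can work,
# which is exactly the bootstrapping step Terragrunt automates.

resource "aws_s3_bucket" "state" {
  bucket = "customer-a-dev-terraform-state"
}

resource "aws_s3_bucket_versioning" "state" {
  bucket = aws_s3_bucket.state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_dynamodb_table" "locks" {
  name         = "customer-a-dev-terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # must be exactly this name

  attribute {
    name = "LockID"
    type = "S"
  }
}
```

Multiply this by every environment in every account, and you see where the days go.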
According to Spacelift’s analysis, state management in Terraform is “a sensitive topic” when it comes to multiple environments—if all environments share the same state file, there’s significant risk of conflicts and accidental overwrites.
I’ve seen teams spend 7+ days setting up a new customer account because they’re manually provisioning all this state infrastructure. One case study from a team managing multi-account AWS reported cutting new environment onboarding from 7 days to 1 hour after switching to Terragrunt’s automatic backend creation—though your mileage will vary depending on how complex your environment setup is.
Multi-Account Authentication Complexity
The most secure way to manage AWS infrastructure is to segment it between multiple AWS accounts—AWS themselves recommend this in their best practices. But Terraform doesn’t understand AWS SSO sessions or profile chaining inside .aws/config.
Your options with vanilla Terraform? None of them great:
- Manually assume roles before each run using AWS CLI, export credentials as environment variables, then run Terraform. Sessions expire after an hour, so you get to do this dance repeatedly throughout the day.
- Use aws-vault or similar credential management tools—which means adding another dependency to your stack and training your team on yet another tool.
- Configure assume_role in the provider block for each account. This works until you realize all runs must use the same IAM role, which becomes problematic fast when different users need different access levels.
As noted in Hector’s comprehensive multi-account authentication guide, “the downside to managing role assumption with the AWS provider is that all runs must be performed with the same IAM role.”
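That third option looks like this in the provider block (a minimal sketch; the role ARN is illustrative):

```hcl
# Sketch of assume_role in the AWS provider -- ARN is illustrative.
# Works, but every run now uses this one role; there's no clean way
# to vary the role per user or per environment from here.
provider "aws" {
  region = "us-east-1"

  assume_role {
    role_arn     = "arn:aws:iam::123456789012:role/TerraformDeployRole"
    session_name = "terraform"
  }
}
```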
Workspace Limitations
Terraform’s built-in workspaces seem like they’d solve this—but they don’t. Even HashiCorp doesn’t recommend using workspaces as the sole solution for managing environments.
Here’s where workspaces fall apart:
- All workspaces share the same backend — zero isolation between environments. Your dev and prod state files live in the same bucket.
- Navigation nightmare — try figuring out what’s deployed where across 50 workspaces without a spreadsheet.
- No configuration flexibility — you can’t use different instance types for dev and prod. Same config, different workspace name.
- Deployment roulette — accidentally deploying to prod when you thought you were in the dev workspace is a resume-generating event.
- Single backend constraint — can’t use different S3 buckets or regions per environment.
According to Spacelift’s comparison, workspaces are “good for local development and testing purposes, but not recommended for production-grade deployments.”
Module Dependency Nightmares
In vanilla Terraform, you can only share data between modules using terraform_remote_state. This creates tight coupling—your VPC module needs to know the exact S3 bucket and state file path of your networking module. Change your state configuration? Update all the remote state references.
You know what’s wild? You can’t run terraform plan on a module that depends on another module’s outputs unless the dependency has already been applied. This breaks validation in CI/CD when you’re testing fresh infrastructure. Your pipeline needs actual deployed infrastructure to validate configurations for new infrastructure. Think about that for a second.
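The coupling looks like this (bucket and key are illustrative); every consumer of the VPC outputs hardcodes the producer’s state location:

```hcl
# Sketch of terraform_remote_state coupling -- bucket/key illustrative.
# The database module must know exactly where the VPC module keeps
# its state; move or rename that state file and this breaks.
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "customer-a-dev-terraform-state"
    key    = "vpc/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_db_subnet_group" "main" {
  name       = "customer-a-dev-db"
  subnet_ids = data.terraform_remote_state.vpc.outputs.subnet_ids
}
```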
How Terragrunt Solves Multi-Account AWS Management
Here’s what Terragrunt actually does (and how it makes multi-account management sane).
Automatic State Management
Terragrunt’s remote_state block handles everything automatically. One configuration:
```hcl
# terragrunt.hcl (root level)
remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
  config = {
    bucket         = "my-terraform-state-${local.account_name}-${local.aws_region}"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = local.aws_region
    encrypt        = true
    dynamodb_table = "my-lock-table"
  }
}
```

When you run `terragrunt init` or `terragrunt apply`, Terragrunt:
- Checks if the S3 bucket exists — if not, creates it automatically with versioning enabled
- Checks if the DynamoDB table exists — if not, creates it with the correct `LockID` primary key
- Generates a `backend.tf` file in your module directory with the interpolated values
- Runs `terraform init` with the correct backend configuration
According to Terragrunt’s official documentation, this automatic creation is supported for S3 and GCS backends. The Gruntwork blog confirms this eliminates “the bootstrapping problem of how to create and manage the underlying storage resources.”
No more manual bucket creation. No more “forgot to enable versioning” incidents. No more copying backend blocks.
| Step | Manual Terraform (7+ steps) | Terragrunt (1 command) |
|---|---|---|
| Step 1 | Create S3 bucket in AWS console | terragrunt init or terragrunt apply |
| Step 2 | Enable bucket versioning | Checks if S3 bucket exists — creates it if not |
| Step 3 | Create DynamoDB table | Checks if DynamoDB table exists — creates it if not |
| Step 4 | Set LockID primary key | Generates backend.tf automatically |
| Step 5 | Configure IAM permissions | Runs terraform init with correct config |
| Step 6 | Write backend.tf manually | Done. |
| Step 7 | Run terraform init | |
| Time | 30-60 minutes per environment | 2 minutes per environment |
| Error risk | High — typos in bucket names, forgotten versioning, wrong table keys | Low — deterministic, repeatable automation |
DRY Configuration with Include Blocks
Terragrunt’s include block lets you define common configuration once and inherit it everywhere:
```hcl
# Root terragrunt.hcl
remote_state { ... } # Defined once

locals {
  account_vars = read_terragrunt_config(find_in_parent_folders("account.hcl"))
  region_vars  = read_terragrunt_config(find_in_parent_folders("region.hcl"))
  env_vars     = read_terragrunt_config(find_in_parent_folders("env.hcl"))
}

inputs = merge(
  local.account_vars.locals,
  local.region_vars.locals,
  local.env_vars.locals,
)
```

Then in each environment:
```hcl
# accounts/customer-a/us-east-1/prod/vpc/terragrunt.hcl
include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "tfr:///terraform-aws-modules/vpc/aws?version=5.1.2"
}

inputs = {
  name = "customer-a-prod-vpc"
  cidr = "10.0.0.0/16"
}
```

That’s it. The backend configuration, region settings, account details—all inherited from the root `terragrunt.hcl`. When you need to update something globally (like adding a new S3 tag), you change it in one place.
One place. Not 47 files scattered across your directory tree.
According to Medium’s detailed Terragrunt guide, “Terragrunt offers built-in support for DRY configurations using its include and dependency blocks, significantly reducing boilerplate and improving maintainability.”
In my experience managing 50+ client environments, the code reduction is substantial—I’ve seen projects go from 2,000+ lines of Terraform to approximately 500 lines of Terragrunt configuration, though this ratio varies based on how much duplication your original setup had.
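The per-level files being merged are tiny. A representative sketch (all values illustrative):

```hcl
# accounts/customer-a/account.hcl -- values are illustrative
locals {
  account_name = "customer-a"
  account_id   = "123456789012"
  iam_role     = "arn:aws:iam::123456789012:role/TerraformDeployRole"
}

# accounts/customer-a/us-east-1/region.hcl
locals {
  aws_region = "us-east-1"
}

# accounts/customer-a/us-east-1/prod/env.hcl
locals {
  environment = "prod"
}
```

Each file holds exactly the values that change at that level of the tree, and nothing else.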
Multi-Account Authentication Made Simple
Terragrunt handles AWS role assumption automatically with the --terragrunt-iam-role flag or iam_role configuration:
```hcl
# accounts/customer-a/account.hcl
locals {
  account_id = "123456789012"
  iam_role   = "arn:aws:iam::123456789012:role/TerraformDeployRole"
}
```

```hcl
# Root terragrunt.hcl
iam_role = local.account_vars.locals.iam_role
```

Terragrunt calls the `sts:AssumeRole` API on your behalf and exposes credentials as environment variables when running Terraform. According to Terragrunt’s authentication documentation, this provides “fresh credentials on every run without the complexity of calling assume-role yourself.”
No more manual credential juggling. No need for aws-vault. No credential files written to disk in plaintext.
With Terragrunt (automatic, every run):

- You run `terragrunt apply`
- Terragrunt reads `iam_role` from `account.hcl`
- Terragrunt calls the AWS STS `AssumeRole` API
- STS returns temporary credentials
- Terragrunt sets `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`
- Terraform executes with the assumed role
- Infrastructure deployed. You didn’t touch a credential.
Without Terragrunt (manual, expires every hour):

- Run `aws sts assume-role --role-arn arn:aws:iam::123456789012:role/Deploy`
- Copy `AccessKeyId`, `SecretAccessKey`, `SessionToken` from the JSON output
- `export AWS_ACCESS_KEY_ID=AKIA...`
- `export AWS_SECRET_ACCESS_KEY=wJalr...`
- `export AWS_SESSION_TOKEN=FwoGZX...`
- Run `terraform apply` — quickly, you have 60 minutes
- Credentials expire mid-apply? Start over from step 1
Dependency Management Between Modules
Terragrunt’s dependency block creates explicit dependencies between modules and passes outputs automatically:
```hcl
# database/terragrunt.hcl
dependency "vpc" {
  config_path = "../vpc"

  mock_outputs = {
    vpc_id     = "vpc-12345678"
    subnet_ids = ["subnet-12345678", "subnet-87654321"]
  }
  mock_outputs_allowed_terraform_commands = ["validate", "plan"]
}

inputs = {
  vpc_id     = dependency.vpc.outputs.vpc_id
  subnet_ids = dependency.vpc.outputs.subnet_ids
}
```

The `mock_outputs` attribute solves a critical problem: you can run `terragrunt plan` on a fresh setup where nothing’s deployed yet. According to Medium’s dependency guide, “Terragrunt will return an error if the unit referenced in a dependency block has not been applied yet… mock outputs correspond to a map that will be injected in place of actual dependency outputs.”
When you run terragrunt run-all apply, Terragrunt builds the dependency graph and deploys modules in the correct order automatically. Your VPC gets created before your database. No more manual sequencing.
| Deploy Order | Module | Depends On | Runs In Parallel With |
|---|---|---|---|
| 1st | VPC | (none) | — |
| 2nd | Security Groups | VPC | — |
| 3rd | RDS Database | VPC + Security Groups | Application Load Balancer |
| 3rd | Application Load Balancer | VPC + Security Groups | RDS Database |
| 4th | ECS Service | ALB + RDS | — |
`terragrunt run-all apply` builds this dependency graph automatically from your `dependency` blocks. Independent modules at the same level (like RDS and ALB above) run in parallel. No manual sequencing required.
Real-World Example: Before and After
Let me show you the actual difference managing three environments across two customer accounts.
Before (Vanilla Terraform)
Directory structure:

```
customer-a/
  dev/
    backend.tf      # 15 lines, manually copied
    providers.tf    # 25 lines, manually copied
    vpc/main.tf
    rds/main.tf
  staging/
    backend.tf      # 15 lines, manually copied
    providers.tf    # 25 lines, manually copied
    vpc/main.tf
    rds/main.tf
  prod/
    backend.tf      # 15 lines, manually copied
    providers.tf    # 25 lines, manually copied
    vpc/main.tf
    rds/main.tf
customer-b/
  # Repeat the entire structure...
```

| Metric | Value |
|---|---|
| Total lines of Terraform code | ~1,800 |
| Manually duplicated backend configs | 6 |
| Manually created S3 state buckets | 6 |
| Time to onboard new customer | 5-7 days |
Manual steps for each environment:
- Create S3 bucket with correct naming convention
- Enable bucket versioning (easy to forget)
- Create DynamoDB table for locking with the exact right primary key name
- Assume IAM role manually before each deployment—and remember to re-assume when it expires mid-apply
- Copy backend and provider configs from another environment
- Update all the hardcoded values (bucket names, regions, account IDs)
- Run `terraform init` and pray
- Discover you typo’d something in step 6, fix it, start over
After (Terragrunt)
Directory structure:

```
terragrunt.hcl                # Root config - 40 lines
accounts/
  customer-a/
    account.hcl               # 5 lines (account_id, iam_role)
    us-east-1/
      region.hcl              # 3 lines (region)
      dev/
        env.hcl               # 5 lines (environment vars)
        vpc/terragrunt.hcl    # 10 lines
        rds/terragrunt.hcl    # 15 lines
      staging/
        env.hcl
        vpc/terragrunt.hcl
        rds/terragrunt.hcl
      prod/
        env.hcl
        vpc/terragrunt.hcl
        rds/terragrunt.hcl
  customer-b/
    # Same structure, different account.hcl
```

| Metric | Value |
|---|---|
| Total lines of Terragrunt config | ~450 |
| Backend configs | 1 (inherited everywhere) |
| State buckets | Created automatically on first apply |
| Time to onboard new customer | 1-2 hours |
Automated by Terragrunt:
- S3 bucket creation with versioning
- DynamoDB table creation
- IAM role assumption per account
- Backend configuration generation
- Dependency ordering
| Layer | File | What It Defines | Inherited By |
|---|---|---|---|
| Root | terragrunt.hcl | remote_state config, common settings | Everything below |
| Account | account.hcl | account_id, iam_role | All regions/envs in that account |
| Region | region.hcl | region: us-east-1 | All environments in that region |
| Environment | env.hcl | environment: dev/staging/prod | All modules in that env |
| Module | vpc/terragrunt.hcl | Module source + inputs | (leaf node — inherits everything above) |
How inheritance works: each `terragrunt.hcl` at the module level calls `find_in_parent_folders()`, which walks up the directory tree, merging `account.hcl` + `region.hcl` + `env.hcl` into a single configuration. Change the root `remote_state` block once → every module across every account picks it up automatically.

Without Terragrunt: you’d copy-paste that backend config into 30+ files and manually update each one when something changes.
| Metric | Vanilla Terraform | Terragrunt | Improvement |
|---|---|---|---|
| Config lines | ~1,800 | ~450 | 75% reduction |
| Backend configs | 6 duplicated blocks | 1 inherited block | 83% less duplication |
| New customer onboarding | 5-7 days | 1-2 hours | ~90% faster |
| State bucket creation | Manual (per environment) | Automatic | Eliminated entirely |
| IAM role handling | Manual export + hourly refresh | Automatic per account | Eliminated entirely |
In this specific example, that’s a 75% reduction in configuration code and an estimated 90%+ reduction in setup time based on eliminating manual state infrastructure provisioning.
That’s not optimization. That’s elimination of entire categories of work.
According to a case study from AWS Builders, organizations using Terragrunt for multi-account management report similar improvements in deployment velocity and reduced configuration errors.
When You Actually Need Terragrunt
Terragrunt isn’t always the answer (nothing is). Here’s when it makes sense—and when it doesn’t.
Use Terragrunt When:
- Managing 3+ AWS accounts — the setup overhead pays off once you’re juggling multiple accounts. Below 3? You’re probably fine with vanilla Terraform.
- Multiple environments per account — dev/staging/prod multiplied across accounts means lots of duplication
- You’re tired of backend configuration hell — if you’ve copy-pasted `backend "s3"` blocks more than 10 times, you need this. If you’ve done it 50+ times, you needed this yesterday.
- Team deployments — multiple people deploying infrastructure across accounts benefit from consistent, automated state management
- You value DRY principles — if you maintain Terraform modules and hate duplicating inputs across environments
Skip Terragrunt When:
- Single account with 1-2 environments — vanilla Terraform is fine here, don’t overcomplicate
- You’re just learning Terraform — master the core tool first, then layer on orchestration
- Using Terraform Cloud/Enterprise — these tools provide their own state management and workspace handling
- Your team won’t learn another DSL — Terragrunt adds HCL configuration on top of Terraform’s HCL, which is a learning curve
| Your Situation | Recommendation |
|---|---|
| 1-2 AWS accounts, simple setup | Vanilla Terraform is fine |
| 3+ AWS accounts, multiple envs | Use Terragrunt |
| Using Terraform Cloud/Enterprise | Don’t use Terragrunt (incompatible) |
| Learning Terraform for the first time | Master Terraform first, add Terragrunt later |
| Team of 3+ deploying infrastructure | Use Terragrunt for consistency |
| Running OpenTofu instead of Terraform | Use Terragrunt (fully compatible) |
Don’t use tools you don’t need.
According to Spacelift’s comparison, “Terraform Workspaces are good when you only juggle a handful of modules and environments. Terragrunt tends to pay off as your estate grows—you have more than a small number of stacks or multiple teams managing infrastructure.”
What About Alternatives?
Let me be clear: there are other approaches, but they all have tradeoffs.
| Alternative | Solves Backend Duplication? | Solves State Setup? | Solves Multi-Account Auth? | Ecosystem Size |
|---|---|---|---|---|
| Terraform Workspaces | No — shared backend, zero isolation | No | No | Built-in (limited) |
| Atlantis | No | No | Partial — PR automation | Medium |
| Terraspace | Yes | Yes | Yes | Small (Ruby-based) |
| Terragrunt | Yes | Yes | Yes | Large (Go, since 2016) |
| Manual scripts | Partial — you build it yourself | Partial | Partial | N/A (custom) |
HashiCorp themselves don’t recommend using workspaces for multi-environment management. Atlantis is great for PR automation but doesn’t solve backend duplication. Terraspace has similar goals but a smaller ecosystem. Manual scripting means rebuilding what Terragrunt already does—and you’ll spend months getting it right.
Common Gotchas and How to Avoid Them
After deploying Terragrunt across 50+ environments, here are the mistakes I’ve seen (and made):
1. Confusion Between generate and remote_state Blocks
Use remote_state—it automatically creates S3 buckets and DynamoDB tables. The generate block just creates a backend.tf file without provisioning infrastructure, which defeats the point. According to BTI360’s analysis, “the generate block introduces a bootstrapping problem: how do you create and manage the underlying storage resources?”
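A sketch of the distinction (bucket and table names illustrative). The standalone `generate` block still has a legitimate job: generating files that need no provisioning, like provider config:

```hcl
# remote_state provisions the bucket/table AND writes backend.tf:
remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
  config = {
    bucket         = "my-terraform-state"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "my-lock-table"
  }
}

# A bare generate block only writes a file -- fine for provider
# config, where there is nothing to provision:
generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
provider "aws" {
  region = "us-east-1"
}
EOF
}
```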
2. Overusing Mock Outputs
Here’s where I’ve seen teams shoot themselves in the foot: mock_outputs let you run plan and validate on fresh infrastructure, which is genuinely useful for CI/CD. But they can mask real configuration issues if you’re not careful.
I worked with a team that had mocked VPC outputs (vpc_id, subnet_ids) that didn’t match the actual module’s output structure. Their CI pipeline ran terragrunt plan successfully for weeks using the mocks—everything looked green. Then they tried to deploy a new environment and the apply failed immediately because the real VPC module returned private_subnet_ids and public_subnet_ids as separate outputs, not a combined subnet_ids list.
Best practice: Use mock_outputs_allowed_terraform_commands = ["validate", "plan"] to restrict when mocks are active. Never rely on mocks for actual deployments. Think of them as training wheels for CI validation, not production guardrails.
3. Not Understanding Dependency Execution Order
When you run terragrunt run-all apply, Terragrunt respects dependencies but runs independent modules in parallel by default. Got 10 VPCs across different accounts with no dependencies? Terragrunt tries to create all 10 simultaneously, which sounds great until you hit AWS API rate limits, overwhelm your CI/CD runner, or end up with interleaved logs that are impossible to debug.
Limit parallelism with:

```
terragrunt run-all apply --terragrunt-parallelism 3
```

I usually set this to 2-3 for large deployments. Yes, it’s slower, but you can actually read the output.
4. Forgetting to Define path_relative_to_include()
This function is non-negotiable for unique state file keys:

```hcl
key = "${path_relative_to_include()}/terraform.tfstate"
```

Without it, every environment tries to write to the same state file. Ask me how I know. (Spoiler: I spent 3 hours debugging why my dev environment kept deploying prod infrastructure before I realized dev and prod were sharing the same state file key in S3. Every `terraform apply` in dev was overwriting prod’s state. Do not recommend.)
5. IAM Role Permissions
Your Terragrunt execution role needs permissions to create S3 buckets (with versioning), create DynamoDB tables, assume other IAM roles, and read/write state files. Missing any of these? You’ll get cryptic AccessDenied errors during terragrunt init that don’t tell you which permission is missing. Start with AdministratorAccess in dev to verify your setup works, then lock it down to least-privilege for production.
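As a starting point for that least-privilege pass, the permission set looks roughly like this (a sketch — the ARNs, bucket prefix, and role name are illustrative, and the wildcards should be tightened for your environment):

```hcl
# Illustrative sketch of the permissions a Terragrunt execution role
# needs for state management -- a starting point, not an audited
# least-privilege policy.
data "aws_iam_policy_document" "terragrunt_state" {
  statement {
    sid = "StateBucket"
    actions = [
      "s3:CreateBucket",
      "s3:PutBucketVersioning",
      "s3:GetBucketVersioning",
      "s3:GetObject",
      "s3:PutObject",
      "s3:ListBucket",
    ]
    resources = [
      "arn:aws:s3:::my-terraform-state-*",
      "arn:aws:s3:::my-terraform-state-*/*",
    ]
  }

  statement {
    sid = "LockTable"
    actions = [
      "dynamodb:CreateTable",
      "dynamodb:DescribeTable",
      "dynamodb:GetItem",
      "dynamodb:PutItem",
      "dynamodb:DeleteItem",
    ]
    resources = ["arn:aws:dynamodb:*:*:table/my-lock-table"]
  }

  statement {
    sid       = "AssumeDeployRoles"
    actions   = ["sts:AssumeRole"]
    resources = ["arn:aws:iam::*:role/TerraformDeployRole"]
  }
}
```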
FAQ
How much does Terragrunt cost compared to vanilla Terraform?
Both are free and open source. Terragrunt is Apache 2.0 licensed (same as Terraform/OpenTofu). Your only cost is the AWS resources—S3 buckets and DynamoDB tables for state management, which you’d be paying for anyway with vanilla Terraform.
Can I migrate existing Terraform projects to Terragrunt?
Yes, and here’s the thing about migration—you don’t have to do it all at once. Start by adding a root terragrunt.hcl with your remote_state configuration, then gradually migrate modules one environment at a time. Terragrunt can work alongside vanilla Terraform since it’s just pointing at your existing .tf files. I’ve successfully run hybrid setups where some environments use Terragrunt while others still use vanilla Terraform during transition periods. The Terragrunt migration guide walks through incremental adoption strategies, but honestly, the path depends on your specific setup. Teams with well-organized directory structures can migrate in a weekend; monolithic “terralith” projects might take weeks to break apart properly.
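In a hybrid setup, the incremental step can be as small as dropping a minimal `terragrunt.hcl` next to an existing module’s `.tf` files (a sketch — the path is illustrative, and it assumes a root `terragrunt.hcl` with your `remote_state` config already exists above it):

```hcl
# accounts/customer-a/dev/vpc/terragrunt.hcl -- sketch for incremental
# adoption: point Terragrunt at the .tf files already in this directory.
include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "."
}
```

From there you can migrate one environment at a time while untouched environments keep running vanilla `terraform` commands.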
Does Terragrunt work with Terraform Cloud or Enterprise?
No. Don’t try to make this work. Terraform Cloud expects vanilla Terraform and provides its own state management and workspace features. Running Terragrunt locally to generate configs and then pushing to TFC defeats the purpose of both tools.
What’s the performance impact of the Terragrunt wrapper?
Minimal—we’re talking 1-3 seconds of overhead per run. Terragrunt’s tasks (checking if S3 bucket exists, assuming IAM role, generating backend config) are just AWS API calls that complete quickly. The actual terraform plan and apply operations dominate execution time—if your Terraform run takes 5 minutes, Terragrunt adds maybe 2 seconds to that. I’ve never had a team complain about Terragrunt being slow; they complain about Terraform being slow, which is a different problem entirely.
Can I use Terragrunt with OpenTofu instead of Terraform?
Yes, Terragrunt fully supports OpenTofu. Since OpenTofu is a Terraform fork maintaining backward compatibility, Terragrunt works identically with both. Just install OpenTofu and Terragrunt will detect and use it automatically—no configuration changes needed.
Sources:
- Multi-account AWS deployments with Terragrunt – DEV Community
- We Cut AWS Onboarding from 7 Days to 1 Hour with Terragrunt – DEV Community
- Keep your remote state configuration DRY – Terragrunt
- Terragrunt vs. Terraform – Comparison & When to Use – Spacelift
- Understanding Terraform and Terragrunt: A Detailed Guide – Medium
- Terragrunt: how to keep your Terraform code DRY and maintainable – Gruntwork Blog
- How to Manage Multiple Terraform Environments Efficiently – Spacelift
- Terraform Pain Points – Jonathan Bergknoff
- State Backend – Terragrunt
- Add Automatic Remote State Locking and Configuration to Terraform with Terragrunt – Gruntwork Blog
- Terraform AWS Provider Multi-Account Authentication – Medium
- Authentication – Terragrunt
- Terragrunt dependencies and mock outputs – Medium
- Managing 50 Terraform Environments: Why Teams Switch to Terragrunt – Medium
- Advantages and Limitations of Terragrunt-Managed Backends – BTI360
- Step 6: Breaking the Terralith Further – Terragrunt
