Terragrunt for Multi-Account AWS: Stop Copy-Pasting Backend Configs

Reading time: 16 minutes

Managing Terraform across multiple AWS accounts feels like fighting the tool instead of using it. You’re copy-pasting backend configurations, manually creating S3 buckets, and praying you don’t accidentally deploy dev infrastructure to production.

After managing infrastructure for 50+ client environments across multiple AWS accounts, I’ve learned that vanilla Terraform wasn’t designed for this scale. It works fine for a single account with a couple of environments—but once you’re juggling 10+ accounts with dev/staging/prod in each, the copy-paste hell becomes untenable fast.

Terragrunt is a thin wrapper around Terraform that adds automatic state management, DRY configuration, and built-in multi-account support. It eliminates backend configuration duplication, automatically creates S3 buckets and DynamoDB tables for state locking, and handles AWS role assumption without manual credential juggling. For teams managing 3+ AWS accounts, it typically reduces deployment setup time from days to hours.

⚠️ CRITICAL COMPATIBILITY NOTE

If you’re using Terraform Cloud or Terraform Enterprise, stop here. Terragrunt is incompatible with TFC/TFE workflows. Terraform Cloud already provides its own remote state management, workspace handling, and backend configuration—Terragrunt’s core value propositions. Attempting to run Terragrunt locally and push to TFC defeats the purpose of both tools. This article is for teams using vanilla Terraform or OpenTofu with self-managed state backends.


A Day in the Life: Before and After Terragrunt

Monday morning without Terragrunt:

Your PM wants a new staging environment for ClientCorp in us-west-2. You spend 20 minutes copying backend.tf and providers.tf from the existing us-east-1 setup, manually editing bucket names, region strings, and account IDs. You fat-finger the DynamoDB table name—clientcorp-staging-locks instead of clientcorp-staging-terraform-locks—but don’t notice yet.

You run terraform init. It fails. The S3 bucket doesn’t exist.

Right. You open the AWS console, manually create clientcorp-staging-terraform-state-us-west-2, enable versioning, add encryption tags. Then create the DynamoDB table. Realize you typo’d the table name in the backend config. Fix it. Re-run init.

It works. You run terraform apply. It asks for AWS credentials.

You remember you need to assume the ClientCorp deployment role. You dig through your notes to find the role ARN, run aws sts assume-role, copy the credentials from the JSON output, export them as environment variables. The session expires in an hour, so you’d better work fast.

Two hours have passed. You haven’t deployed anything yet.

Monday morning with Terragrunt:

Your PM wants a new staging environment for ClientCorp in us-west-2. You cd into accounts/clientcorp, create a new us-west-2 directory, copy the existing region.hcl file, change one line: region = "us-west-2".

You run terragrunt run-all apply from the root.

Terragrunt creates the S3 bucket automatically. Creates the DynamoDB table. Assumes the IAM role. Generates the backend config. Deploys your VPC, security groups, and RDS instance in dependency order.

Fifteen minutes. Done.

You get coffee and answer Slack messages while it runs.


What Is Terragrunt?

Terragrunt is an open-source orchestration tool (Apache 2.0 license) that wraps Terraform to solve configuration repetition and state management complexity. Unlike Terraform’s workspaces—which HashiCorp explicitly doesn’t recommend for multi-environment management—Terragrunt provides environment isolation through separate state files while keeping your code DRY through inheritance.

Here’s what it’s not: a Terraform replacement.

You’re still writing .tf files. Terragrunt just adds a thin terragrunt.hcl layer that handles the boring, error-prone stuff like backend setup and variable passing.
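Here’s roughly what that layer looks like for a single unit; the module path and inputs below are illustrative placeholders, not from a real project:

```hcl
# terragrunt.hcl (one unit) -- everything else stays plain Terraform
terraform {
  # hypothetical local module; a registry source works the same way
  source = "../modules/vpc"
}

inputs = {
  name = "example-vpc"
  cidr = "10.0.0.0/16"
}
```

The unit file points at a module and supplies inputs; backend and provider wiring come from Terragrunt, as shown throughout the rest of this article.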

The tool came from Gruntwork (infrastructure-as-code consultancy) after they spent years dealing with the same copy-paste problems across client projects. It’s been actively maintained since 2016, with version compatibility for Terraform 0.12+, 1.x, and OpenTofu.


Why Vanilla Terraform Gets Messy Fast

Let me show you the actual problems you hit when managing multiple AWS accounts with pure Terraform. You know these pain points if you’ve been there.

Backend Configuration Hell

Terraform’s backend configuration doesn’t support variables, expressions, or functions. This means for every environment in every account, you’re writing this:

# accounts/customer-a/dev/main.tf
terraform {
  backend "s3" {
    bucket         = "customer-a-dev-terraform-state"
    key            = "vpc/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "customer-a-dev-terraform-locks"
    encrypt        = true
  }
}

Then copying it to:

# accounts/customer-a/staging/main.tf
terraform {
  backend "s3" {
    bucket         = "customer-a-staging-terraform-state"  # Changed
    key            = "vpc/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "customer-a-staging-terraform-locks"  # Changed
    encrypt        = true
  }
}

And again for production. And again for customer B. And again for customer C.

Think about maintaining that across 50 accounts. I’ll wait.

Now you’ve got dozens of near-identical backend blocks with slightly different names. When you want to add a new S3 bucket parameter? You’re editing every one of those files. Miss one and you’ve got inconsistent state management across your infrastructure.

Manual State Storage Setup

Before you can run terraform init, you need to:

  1. Manually create the S3 bucket with the exact name from your backend config
  2. Enable versioning on that bucket
  3. Create a DynamoDB table for state locking with a primary key named LockID
  4. Set up appropriate IAM permissions
  5. Remember to do all of this for every. single. environment.

Every single one.
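For context, steps 1 through 3 of that checklist look something like this with the AWS CLI; the bucket and table names are examples, not a real environment:

```shell
# Manual state bootstrap for ONE environment (names are examples)
aws s3api create-bucket \
  --bucket customer-a-dev-terraform-state \
  --region us-east-1

aws s3api put-bucket-versioning \
  --bucket customer-a-dev-terraform-state \
  --versioning-configuration Status=Enabled

# The primary key MUST be a string attribute named LockID
aws dynamodb create-table \
  --table-name customer-a-dev-terraform-locks \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region us-east-1
```

Multiply that by every environment in every account, and remember that a typo in any name silently breaks the backend config that references it.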

According to Spacelift’s analysis, state management in Terraform is “a sensitive topic” when it comes to multiple environments—if all environments share the same state file, there’s significant risk of conflicts and accidental overwrites.

I’ve seen teams spend 7+ days setting up a new customer account because they’re manually provisioning all this state infrastructure. One case study from a team managing multi-account AWS reported cutting new environment onboarding from 7 days to 1 hour after switching to Terragrunt’s automatic backend creation—though your mileage will vary depending on how complex your environment setup is.

Multi-Account Authentication Complexity

The most secure way to manage AWS infrastructure is to segment it between multiple AWS accounts—AWS themselves recommend this in their best practices. But Terraform doesn’t understand AWS SSO sessions or profile chaining inside .aws/config.

Your options with vanilla Terraform? None of them great:

  1. Manually assume roles before each run using AWS CLI, export credentials as environment variables, then run Terraform. Sessions expire after an hour, so you get to do this dance repeatedly throughout the day.
  2. Use aws-vault or similar credential management tools—which means adding another dependency to your stack and training your team on yet another tool.
  3. Configure assume_role in the provider block for each account. This works until you realize all runs must use the same IAM role, which becomes problematic fast when different users need different access levels.
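For reference, option 3 looks like this in the provider block; the role ARN is a placeholder:

```hcl
# providers.tf -- one fixed role baked into the provider (illustrative ARN)
provider "aws" {
  region = "us-east-1"

  assume_role {
    role_arn     = "arn:aws:iam::123456789012:role/TerraformDeployRole"
    session_name = "terraform"
  }
}
```

Every run through this configuration uses that one role, regardless of who is deploying, which is exactly the limitation described above.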

As noted in Hector’s comprehensive multi-account authentication guide, “the downside to managing role assumption with the AWS provider is that all runs must be performed with the same IAM role.”

Workspace Limitations

Terraform’s built-in workspaces seem like they’d solve this—but they don’t. Even HashiCorp doesn’t recommend using workspaces as the sole solution for managing environments.

Here’s where workspaces fall apart:

  • All workspaces share the same backend — zero isolation between environments. Your dev and prod state files live in the same bucket.
  • Navigation nightmare — try figuring out what’s deployed where across 50 workspaces without a spreadsheet.
  • No configuration flexibility — you can’t use different instance types for dev and prod. Same config, different workspace name.
  • Deployment roulette — accidentally deploying to prod when you thought you were in the dev workspace is a resume-generating event.
  • Single backend constraint — can’t use different S3 buckets or regions per environment.

According to Spacelift’s comparison, workspaces are “good for local development and testing purposes, but not recommended for production-grade deployments.”

Module Dependency Nightmares

In vanilla Terraform, you can only share data between modules using terraform_remote_state. This creates tight coupling—your VPC module needs to know the exact S3 bucket and state file path of your networking module. Change your state configuration? Update all the remote state references.

You know what’s wild? You can’t run terraform plan on a module that depends on another module’s outputs unless the dependency has already been applied. This breaks validation in CI/CD when you’re testing fresh infrastructure. Your pipeline needs actual deployed infrastructure to validate configurations for new infrastructure. Think about that for a second.
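Concretely, the coupling looks like this; the bucket and key values are illustrative, and they must exactly match the producer module’s backend configuration:

```hcl
# Consumer module must hard-code the producer's exact state location
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "customer-a-dev-terraform-state"  # breaks if the bucket changes
    key    = "vpc/terraform.tfstate"           # breaks if the key changes
    region = "us-east-1"
  }
}

# Referenced as: data.terraform_remote_state.vpc.outputs.vpc_id
```

And because the data source reads real state, this block fails during plan until the VPC module has actually been applied somewhere.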


How Terragrunt Solves Multi-Account AWS Management

Here’s what Terragrunt actually does (and how it makes multi-account management sane).

Automatic State Management

Terragrunt’s remote_state block handles everything automatically. One configuration:

# terragrunt.hcl (root level)
remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
  config = {
    bucket         = "my-terraform-state-${local.account_name}-${local.aws_region}"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = local.aws_region
    encrypt        = true
    dynamodb_table = "my-lock-table"
  }
}

When you run terragrunt init or terragrunt apply, Terragrunt:

  1. Checks if the S3 bucket exists — if not, creates it automatically with versioning enabled
  2. Checks if the DynamoDB table exists — if not, creates it with the correct LockID primary key
  3. Generates a backend.tf file in your module directory with the interpolated values
  4. Runs terraform init with the correct backend configuration

According to Terragrunt’s official documentation, this automatic creation is supported for S3 and GCS backends. The Gruntwork blog confirms this eliminates “the bootstrapping problem of how to create and manage the underlying storage resources.”

No more manual bucket creation. No more “forgot to enable versioning” incidents. No more copying backend blocks.
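For a module at accounts/customer-a/us-east-1/dev/vpc, the generated backend.tf would look something like this, with the values interpolated from the root config above (names illustrative):

```hcl
# backend.tf -- generated by Terragrunt; do not edit by hand
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-customer-a-us-east-1"
    key            = "accounts/customer-a/us-east-1/dev/vpc/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "my-lock-table"
  }
}
```

It’s the same backend block you’d otherwise copy-paste by hand, except Terragrunt writes it for you with the correct per-module values.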

| Step | Manual Terraform (7+ steps) | Terragrunt (1 command) |
|------|-----------------------------|------------------------|
| 1 | Create S3 bucket in AWS console | terragrunt init or terragrunt apply |
| 2 | Enable bucket versioning | Checks if S3 bucket exists — creates it if not |
| 3 | Create DynamoDB table | Checks if DynamoDB table exists — creates it if not |
| 4 | Set LockID primary key | Generates backend.tf automatically |
| 5 | Configure IAM permissions | Runs terraform init with correct config |
| 6 | Write backend.tf manually | Done. |
| 7 | Run terraform init | |
| Time | 30-60 minutes per environment | 2 minutes per environment |
| Error risk | High — typos in bucket names, forgotten versioning, wrong table keys | Low — deterministic, repeatable automation |

DRY Configuration with Include Blocks

Terragrunt’s include block lets you define common configuration once and inherit it everywhere:

# Root terragrunt.hcl
remote_state { ... }  # Defined once

locals {
  account_vars = read_terragrunt_config(find_in_parent_folders("account.hcl"))
  region_vars  = read_terragrunt_config(find_in_parent_folders("region.hcl"))
  env_vars     = read_terragrunt_config(find_in_parent_folders("env.hcl"))
}

inputs = merge(
  local.account_vars.locals,
  local.region_vars.locals,
  local.env_vars.locals,
)

Then in each environment:

# accounts/customer-a/us-east-1/prod/vpc/terragrunt.hcl
include "root" {
  path = find_in_parent_folders()
}

terraform {
  source = "tfr:///terraform-aws-modules/vpc/aws?version=5.1.2"
}

inputs = {
  name = "customer-a-prod-vpc"
  cidr = "10.0.0.0/16"
}

That’s it. The backend configuration, region settings, account details—all inherited from the root terragrunt.hcl. When you need to update something globally (like adding a new S3 tag), you change it in one place.

One place. Not 47 files scattered across your directory tree.

According to Medium’s detailed Terragrunt guide, “Terragrunt offers built-in support for DRY configurations using its include and dependency blocks, significantly reducing boilerplate and improving maintainability.”

In my experience managing 50+ client environments, the code reduction is substantial—I’ve seen projects go from 2,000+ lines of Terraform to approximately 500 lines of Terragrunt configuration, though this ratio varies based on how much duplication your original setup had.

Multi-Account Authentication Made Simple

Terragrunt handles AWS role assumption automatically with the --terragrunt-iam-role flag or iam_role configuration:

# accounts/customer-a/account.hcl
locals {
  account_id = "123456789012"
  iam_role   = "arn:aws:iam::123456789012:role/TerraformDeployRole"
}

# Root terragrunt.hcl
locals {
  account_vars = read_terragrunt_config(find_in_parent_folders("account.hcl"))
}

iam_role = local.account_vars.locals.iam_role

Terragrunt calls the sts:AssumeRole API on your behalf and exposes credentials as environment variables when running Terraform. According to Terragrunt’s authentication documentation, this provides “fresh credentials on every run without the complexity of calling assume-role yourself.”

No more manual credential juggling. No need for aws-vault. No credential files written to disk in plaintext.

With Terragrunt (automatic, every run):

  1. You run terragrunt apply
  2. Terragrunt reads iam_role from account.hcl
  3. Terragrunt calls AWS STS AssumeRole API
  4. STS returns temporary credentials
  5. Terragrunt sets AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN
  6. Terraform executes with the assumed role
  7. Infrastructure deployed. You didn’t touch a credential.

Without Terragrunt (manual, expires every hour):

  1. Run aws sts assume-role --role-arn arn:aws:iam::123456789012:role/Deploy
  2. Copy AccessKeyId, SecretAccessKey, SessionToken from JSON output
  3. export AWS_ACCESS_KEY_ID=AKIA...
  4. export AWS_SECRET_ACCESS_KEY=wJalr...
  5. export AWS_SESSION_TOKEN=FwoGZX...
  6. Run terraform apply — quickly, you have 60 minutes
  7. Credentials expire mid-apply? Start over from step 1

Dependency Management Between Modules

Terragrunt’s dependency block creates explicit dependencies between modules and passes outputs automatically:

# database/terragrunt.hcl
dependency "vpc" {
  config_path = "../vpc"

  mock_outputs = {
    vpc_id     = "vpc-12345678"
    subnet_ids = ["subnet-12345678", "subnet-87654321"]
  }
  mock_outputs_allowed_terraform_commands = ["validate", "plan"]
}

inputs = {
  vpc_id     = dependency.vpc.outputs.vpc_id
  subnet_ids = dependency.vpc.outputs.subnet_ids
}

The mock_outputs attribute solves a critical problem: you can run terragrunt plan on a fresh setup where nothing’s deployed yet. According to Medium’s dependency guide, “Terragrunt will return an error if the unit referenced in a dependency block has not been applied yet… mock outputs correspond to a map that will be injected in place of actual dependency outputs.”

When you run terragrunt run-all apply, Terragrunt builds the dependency graph and deploys modules in the correct order automatically. Your VPC gets created before your database. No more manual sequencing.

| Deploy Order | Module | Depends On | Runs In Parallel With |
|--------------|--------|------------|-----------------------|
| 1st | VPC | (none) | |
| 2nd | Security Groups | VPC | |
| 3rd | RDS Database | VPC + Security Groups | Application Load Balancer |
| 3rd | Application Load Balancer | VPC + Security Groups | RDS Database |
| 4th | ECS Service | ALB + RDS | |

terragrunt run-all apply builds this dependency graph automatically from your dependency blocks. Independent modules at the same level (like RDS and ALB above) run in parallel. No manual sequencing required.


Real-World Example: Before and After

Let me show you the actual difference managing three environments across two customer accounts.

Before (Vanilla Terraform)

Directory structure:

customer-a/
  dev/
    backend.tf       # 15 lines, manually copied
    providers.tf     # 25 lines, manually copied
    vpc/main.tf
    rds/main.tf
  staging/
    backend.tf       # 15 lines, manually copied
    providers.tf     # 25 lines, manually copied
    vpc/main.tf
    rds/main.tf
  prod/
    backend.tf       # 15 lines, manually copied
    providers.tf     # 25 lines, manually copied
    vpc/main.tf
    rds/main.tf
customer-b/
  # Repeat the entire structure...

| Metric | Value |
|--------|-------|
| Total lines of Terraform code | ~1,800 |
| Manually duplicated backend configs | 6 |
| Manually created S3 state buckets | 6 |
| Time to onboard new customer | 5-7 days |

Manual steps for each environment:

  1. Create S3 bucket with correct naming convention
  2. Enable bucket versioning (easy to forget)
  3. Create DynamoDB table for locking with the exact right primary key name
  4. Assume IAM role manually before each deployment—and remember to re-assume when it expires mid-apply
  5. Copy backend and provider configs from another environment
  6. Update all the hardcoded values (bucket names, regions, account IDs)
  7. Run terraform init and pray
  8. Discover you typo’d something in step 6, fix it, start over

After (Terragrunt)

Directory structure:

terragrunt.hcl       # Root config - 40 lines
accounts/
  customer-a/
    account.hcl      # 5 lines (account_id, iam_role)
    us-east-1/
      region.hcl     # 3 lines (region)
      dev/
        env.hcl      # 5 lines (environment vars)
        vpc/terragrunt.hcl        # 10 lines
        rds/terragrunt.hcl        # 15 lines
      staging/
        env.hcl
        vpc/terragrunt.hcl
        rds/terragrunt.hcl
      prod/
        env.hcl
        vpc/terragrunt.hcl
        rds/terragrunt.hcl
  customer-b/
    # Same structure, different account.hcl

| Metric | Value |
|--------|-------|
| Total lines of Terragrunt config | ~450 |
| Backend configs | 1 (inherited everywhere) |
| State buckets | Created automatically on first apply |
| Time to onboard new customer | 1-2 hours |

Automated by Terragrunt:

  • S3 bucket creation with versioning
  • DynamoDB table creation
  • IAM role assumption per account
  • Backend configuration generation
  • Dependency ordering

| Layer | File | What It Defines | Inherited By |
|-------|------|-----------------|--------------|
| Root | terragrunt.hcl | remote_state config, common settings | Everything below |
| Account | account.hcl | account_id, iam_role | All regions/envs in that account |
| Region | region.hcl | region: us-east-1 | All environments in that region |
| Environment | env.hcl | environment: dev/staging/prod | All modules in that env |
| Module | vpc/terragrunt.hcl | Module source + inputs | (leaf node — inherits everything above) |

How inheritance works: Each terragrunt.hcl at the module level calls find_in_parent_folders() which walks up the directory tree, merging account.hcl + region.hcl + env.hcl into a single configuration. Change the root remote_state block once → every module across every account picks it up automatically.
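As a sketch, the three layer files for the example tree above might contain the following; all values are illustrative:

```hcl
# accounts/customer-a/account.hcl
locals {
  account_name = "customer-a"
  account_id   = "123456789012"
  iam_role     = "arn:aws:iam::123456789012:role/TerraformDeployRole"
}

# accounts/customer-a/us-east-1/region.hcl
locals {
  aws_region = "us-east-1"
}

# accounts/customer-a/us-east-1/dev/env.hcl
locals {
  environment = "dev"
}
```

Each file holds only what changes at its level, which is why onboarding a new region or environment means creating one tiny file instead of copying a whole tree.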

Without Terragrunt: You’d copy-paste that backend config into 30+ files and manually update each one when something changes.

| Metric | Vanilla Terraform | Terragrunt | Improvement |
|--------|-------------------|------------|-------------|
| Config lines | ~1,800 | ~450 | 75% reduction |
| Backend configs | 6 duplicated blocks | 1 inherited block | 83% less duplication |
| New customer onboarding | 5-7 days | 1-2 hours | ~90% faster |
| State bucket creation | Manual (per environment) | Automatic | Eliminated entirely |
| IAM role handling | Manual export + hourly refresh | Automatic per account | Eliminated entirely |

In this specific example, that’s a 75% reduction in configuration code and an estimated 90%+ reduction in setup time based on eliminating manual state infrastructure provisioning.

That’s not optimization. That’s elimination of entire categories of work.

According to a case study from AWS Builders, organizations using Terragrunt for multi-account management report similar improvements in deployment velocity and reduced configuration errors.


When You Actually Need Terragrunt

Terragrunt isn’t always the answer (nothing is). Here’s when it makes sense—and when it doesn’t.

Use Terragrunt When:

  • Managing 3+ AWS accounts — the setup overhead pays off once you’re juggling multiple accounts. Below 3? You’re probably fine with vanilla Terraform.
  • Multiple environments per account — dev/staging/prod multiplied across accounts means lots of duplication
  • You’re tired of backend configuration hell — if you’ve copy-pasted backend "s3" blocks more than 10 times, you need this. If you’ve done it 50+ times, you needed this yesterday.
  • Team deployments — multiple people deploying infrastructure across accounts benefit from consistent, automated state management
  • You value DRY principles — if you maintain Terraform modules and hate duplicating inputs across environments

Skip Terragrunt When:

  • Single account with 1-2 environments — vanilla Terraform is fine here, don’t overcomplicate
  • You’re just learning Terraform — master the core tool first, then layer on orchestration
  • Using Terraform Cloud/Enterprise — these tools provide their own state management and workspace handling
  • Your team won’t learn another DSL — Terragrunt adds HCL configuration on top of Terraform’s HCL, which is a learning curve

| Your Situation | Recommendation |
|----------------|----------------|
| 1-2 AWS accounts, simple setup | Vanilla Terraform is fine |
| 3+ AWS accounts, multiple envs | Use Terragrunt |
| Using Terraform Cloud/Enterprise | Don’t use Terragrunt (incompatible) |
| Learning Terraform for the first time | Master Terraform first, add Terragrunt later |
| Team of 3+ deploying infrastructure | Use Terragrunt for consistency |
| Running OpenTofu instead of Terraform | Use Terragrunt (fully compatible) |

Don’t use tools you don’t need.

According to Spacelift’s comparison, “Terraform Workspaces are good when you only juggle a handful of modules and environments. Terragrunt tends to pay off as your estate grows—you have more than a small number of stacks or multiple teams managing infrastructure.”

What About Alternatives?

Let me be clear: there are other approaches, but they all have tradeoffs.

| Alternative | Solves Backend Duplication? | Solves State Setup? | Solves Multi-Account Auth? | Ecosystem Size |
|-------------|-----------------------------|---------------------|----------------------------|----------------|
| Terraform Workspaces | No — shared backend, zero isolation | No | No | Built-in (limited) |
| Atlantis | No | No | Partial — PR automation | Medium |
| Terraspace | Yes | Yes | Yes | Small (Ruby-based) |
| Terragrunt | Yes | Yes | Yes | Large (Go, since 2016) |
| Manual scripts | Partial — you build it yourself | Partial | Partial | N/A (custom) |

HashiCorp themselves don’t recommend using workspaces for multi-environment management. Atlantis is great for PR automation but doesn’t solve backend duplication. Terraspace has similar goals but a smaller ecosystem. Manual scripting means rebuilding what Terragrunt already does—and you’ll spend months getting it right.


Common Gotchas and How to Avoid Them

After deploying Terragrunt across 50+ environments, here are the mistakes I’ve seen (and made):

1. Confusion Between generate and remote_state Blocks

Use remote_state—it automatically creates S3 buckets and DynamoDB tables. The generate block just creates a backend.tf file without provisioning infrastructure, which defeats the point. According to BTI360’s analysis, “the generate block introduces a bootstrapping problem: how do you create and manage the underlying storage resources?”

2. Overusing Mock Outputs

Here’s where I’ve seen teams shoot themselves in the foot: mock_outputs let you run plan and validate on fresh infrastructure, which is genuinely useful for CI/CD. But they can mask real configuration issues if you’re not careful.

I worked with a team that had mocked VPC outputs (vpc_id, subnet_ids) that didn’t match the actual module’s output structure. Their CI pipeline ran terragrunt plan successfully for weeks using the mocks—everything looked green. Then they tried to deploy a new environment and the apply failed immediately because the real VPC module returned private_subnet_ids and public_subnet_ids as separate outputs, not a combined subnet_ids list.

Best practice: Use mock_outputs_allowed_terraform_commands = ["validate", "plan"] to restrict when mocks are active. Never rely on mocks for actual deployments. Think of them as training wheels for CI validation, not production guardrails.

3. Not Understanding Dependency Execution Order

When you run terragrunt run-all apply, Terragrunt respects dependencies but runs independent modules in parallel by default. Got 10 VPCs across different accounts with no dependencies? Terragrunt tries to create all 10 simultaneously, which sounds great until you hit AWS API rate limits, overwhelm your CI/CD runner, or end up with interleaved logs that are impossible to debug.

Limit parallelism with:

terragrunt run-all apply --terragrunt-parallelism 3

I usually set this to 2-3 for large deployments. Yes, it’s slower, but you can actually read the output.

4. Forgetting to Define path_relative_to_include()

This function is non-negotiable for unique state file keys:

key = "${path_relative_to_include()}/terraform.tfstate"

Without it, every environment tries to write to the same state file. Ask me how I know. (Spoiler: I spent 3 hours debugging why my dev environment kept deploying prod infrastructure before I realized dev and prod were sharing the same state file key in S3. Every terraform apply in dev was overwriting prod’s state. Do not recommend.)
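To make the failure mode concrete, here is how the interpolated key expands for two sibling modules; the paths are illustrative:

```hcl
# With the root terragrunt.hcl at the repo top, path_relative_to_include()
# yields a distinct key per module:
#
#   accounts/customer-a/us-east-1/dev/vpc
#     -> key = "accounts/customer-a/us-east-1/dev/vpc/terraform.tfstate"
#   accounts/customer-a/us-east-1/prod/vpc
#     -> key = "accounts/customer-a/us-east-1/prod/vpc/terraform.tfstate"
#
# Hard-code the key instead, and both environments write the same S3 object.
key = "${path_relative_to_include()}/terraform.tfstate"
```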

5. IAM Role Permissions

Your Terragrunt execution role needs permissions to create S3 buckets (with versioning), create DynamoDB tables, assume other IAM roles, and read/write state files. Missing any of these? You’ll get cryptic AccessDenied errors during terragrunt init that don’t tell you which permission is missing. Start with AdministratorAccess in dev to verify your setup works, then lock it down to least-privilege for production.
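As a non-authoritative starting point, a policy covering those four permission areas might look like the sketch below; the bucket prefix, table name, and role names are placeholders you should tighten for your own setup:

```hcl
# Sketch of a starting-point policy for the Terragrunt execution role
# (ARNs and names are placeholders; scope them down for production)
resource "aws_iam_policy" "terragrunt_state" {
  name = "terragrunt-state-management"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid      = "StateBuckets"
        Effect   = "Allow"
        Action   = ["s3:CreateBucket", "s3:PutBucketVersioning", "s3:GetBucketVersioning", "s3:ListBucket", "s3:GetObject", "s3:PutObject"]
        Resource = ["arn:aws:s3:::my-terraform-state-*", "arn:aws:s3:::my-terraform-state-*/*"]
      },
      {
        Sid      = "LockTable"
        Effect   = "Allow"
        Action   = ["dynamodb:CreateTable", "dynamodb:DescribeTable", "dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem"]
        Resource = "arn:aws:dynamodb:*:*:table/my-lock-table"
      },
      {
        Sid      = "AssumeDeployRoles"
        Effect   = "Allow"
        Action   = "sts:AssumeRole"
        Resource = "arn:aws:iam::*:role/TerraformDeployRole"
      }
    ]
  })
}
```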


FAQ

How much does Terragrunt cost compared to vanilla Terraform?

Both are free and open source. Terragrunt is Apache 2.0 licensed (same as Terraform/OpenTofu). Your only cost is the AWS resources—S3 buckets and DynamoDB tables for state management, which you’d be paying for anyway with vanilla Terraform.

Can I migrate existing Terraform projects to Terragrunt?

Yes, and here’s the thing about migration—you don’t have to do it all at once. Start by adding a root terragrunt.hcl with your remote_state configuration, then gradually migrate modules one environment at a time. Terragrunt can work alongside vanilla Terraform since it’s just pointing at your existing .tf files. I’ve successfully run hybrid setups where some environments use Terragrunt while others still use vanilla Terraform during transition periods. The Terragrunt migration guide walks through incremental adoption strategies, but honestly, the path depends on your specific setup. Teams with well-organized directory structures can migrate in a weekend; monolithic “terralith” projects might take weeks to break apart properly.

Does Terragrunt work with Terraform Cloud or Enterprise?

No. Don’t try to make this work. Terraform Cloud expects vanilla Terraform and provides its own state management and workspace features. Running Terragrunt locally to generate configs and then pushing to TFC defeats the purpose of both tools.

What’s the performance impact of the Terragrunt wrapper?

Minimal—we’re talking 1-3 seconds of overhead per run. Terragrunt’s tasks (checking if S3 bucket exists, assuming IAM role, generating backend config) are just AWS API calls that complete quickly. The actual terraform plan and apply operations dominate execution time—if your Terraform run takes 5 minutes, Terragrunt adds maybe 2 seconds to that. I’ve never had a team complain about Terragrunt being slow; they complain about Terraform being slow, which is a different problem entirely.

Can I use Terragrunt with OpenTofu instead of Terraform?

Yes, Terragrunt fully supports OpenTofu. Since OpenTofu is a Terraform fork maintaining backward compatibility, Terragrunt works identically with both. Just install OpenTofu and Terragrunt will detect and use it automatically—no configuration changes needed.

