When adopting Terraform Cloud for Azure, one of the first friction points is setting up your infrastructure-as-code (IaC) environment in a consistent and repeatable way.

This includes organizing your repository, defining reusable modules, configuring Azure permissions, and setting up federated identity credentials through Entra ID (formerly Azure AD). Doing this manually is slow and error-prone.

To simplify this, I’ve created a baseline template that helps bootstrap new Terraform stacks for Azure. It provides a structure for your repository, Terraform configuration, and a module to automate the setup of Entra ID applications and federated identity credentials. There are still a few manual steps — some “ClickOps” for now — but the heavy lifting is automated, and future work may evolve this into an AT-AT (Automate the Automation with Terraform) module.

Go check out the code here: https://github.com/Azure-Terraformer/terraform-stacks-bootstrap-baseline

Step 1: Create a GitHub Repo for your Stack

Start by creating a dedicated GitHub repository for your Terraform stack. While it may be tempting to use your application code repository, Terraform Cloud has strict file size limitations that make this impractical. A separate repo ensures a clean separation of concerns and helps avoid size issues.

Your root directory should include the following files:

variables.tf
providers.tf
deployments.tf
components.tf
.terraform-version

Simple Baseline Module

Each component is structured as a separate root module. I keep all these in a src/terraform folder, with each subfolder representing a logical unit of infrastructure. For simple solutions, I usually begin with a shared module containing baseline infrastructure. This pattern provides flexibility to grow as the system becomes more complex. The shared module includes boilerplate resources common to all environments.

data "azurerm_client_config" "current" {}

resource "azurerm_resource_group" "main" {
  name     = "rg-${var.application_name}-${var.environment_name}-shared"
  location = var.location

  tags = var.tags
}

resource "random_string" "suffix" {
  length  = 8
  lower   = true
  numeric = true
  upper   = false
  special = false
}

The only real outputs right now are the resource group name and the suffix.

output "resource_group_name" {
  value = azurerm_resource_group.main.name
}
output "resource_suffix" {
  value = random_string.suffix.result
}

As your shared infrastructure grows more complex, you can add outputs for whatever your downstream components might need, such as a Log Analytics Workspace, a Container Registry, or a Key Vault.
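
For example, if you later fold a Log Analytics Workspace into the shared module, it might look something like this (a minimal sketch; the naming pattern, SKU, and retention period are illustrative assumptions, not part of the baseline):

resource "azurerm_log_analytics_workspace" "main" {
  # Name and settings here are placeholders, not part of the baseline template
  name                = "law-${var.application_name}-${var.environment_name}-${random_string.suffix.result}"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  sku                 = "PerGB2018"
  retention_in_days   = 30

  tags = var.tags
}

output "log_analytics_workspace_id" {
  value = azurerm_log_analytics_workspace.main.id
}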

Defining Components

My components configuration simply stamps out this shared module. It is referenced in a component block like this:

component "shared" {
  source = "./src/terraform/shared"

  inputs = {
    location         = var.primary_location
    application_name = var.application_name
    environment_name = var.environment_name
    tags             = var.tags
  }

  providers = {
    azurerm = provider.azurerm.this
    random  = provider.random.this
  }
}
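
Downstream components can then consume the shared component's outputs directly. Here is a sketch of a hypothetical app component (the ./src/terraform/app module and its input variables are assumptions for illustration, not part of the baseline):

component "app" {
  # Hypothetical downstream component that consumes the shared component's outputs
  source = "./src/terraform/app"

  inputs = {
    resource_group_name = component.shared.resource_group_name
    resource_suffix     = component.shared.resource_suffix
    location            = var.primary_location
    tags                = var.tags
  }

  providers = {
    azurerm = provider.azurerm.this
  }
}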

Define Deployments

I define two deployments — dev and prod. Each deployment specifies its own configuration, while shared values like the tenant ID and application name are declared as locals.

identity_token "azurerm" {
  audience = ["api://AzureADTokenExchange"]
}

locals {
  client_id        = "00000000-0000-0000-0000-000000000000"
  tenant_id        = "00000000-0000-0000-0000-000000000000"
  primary_location = "westus3"
  application_name = "foobar-app1"
}

deployment "dev" {
  inputs = {
    identity_token   = identity_token.azurerm.jwt
    client_id        = local.client_id
    tenant_id        = local.tenant_id
    primary_location = local.primary_location
    application_name = local.application_name
    subscription_id  = "00000000-0000-0000-0000-000000000000"
    environment_name = "dev"

    tags = {
      application = local.application_name
      environment = "dev"
    }
  }
}

deployment "prod" {
  inputs = {
    identity_token   = identity_token.azurerm.jwt
    client_id        = local.client_id
    tenant_id        = local.tenant_id
    primary_location = local.primary_location
    application_name = local.application_name
    subscription_id  = "00000000-0000-0000-0000-000000000000"
    environment_name = "prod"

    tags = {
      application = local.application_name
      environment = "prod"
    }
  }
}

As you can see, the prod deployment is nearly identical, changing only the environment-specific values. Notice how I group the attributes that change per environment together and use local values to declare the attributes that stay the same across all environments. In most cases that is the Tenant ID and the Application Name. If you want to tighten security, you could also vary the Client ID used for each environment by changing the client_id input. However, to keep things simple for my bootstrapping, I'm reusing the same Entra ID application to manage my entire stack.
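
If you did want separate credentials per environment, a minimal sketch would be to replace the single client_id local with one per deployment (the IDs below are placeholders) and pass the matching value into each deployment's inputs:

locals {
  # Placeholder client IDs; each one would belong to its own Entra ID application
  dev_client_id  = "00000000-0000-0000-0000-000000000000"
  prod_client_id = "00000000-0000-0000-0000-000000000000"
}

deployment "dev" {
  inputs = {
    client_id = local.dev_client_id
    # ...the remaining inputs stay exactly as shown above
  }
}

Keep in mind that each additional client ID implies its own Entra ID application and its own set of federated identity credentials, which means repeating the bootstrap work in Step 2 for every environment.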

Provider and Variable Configuration

The provider configuration uses OIDC to authenticate with Azure using a federated identity token:

required_providers {
  azurerm = {
    source  = "hashicorp/azurerm"
    version = "~> 4.38.0"
  }
  random = {
    source  = "hashicorp/random"
    version = "~> 3.7.2"
  }
}

provider "azurerm" "this" {
  config {
    features {}

    use_cli = false

    use_oidc        = true
    oidc_token      = var.identity_token
    client_id       = var.client_id
    subscription_id = var.subscription_id
    tenant_id       = var.tenant_id
  }
}

provider "random" "this" {}

The corresponding variables are straightforward:

variable "application_name" {
  description = "Name of the application"
  type        = string
}

variable "environment_name" {
  description = "Name of the environment"
  type        = string
}

variable "tags" {
  description = "Tags for the resources"
  type        = map(string)
}

variable "identity_token" {
  type      = string
  ephemeral = true
}

variable "client_id" {
  type = string
}

variable "subscription_id" {
  type = string
}

variable "tenant_id" {
  type = string
}

variable "primary_location" {
  description = "Primary Region"
  type        = string
}

The last thing you need to do is run terraform stacks init from the root folder of the repo. This initializes the .terraform.lock.hcl file and ensures your providers are properly configured.

Step 2: Entra ID Application and Federated Identity Setup

With the code and deployments in place, the next step is to configure Entra ID. I created another root module for this, designed to run locally with the Terraform CLI. Eventually I'll probably expand it into its own AT-AT (Automate the Automation with Terraform) module, but for now it's a lightweight, stand-alone module that just sets up the Entra ID application and the necessary federated identity credentials.

The module sets up:

  • An Entra ID application
  • A service principal
  • Role assignments in target subscriptions
  • Federated identity credentials for Terraform Cloud

My module takes in these input variables:

variable "organization" {
  type = string
}
variable "application_name" {
  type = string
}
variable "service_name" {
  type = string
}
variable "dev_subscription" {
  type = string
}
variable "prod_subscription" {
  type = string
}

The Organization, Application Name, and Service Name are used to construct the Terraform Cloud organization, project, and stack that the federated identity credentials will target: the project takes the application name, and the stack name is the application name and service name joined with a hyphen.

It's important to keep track of these values, as they are critical for properly configuring the Entra ID federated identity credentials and must match what you create in Terraform Cloud later.

Your inputs should look something like this:

organization      = "foobar"
application_name  = "app1"
service_name      = "devops"
dev_subscription  = "00000000-0000-0000-0000-000000000000"
prod_subscription = "00000000-0000-0000-0000-000000000000"

You will need elevated privileges in Entra ID to run terraform apply on this module, because it touches several directory objects. At the very least, you'll need to be able to read and write applications, and you'll also need sufficient rights on the target subscriptions to create role assignments (for example, Owner or User Access Administrator).

resource "azuread_application" "main" {
  display_name = "hcp-${var.organization}-${var.application_name}-${var.service_name}-stack"
}

resource "azuread_service_principal" "main" {
  client_id = azuread_application.main.client_id
}

Next, I get a reference to the subscription and create a role assignment granting the application Owner on that subscription. This allows Terraform Cloud to exert its will over the Azure subscription.

data "azurerm_subscription" "dev" {
  subscription_id = var.dev_subscription
}

resource "azurerm_role_assignment" "dev_owner" {
  provider = azurerm.dev

  scope                = data.azurerm_subscription.dev.id
  role_definition_name = "Owner"
  principal_id         = azuread_service_principal.main.object_id
  principal_type       = "ServicePrincipal"
}

Next I will use my own Terraform Module that creates the Federated Identity Credentials for the Entra ID Application:

module "dev" {
  source  = "Azure-Terraformer/terraform-cloud-credential/azuread//modules/stacks/core-workflow"
  version = "1.0.1"

  application_id = azuread_application.main.id
  organization   = var.organization
  project        = var.application_name
  stack          = "${var.application_name}-${var.service_name}"
  deployment     = "dev"

}

This code is basically the same for the prod subscription. It's important to note that I had to define an azurerm provider block for each subscription I wanted to set up.

provider "azurerm" {
  alias = "dev"
  features {}
  subscription_id = var.dev_subscription
}

provider "azurerm" {
  alias = "prod"
  features {}
  subscription_id = var.prod_subscription
}
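
For completeness, here is a sketch of the prod side, mirroring the dev blocks above with the values swapped (the same pattern, just pointed at the prod subscription and the prod deployment):

data "azurerm_subscription" "prod" {
  subscription_id = var.prod_subscription
}

resource "azurerm_role_assignment" "prod_owner" {
  provider = azurerm.prod

  scope                = data.azurerm_subscription.prod.id
  role_definition_name = "Owner"
  principal_id         = azuread_service_principal.main.object_id
  principal_type       = "ServicePrincipal"
}

module "prod" {
  source  = "Azure-Terraformer/terraform-cloud-credential/azuread//modules/stacks/core-workflow"
  version = "1.0.1"

  application_id = azuread_application.main.id
  organization   = var.organization
  project        = var.application_name
  stack          = "${var.application_name}-${var.service_name}"
  deployment     = "prod"
}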

Step 3: Create the Stack in Terraform Cloud

With everything else in place, go to the Terraform Cloud UI and create a new stack under your organization and project. Make sure the organization, project, and stack names match the values you passed to the federated identity module in Step 2; otherwise the federated identity credentials won't match and authentication will fail. This is the final step that wires everything together: your repository, Entra ID credentials, and Terraform Cloud now work in sync.

Conclusion

Setting up Terraform Cloud for Azure doesn’t have to be tedious. By bootstrapping your projects with a well-structured codebase, shared infrastructure modules, and a dedicated Entra ID configuration, you can simplify onboarding and eliminate common errors. While some steps still require manual setup, this foundation paves the way for more advanced automation and a smoother developer experience.

Next steps? I’d love to convert the remaining ClickOps steps into an AT-AT module, which would give you a nearly fully automated, scalable approach to managing Azure with Terraform Cloud. To do that, I’d need to spend some time figuring out the Terraform provider for Terraform Cloud (or is it Terraform Enterprise? I’m at a bit of a loss, since I haven’t spent much time with either solution until my recent explorations into Terraform Stacks).
