In complex infrastructure automation setups, it’s common to break down deployments into multiple Terraform components or stacks. Each component might handle a separate responsibility — networking, compute, identity — and the outputs of one are often essential inputs for the next. Passing data cleanly and reliably between these components is critical for creating maintainable and scalable infrastructure-as-code workflows.

A Familiar Scenario

In one of my recent projects, I found myself needing to bridge two Terraform components. The first component deployed resources, and the second component needed to reference those resources. While Terraform supports output variables for this very reason, getting them from one stage to the next — especially in orchestrated environments — can be less straightforward than it seems.

When working with Terraform, the terraform output command is typically used to retrieve specific output variables—what I like to call “spearfishing” an output. This approach works well for targeting individual values. But when you’re dealing with complex workflows, such as chaining multiple Terraform deployments together, you may need to pass a comprehensive set of outputs between stages. That’s when things start to get interesting—and a little tricky.
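For a single value, spearfishing looks something like this (vm_admin_username is a made-up name here; -raw works only for string-typed outputs):

terraform output hosted_pool_identity     # prints the value in HCL-like syntax
terraform output -raw vm_admin_username   # prints just the raw string, handy in scripts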

The Need for Structured Output

In a recent project, I was working with a deployment orchestrator that required output data in a structured JSON format in order to pass information from one step to the next. Rather than retrieve each Terraform output one by one, I opted to dump the entire output set in JSON by running:

terraform output -json

This seemed like a straightforward solution. However, the orchestrator had an additional quirk: it didn’t support accessing the raw output JSON directly. Instead, it allowed referencing only properties within a single root object. To adapt, I encapsulated the entire output under a new property called terraform_output, using a simple jq command:

terraform output -json | jq '{ terraform_output: . }'

This transformed the output into a structure I could easily pass to the next deployment stage. Or so I thought.
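To make the shape concrete, here is the wrapped payload abbreviated to a single output (the same hosted_pool_identity entry examined in detail below):

{
    "terraform_output": {
        "hosted_pool_identity": {
            "sensitive": false,
            "type": ["object", { "name": "string", "resource_group": "string" }],
            "value": {
                "name": "mi-foo",
                "resource_group": "rg-foo"
            }
        }
    }
}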

Unexpected Variable Errors

Upon triggering the next terraform apply step, I was met with an unexpected error:

Error: No value for required variable

The root module input variable "hosted_pool_identity" is not set, and has no default value. Use a -var or -var-file command line argument to provide a value for this variable.

This was confusing. I had my *.tfvars.json file in the right location and was referencing it correctly with the -var-file option. Why was Terraform complaining?
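Concretely, the failing step boiled down to something like this (the file name here is illustrative):

terraform apply -var-file=terraform_output.tfvars.json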

The Realization: Output JSON vs Input JSON

After some digging, I discovered the core issue: the JSON format produced by terraform output -json is not the same as the format expected by Terraform input variable files.

Here’s what the entry for an output like hosted_pool_identity looks like in the JSON produced by terraform output -json:

"hosted_pool_identity": {
    "sensitive": false,
    "type": [
        "object",
        {
            "name": "string",
            "resource_group": "string"
        }
    ],
    "value": {
        "name": "mi-foo",
        "resource_group": "rg-foo"
    }
}
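You can see this nesting for yourself by pulling a single entry out of the full document:

terraform output -json | jq '.hosted_pool_identity.value'

This prints only the inner object ({ "name": "mi-foo", "resource_group": "rg-foo" }); the sensitive and type fields are metadata that Terraform adds for introspection.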

The corresponding input variable in the second Terraform root module looked like this:

variable "hosted_pool_identity" {
  type = object({
    resource_group = string
    name           = string
  })
}

I double- and triple-checked the property names of the object to make sure they were correct, but alas, I couldn’t see the forest for the trees!

The error of my ways, it turned out, was attempting to use the Terraform output JSON directly as a JSON variable values file (*.tfvars.json)!

Compare this to the format required for JSON input variables:

"hosted_pool_identity": {
    "name": "mi-foo",
    "resource_group": "rg-foo"
}

Notice the difference? The input format strips away the type, sensitive, and value wrappers. It’s a clean, direct representation of the variable value—just like you’d define it in a standard HCL *.tfvars file.
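In other words, a complete *.tfvars.json file for this variable is nothing more than:

{
    "hosted_pool_identity": {
        "name": "mi-foo",
        "resource_group": "rg-foo"
    }
}

which is simply the JSON spelling of the equivalent HCL *.tfvars entry:

hosted_pool_identity = {
  name           = "mi-foo"
  resource_group = "rg-foo"
}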

The Final Fix

With this insight, I returned to jq for a final transformation—this time to peel back the layers of the output JSON and convert it into a valid input format:

terraform output -json | jq '{ terraform_output: with_entries({ key: .key, value: .value.value }) }'

This command reformats each entry by replacing the full output structure with just the .value contents, preserving the key names. It provided the clean input JSON structure the second Terraform apply stage was expecting.
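For completeness, here is a minimal sketch of the same hand-off between two stacks outside of an orchestrator; the directory and file names are illustrative, and the terraform_output wrapper is dropped because plain Terraform has no use for it:

# In the first component: flatten the outputs into input-variable format
terraform -chdir=network output -json \
    | jq 'with_entries({ key: .key, value: .value.value })' \
    > compute.tfvars.json

# In the second component: consume the generated file
terraform -chdir=compute apply -var-file=../compute.tfvars.json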

Conclusion

What began as a routine automation task ended up highlighting an important nuance in how Terraform handles JSON for outputs versus inputs. It’s easy to conflate the two, especially when dealing with complex objects and orchestrated deployments.

Passing outputs from one Terraform stack component to another is an essential but easily misunderstood practice. The confusion often lies in Terraform’s dual roles: producing detailed, annotated output for introspection and expecting minimal, value-only data as input. In my case, I just happened to be using JSON as the transport format, which probably contributed a little to my confusion.

Understanding this distinction — and knowing how to massage the data with tools like jq — can save you hours of debugging and head-scratching. As always, the devil is in the (JSON) details.