Log Analytics in Azure can be a powerful tool for gathering and analyzing telemetry, but as data grows, so do costs. Old logs that are no longer useful may linger for months, quietly inflating your Azure bill. While data retention policies help, sometimes you need more fine-grained or immediate control over what data is kept. That’s where purging specific tables can be useful.

In this article, I’ll show how I automated the purging of Log Analytics tables using Terraform and the AzAPI provider. I started with a single table, then evolved the solution into a reusable Terraform module that can purge multiple tables across your environment — all while dynamically calculating age-based filters.

Step 1: Purging a Single Table with AzAPI

To begin, I tested purging a single table — AppTraces — using the azapi_resource_action resource. The key parameters here include the workspace ID, the table name, and a filter that defines which records to purge. Here’s the initial version:

resource "azapi_resource_action" "purge_apptraces" {
  type        = "Microsoft.OperationalInsights/workspaces@2025-07-01"
  resource_id = data.azurerm_log_analytics_workspace.main.id
  action      = "purge"
  method      = "POST"

  body = {
    table = "AppTraces"
    filters = [{
      column   = "TimeGenerated"
      operator = "<"
      value    = "2025-11-01T00:00:00Z"
    }]
  }

  # Prevent re-runs unless you change this on purpose.
  # You can also wire this to a specific trigger input instead of timestamp().
  lifecycle {
    ignore_changes = [body]
  }
}

This successfully purged records older than November 1, 2025. On the next terraform apply, nothing happened — as expected — because the body content hadn’t changed, and we had instructed Terraform to ignore it.
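For completeness, the resource_id above comes from a data source lookup on the workspace. A minimal sketch — the workspace and resource group names here are placeholders, not values from my environment:

```hcl
# Hypothetical lookup for the workspace referenced above;
# replace the names with your own workspace and resource group.
data "azurerm_log_analytics_workspace" "main" {
  name                = "law-example"
  resource_group_name = "rg-example"
}
```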

Step 2: Making the Purge Dynamic

To avoid hardcoding dates, I wanted the purge filter to always delete data older than X days — in this case, one day. Terraform’s timestamp() and timeadd() functions made this easy:

locals {
  one_day_ago = timeadd(timestamp(), "-24h")
}

However, just changing a local value like this wouldn’t trigger Terraform to re-run the purge. To force re-execution, I introduced a terraform_data resource with a dynamic timestamp in its triggers_replace argument:

resource "terraform_data" "apply_tick" {
  triggers_replace = {
    ts = timestamp() # changes every plan/apply
  }
}

Then I updated the azapi_resource_action resource to use this as a trigger:

resource "azapi_resource_action" "purge_apptraces" {
  type        = "Microsoft.OperationalInsights/workspaces@2025-07-01"
  resource_id = data.azurerm_log_analytics_workspace.main.id
  action      = "purge"
  method      = "POST"

  body = {
    table = "AppTraces"
    filters = [{
      column   = "TimeGenerated"
      operator = "<"
      value    = local.one_day_ago
    }]
  }

  # Ignore body diffs; re-execution is driven by the apply_tick trigger below.
  # You can also wire this to a specific trigger input instead of timestamp().
  lifecycle {
    ignore_changes       = [body]
    replace_triggered_by = [terraform_data.apply_tick]
  }
}

With this setup, the purge action is re-triggered on every apply, always using the updated timestamp to remove older logs. It worked great for a single table!

Step 3: Scaling with a Module

Since I have many tables in my Log Analytics workspace, copy-pasting this block for each one wasn’t sustainable. The next logical step was to turn the purge logic into a reusable Terraform module.

The module takes just two inputs:

variable "workspace_id" {
  type = string
}
variable "table" {
  type = string
}

The module encapsulates the AzAPI action resource, the replacement trigger, and the time calculation:

resource "terraform_data" "apply_tick" {
  triggers_replace = {
    ts = timestamp() # changes every plan/apply
  }
}

locals {
  one_day_ago = timeadd(timestamp(), "-24h")
}

resource "azapi_resource_action" "purge_table" {
  type        = "Microsoft.OperationalInsights/workspaces@2025-07-01"
  resource_id = var.workspace_id
  action      = "purge"
  method      = "POST"

  body = {
    table = var.table
    filters = [{
      column   = "TimeGenerated"
      operator = "<"
      value    = local.one_day_ago
    }]
  }

  # Ignore body diffs; re-execution is driven by the apply_tick trigger below.
  # You can also wire this to a specific trigger input instead of timestamp().
  lifecycle {
    ignore_changes       = [body]
    replace_triggered_by = [terraform_data.apply_tick]
  }
}

A future improvement would be to introduce a variable like purge_older_than_days to make the age filter configurable per table.
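That parameterization could look something like this — a sketch, with purge_older_than_days as a name of my own choosing rather than anything from the module above:

```hcl
# Hypothetical input: how many days of data to keep per table.
variable "purge_older_than_days" {
  type        = number
  default     = 1
  description = "Purge records older than this many days."
}

locals {
  # timeadd() accepts a duration string, so we convert days to hours.
  purge_before = timeadd(timestamp(), "-${var.purge_older_than_days * 24}h")
}
```

The purge filter’s value would then reference local.purge_before instead of local.one_day_ago.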

Step 4: Applying the Module Across Tables

To apply this module across all relevant Log Analytics tables, I defined a local.tables list and used for_each to instantiate the module for each one:

locals {
  tables = [
    "Alert",
    "AppAvailabilityResults",
    "AppBrowserTimings",
    "AppCenterError",
    "AppDependencies",
    "AppEvents",
    "AppExceptions",
    "AppMetrics",
    "AppPageViews",
    "AppPerformanceCounters",
    "AppRequests",
    "AppSystemEvents",
    "AppTraces",
    "AzureActivity",
    "ContainerAppConsoleLogs_CL",
    "ContainerAppSystemLogs_CL",
    "InsightsMetrics",
    "Operation",
    "Usage",
  ]
}

Finally, I declare the module and iterate over all the tables:

module "purge_tables" {
  source   = "./modules/log-analytics-purge"
  for_each = toset(local.tables)

  workspace_id = data.azurerm_log_analytics_workspace.main.id
  table        = each.value
}

This allows me to purge all tables with one terraform apply, using consistent logic across the board.
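The examples above assume the AzAPI and AzureRM providers are already configured. A minimal requirements block might look like this — the version constraints are my own suggestion, not a hard requirement:

```hcl
terraform {
  required_providers {
    azapi = {
      source  = "Azure/azapi"
      version = ">= 2.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}
```

Note that the azapi provider’s handling of the body argument changed between major versions (v1 expected a JSON string, v2 accepts an HCL object as used here), so pin the version accordingly.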

Conclusion

With this approach, I’ve turned a tedious and error-prone process into a clean, scalable Terraform module that helps keep my Log Analytics costs under control. By leveraging the AzAPI provider and Terraform’s powerful dynamic and lifecycle features, I can purge stale log data across multiple tables — automatically and regularly.

Next steps could include parameterizing the purge age threshold, adding conditional logic for table-specific behavior, or even tying purge execution to external events or pipelines.

If you’re working with DEV environments, this pattern may help you gain better control over data retention and cost — without sacrificing the benefits of observability for break-fix troubleshooting!
