In development environments, cost control is critical — and one of the silent culprits of unnecessary spend is Azure Log Analytics. Often, logs in DEV are retained for 30 days by default, even though their long-term value is minimal. This kind of retention policy is excessive and, frankly, wasteful in a non-production setting.

Before making changes to log retention or table configurations, it helps to understand what data is actually being collected. In this article, we’ll use Terraform to retrieve and analyze Log Analytics table usage so we can identify which tables are storing data and how much of it.
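
A quick note on prerequisites: the examples combine the azurerm and azapi providers, and the object-style access to output used below assumes a recent AzAPI provider (2.x, where the response is returned as an object rather than a JSON string). A minimal, illustrative configuration might look like this — the version constraint is indicative, not a recommendation:

terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
    azapi = {
      # AzAPI 2.x returns `output` as an object, which the locals below rely on
      source  = "Azure/azapi"
      version = "~> 2.0"
    }
  }
}

provider "azurerm" {
  features {}
}

provider "azapi" {}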


Step 1: List All Log Analytics Tables

We begin by using the azapi_resource_list data source to query all tables in a Log Analytics workspace. This allows us to fetch metadata about the tables, including size and row count.

data "azapi_resource_list" "tables" {
  parent_id = data.azurerm_log_analytics_workspace.main.id
  type      = "Microsoft.OperationalInsights/workspaces/tables@2022-10-01"
}

This query returns a full list of tables under the specified workspace. Each table includes a properties block, which contains usage stats like totalSize and totalRowCount.
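
The parent_id above refers to an azurerm_log_analytics_workspace data source that the snippet assumes already exists in your configuration. If you need it, a lookup along these lines works — the workspace and resource group names here are placeholders:

data "azurerm_log_analytics_workspace" "main" {
  # Placeholder names -- point these at your own DEV workspace
  name                = "log-dev-example"
  resource_group_name = "rg-dev-example"
}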

Step 2: Filter Tables with Data

Not all tables are active or relevant. Many will have zero rows and take up no space. To focus only on meaningful data, we filter out tables with a total size of zero.

locals {
  tables_json = data.azapi_resource_list.tables.output.value

  # keep only non-zero sizes
  nonzero_tables = [
    for t in local.tables_json : t
    if try(t.properties.totalSize, 0) > 0
  ]
}

Here, try keeps the expression from failing when a table has no totalSize property, defaulting to 0 so those tables are filtered out. The result is a list containing only the tables that actually consume storage.

Step 3: Extract Table Sizes and Row Counts

With a list of tables that actually contain data, we can now extract metrics we care about: storage size and row count. We also convert the size from bytes to megabytes for easier readability.

locals {
  # derive readable maps
  table_sizes_bytes = { for t in local.nonzero_tables : t.name => t.properties.totalSize }
  table_row_counts  = { for t in local.nonzero_tables : t.name => t.properties.totalRowCount }
  table_sizes_mb    = { for k, v in local.table_sizes_bytes : k => v / 1048576 }
}

This produces three maps: one for size in bytes, one for row count, and one for size in megabytes.
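
Since the goal is to decide where retention can be tightened, it can also help to capture each table's current retention alongside its size. This is a small sketch that assumes the table resources report retentionInDays in their properties (falling back to null where they don't); it can be exposed as an output in the same way as the maps in the next step:

locals {
  # Current per-table retention, if reported by the API (null otherwise)
  table_retention_days = {
    for t in local.nonzero_tables :
    t.name => try(t.properties.retentionInDays, null)
  }
}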

Step 4: Output the Data

Finally, we expose this information as Terraform outputs. The values are printed at the end of terraform apply, can be read back at any time with terraform output, and can feed downstream scripts and dashboards.

output "law_table_sizes_mb" { value = local.table_sizes_mb }
output "law_table_row_counts" { value = local.table_row_counts }

These outputs give you an immediate snapshot of which tables are storing data and how much, which is incredibly useful for cost analysis and cleanup efforts.
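
If a single aggregate figure is useful — for example, to compare DEV against other environments — the per-table map can also be summed into one extra output:

output "law_total_size_mb" {
  # Workspace-wide total across all non-empty tables; try() guards the empty case
  value = try(sum(values(local.table_sizes_mb)), 0)
}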

Conclusion

Using Terraform and the AzAPI provider, we’ve created a simple but effective way to audit Log Analytics table usage in an Azure environment. This approach helps identify which tables are consuming storage and may justify tuning retention policies — especially in development or test environments where long-term log storage is often unnecessary.

With this data in hand, teams can take the next step: reducing the retention period (a sketch of which follows below), purging the data, or disabling logging for unused services altogether. Whichever route you take, you'll be moving toward a leaner, more cost-effective Azure setup.
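
As a concrete example of the first option, per-table retention can be lowered with azapi_update_resource. This is a minimal sketch only: the table name (AppTraces) and the retention value are placeholders, and the allowed range depends on the table's plan and your workspace settings.

resource "azapi_update_resource" "dev_table_retention" {
  type        = "Microsoft.OperationalInsights/workspaces/tables@2022-10-01"
  resource_id = "${data.azurerm_log_analytics_workspace.main.id}/tables/AppTraces"

  body = {
    properties = {
      # Placeholder value: pick a table and retention period that fit your environment
      retentionInDays = 8
    }
  }
}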
