
BigQuery Cost Analysis by Query

This query gives you a line-by-line view of every query execution in your project, including its on-demand cost, slot usage, and billed duration. It is the most granular cost view the job metadata offers, making it ideal for deep dives into individual expensive queries.

Why This Matters

Aggregate cost views (by user, by label) tell you where the money goes at a high level, but to actually fix the problem you need to see individual queries. This per-query breakdown lets you identify the exact SQL statements that consume the most resources and prioritize optimizations with the highest ROI.

How It Works

The query reads from INFORMATION_SCHEMA.JOBS_BY_PROJECT and calculates several cost metrics for each query execution: approximate slot count, on-demand cost (based on bytes billed), billed slot hours (for Editions comparison), and execution count. It also applies BigQuery's minimum billing rules — 10 MiB minimum per query and 1-minute minimum duration for Editions.
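The per-job arithmetic can be sketched in plain Python (a sketch mirroring the SQL below; the function name and sample inputs are illustrative, not part of the query):

```python
import math

MIB = 1024 ** 2

def job_metrics(total_slot_ms, duration_ms, bytes_billed):
    """Derive the metrics for one finished job (duration_ms assumed > 0)."""
    # Approximate slot count: total slot-milliseconds over wall-clock milliseconds.
    avg_slots = round(total_slot_ms / duration_ms, 2)
    return {
        "avg_slots": avg_slots,
        # Editions autoscaling allocates capacity in 100-slot increments.
        "billed_slots": math.ceil(avg_slots / 100) * 100,
        # Editions bills a minimum of one minute of runtime (ms -> hours).
        "billed_hours": max(duration_ms, 60_000) / 3.6e6,
        # On-demand bills a minimum of 10 MiB per query.
        "billed_bytes": max(bytes_billed, 10 * MIB),
    }

# 450,000 slot-ms over a 30-second run: 15 average slots, billed as 100.
m = job_metrics(total_slot_ms=450_000, duration_ms=30_000, bytes_billed=5 * MIB)
```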

SQL Query


SQL
-- Per-query cost breakdown with slot and duration metrics

DECLARE lookback_days INT64 DEFAULT 7;

WITH base_jobs AS (
  SELECT
    project_id,
    query,
    start_time,
    end_time,
    COALESCE(total_bytes_billed, 0) AS bytes_billed,
    total_slot_ms,
    TIMESTAMP_DIFF(end_time, start_time, MILLISECOND) AS duration_ms,
    ROUND(SAFE_DIVIDE(total_slot_ms,
      TIMESTAMP_DIFF(end_time, start_time, MILLISECOND)), 2) AS avg_slots
  FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
  WHERE creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL lookback_days DAY)
  WHERE creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL lookback_days DAY)
    AND state = 'DONE'
    AND job_type = 'QUERY'  -- exclude load, copy, and extract jobs
    AND total_slot_ms IS NOT NULL
),
with_billing AS (
  SELECT *,
    CEIL(avg_slots / 100) * 100 AS billed_slots,
    GREATEST(duration_ms, 60000) / 3.6e6 AS billed_hours,
    GREATEST(bytes_billed, 10 * POW(1024, 2)) AS billed_bytes
  FROM base_jobs
),
freq AS (
  SELECT query, COUNT(*) AS exec_count FROM base_jobs GROUP BY query
)
SELECT
...
Replace your-project and region-us with your GCP project and dataset region.

Query Explanation

The query calculates approximate slot count by dividing total_slot_ms by execution time. It applies BigQuery's minimum billing rules: 10 MiB minimum bytes and 1-minute minimum duration. Slot count is rounded up to the nearest 100 (matching autoscaler increments). The on-demand cost is computed at $6.25/TiB, and slot hours are calculated for Editions cost comparison.
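The dollar figure follows directly from the billed bytes. A minimal sketch of the on-demand calculation, using the $6.25/TiB rate quoted above (US list price; check your region's pricing):

```python
TIB = 1024 ** 4
MIB = 1024 ** 2
ON_DEMAND_PER_TIB = 6.25  # US on-demand rate; varies by region

def on_demand_cost_usd(bytes_billed):
    # On-demand charges a minimum of 10 MiB per query.
    billed = max(bytes_billed, 10 * MIB)
    return billed / TIB * ON_DEMAND_PER_TIB

# A query scanning exactly 1 TiB costs $6.25.
print(round(on_demand_cost_usd(TIB), 2))  # 6.25
# A query scanning 1 byte still bills 10 MiB.
print(on_demand_cost_usd(1))
```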

Key Insights

  • Queries with high on-demand cost but low slot count are scanning too much data — they need partition filters or column pruning.

  • Queries with high slot hours but low bytes billed are computationally expensive — look for complex JOINs, DISTINCT, or window functions.

  • High execution count queries are candidates for result caching or materialized views.

  • Compare on_demand_cost_usd vs slot_hours to determine which billing model would be cheaper per query.
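The last comparison can be sketched as a simple break-even check. The $0.06/slot-hour Editions rate below is an assumption (roughly the US Enterprise list price); substitute your contracted rate:

```python
TIB = 1024 ** 4
ON_DEMAND_PER_TIB = 6.25
EDITIONS_PER_SLOT_HOUR = 0.06  # assumption — varies by edition and region

def cheaper_model(bytes_billed, total_slot_ms):
    """Return which billing model is cheaper for one query, plus both costs."""
    on_demand = bytes_billed / TIB * ON_DEMAND_PER_TIB
    editions = (total_slot_ms / 3.6e6) * EDITIONS_PER_SLOT_HOUR
    model = "on-demand" if on_demand <= editions else "editions"
    return model, on_demand, editions

# Scan-heavy work (2 TiB scanned, ~5 slot-hours) is cheaper under Editions here.
model, od, ed = cheaper_model(2 * TIB, 5 * 3.6e6)
```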

Best Practices

  1. Focus optimization effort on the top 10 queries by cost — they typically represent 60-80% of total spend.

  2. For frequently-executed queries, consider materializing results or using BI Engine.

  3. Use EXPLAIN to understand the query execution plan before and after optimization.

  4. Set up automated alerts for queries exceeding a cost threshold.
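The last practice can be sketched as a simple threshold check over the query's output rows. The row shape mirrors the SQL's columns; the threshold and the print-based "alert" are placeholders for your own limit and notification channel:

```python
COST_THRESHOLD_USD = 5.00  # assumption — pick a threshold that fits your spend

# Rows as they might come back from the cost query (illustrative data).
rows = [
    {"query": "SELECT * FROM events", "on_demand_cost_usd": 12.40},
    {"query": "SELECT id FROM users", "on_demand_cost_usd": 0.02},
]

# Flag every query whose on-demand cost exceeds the threshold.
alerts = [r for r in rows if r["on_demand_cost_usd"] > COST_THRESHOLD_USD]
for r in alerts:
    print(f"ALERT: ${r['on_demand_cost_usd']:.2f} exceeds threshold: {r['query']}")
```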

Want CloudClerk to find these savings automatically?

Our platform connects to your BigQuery project, runs these analyses automatically, and delivers AI-powered optimization recommendations — all with your data fully anonymized.
