
BigQuery Slot Usage by Minute

Minute-level slot usage is the right granularity for debugging performance issues and understanding burst patterns. This query shows exactly when slot demand spikes and how long the spikes last.

Why This Matters

Hourly averages can mask short but intense slot bursts that cause query queuing and performance degradation. Minute-level data reveals these bursts, helping you determine if your Editions autoscaler max_slots is set high enough and whether query scheduling needs to be adjusted.
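To see this masking directly, the minute-level aggregation can be rolled up to hours and each hour's average compared against its worst minute. This is a sketch that inlines the same minute_slots logic used in the main query below; the project and region placeholders are the same ones you need to replace:

SQL
-- Compare each hour's average slot usage against its peak minute.
-- A large gap between the two columns means the hourly view hides a burst.
WITH minute_slots AS (
  SELECT
    TIMESTAMP_TRUNC(period_start, MINUTE) AS minute,
    SUM(period_slot_ms) / 60000 AS avg_slots
  FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE
  WHERE period_start >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 3 DAY)
    AND statement_type != 'SCRIPT'
  GROUP BY minute
)
SELECT
  TIMESTAMP_TRUNC(minute, HOUR) AS hour,
  ROUND(AVG(avg_slots), 2) AS hourly_avg_slots,
  ROUND(MAX(avg_slots), 2) AS peak_minute_slots,
  ROUND(MAX(avg_slots) - AVG(avg_slots), 2) AS masked_burst
FROM minute_slots
GROUP BY hour
ORDER BY masked_burst DESC

Hours at the top of this output are the ones where an hourly chart would most understate real demand.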

How It Works

Aggregates period_slot_ms from JOBS_TIMELINE at MINUTE granularity, dividing by 60,000 (milliseconds per minute) to convert slot-milliseconds into average slots. For example, 1,800,000 slot-ms consumed within one minute equals 30 average slots. The calendar CTE then zero-fills minutes with no activity, producing a continuous minute-by-minute series.

SQL Query


SQL
-- Average slot consumption per minute (zero-filled)

DECLARE lookback_days INT64 DEFAULT 3;
DECLARE ms_per_min INT64 DEFAULT 60000;

WITH minute_slots AS (
  SELECT
    TIMESTAMP_TRUNC(period_start, MINUTE) AS minute,
    ROUND(SUM(period_slot_ms) / ms_per_min, 2) AS avg_slots
  FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE
  WHERE period_start >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL lookback_days DAY)
    AND statement_type != 'SCRIPT'
  GROUP BY minute
),
calendar AS (
  SELECT ts AS minute FROM UNNEST(GENERATE_TIMESTAMP_ARRAY(
    TIMESTAMP_TRUNC(TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL lookback_days DAY), MINUTE),
    TIMESTAMP_TRUNC(CURRENT_TIMESTAMP(), MINUTE),
    INTERVAL 1 MINUTE)) AS ts
)
SELECT
  c.minute,
  IFNULL(m.avg_slots, 0) AS avg_slots
FROM calendar c
LEFT JOIN minute_slots m ON c.minute = m.minute
ORDER BY c.minute
Replace your-project and region-us with your GCP project and dataset region.

Query Explanation

Same pattern as the hourly/daily versions, but at minute granularity. The ms_per_min divisor is 60,000 ms. Note that minute granularity produces 1,440 rows per day, so longer lookback windows (>7 days, over 10,000 rows) return a large result set; keep lookback_days small for minute-level analysis.
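When you only care about the spikes themselves rather than the full series, a variant of the query can skip the calendar fill and return just the busiest minutes. This is a sketch using the same placeholders as the main query:

SQL
-- Top 20 busiest minutes over the last day (no zero-fill needed,
-- since quiet minutes cannot appear in a "busiest" ranking).
SELECT
  TIMESTAMP_TRUNC(period_start, MINUTE) AS minute,
  ROUND(SUM(period_slot_ms) / 60000, 2) AS avg_slots
FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE
WHERE period_start >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
  AND statement_type != 'SCRIPT'
GROUP BY minute
ORDER BY avg_slots DESC
LIMIT 20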

Key Insights

  • Slot spikes lasting only 1-2 minutes are typically ad-hoc queries from users — consider setting per-user slot limits.

  • Sustained 10-15 minute plateaus indicate batch pipeline runs — optimize the heaviest queries in that window.

  • If spikes exceed your Editions max_slots, queries are being queued — increase max_slots or stagger workloads.

  • Compare spike timing to your cron/Airflow schedules to identify which pipeline causes each burst.
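Once a spike minute has been identified, the same JOBS_TIMELINE view can attribute it to specific jobs and users. This is a sketch; the timestamp literal is a hypothetical spike minute you would replace with one from your own results:

SQL
-- Which jobs and users consumed slots during one spike minute?
SELECT
  job_id,
  user_email,
  ROUND(SUM(period_slot_ms) / 60000, 2) AS slots_in_minute
FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE
WHERE period_start = TIMESTAMP '2024-01-15 09:30:00 UTC'  -- hypothetical spike minute
  AND statement_type != 'SCRIPT'
GROUP BY job_id, user_email
ORDER BY slots_in_minute DESC
LIMIT 10

Matching the user_email values here against your scheduler's service accounts is usually the fastest way to tie a burst back to a pipeline.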

Best Practices

  1. Use this data to set your Editions autoscaler max_slots to cover P99 minute-level demand.

  2. Keep lookback_days small (1-3 days) when analyzing at minute granularity to avoid huge result sets.

  3. Cross-reference spikes with the concurrent queries query to see how many queries run simultaneously.

  4. Set up a CloudClerk alert for sustained slot usage above a threshold.
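The P99 figure from the first practice can be computed directly from the same minute-level aggregation. This is a sketch using BigQuery's APPROX_QUANTILES and the same placeholders as the main query:

SQL
-- P99 of minute-level slot demand, as a starting point for max_slots sizing.
WITH minute_slots AS (
  SELECT
    TIMESTAMP_TRUNC(period_start, MINUTE) AS minute,
    SUM(period_slot_ms) / 60000 AS avg_slots
  FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE
  WHERE period_start >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 3 DAY)
    AND statement_type != 'SCRIPT'
  GROUP BY minute
)
SELECT
  ROUND(APPROX_QUANTILES(avg_slots, 100)[OFFSET(99)], 0) AS p99_minute_slots
FROM minute_slots

Note that quiet minutes are absent from this CTE rather than zero-filled, so the percentile is computed over active minutes only; add the calendar join from the main query if you want zeros included.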

Want CloudClerk to find these savings automatically?

Our platform connects to your BigQuery project, runs these analyses automatically, and delivers AI-powered optimisation recommendations — all with your data fully anonymised.
