BigQuery Slot Usage by Minute
Minute-level slot usage is the right granularity for debugging performance issues and understanding burst patterns. This query shows exactly when slot demand spikes and how long the spikes last.
Why This Matters
Hourly averages can mask short but intense slot bursts that cause query queuing and performance degradation. Minute-level data reveals these bursts, helping you determine if your Editions autoscaler max_slots is set high enough and whether query scheduling needs to be adjusted.
How It Works
Aggregates period_slot_ms from JOBS_TIMELINE at MINUTE granularity, dividing by 60,000 (milliseconds per minute). The calendar fill creates a continuous minute-by-minute dataset.
SQL Query
-- Average slot consumption per minute (zero-filled)
DECLARE lookback_days INT64 DEFAULT 3;
DECLARE ms_per_min INT64 DEFAULT 60000;

WITH minute_slots AS (
  SELECT
    TIMESTAMP_TRUNC(period_start, MINUTE) AS minute,
    ROUND(SUM(period_slot_ms) / ms_per_min, 2) AS avg_slots
  FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE
  WHERE period_start >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL lookback_days DAY)
    AND statement_type != 'SCRIPT'
  GROUP BY minute
),
calendar AS (
  SELECT ts AS minute
  FROM UNNEST(GENERATE_TIMESTAMP_ARRAY(
    TIMESTAMP_TRUNC(TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL lookback_days DAY), MINUTE),
    TIMESTAMP_TRUNC(CURRENT_TIMESTAMP(), MINUTE),
    INTERVAL 1 MINUTE)) AS ts
)
SELECT
  c.minute,
  IFNULL(m.avg_slots, 0) AS avg_slots
FROM calendar c
LEFT JOIN minute_slots m ON c.minute = m.minute
ORDER BY c.minute;
Query Explanation
Same pattern as hourly/daily but at minute granularity. The ms_per_min divisor is 60,000 ms. Note that for longer lookback windows (>7 days), this query can return a large result set — reduce lookback_days for minute-level analysis.
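To size the result set before running, a quick back-of-envelope in Python (the helper names here are illustrative, not part of the query itself):

```python
# Back-of-envelope sizing for the minute-level result set.
# The calendar fill emits one row per minute of the lookback window.

MS_PER_MIN = 60_000  # milliseconds per minute, matching the query's divisor

def expected_rows(lookback_days: int) -> int:
    """Rows returned by the zero-filled query: one per minute of the window."""
    return lookback_days * 24 * 60

def avg_slots(period_slot_ms_total: int) -> float:
    """Average slots for one minute: total slot-milliseconds / 60,000 ms."""
    return round(period_slot_ms_total / MS_PER_MIN, 2)

print(expected_rows(3))       # prints 4320  (a 3-day window)
print(expected_rows(7))       # prints 10080 (why >7 days gets unwieldy)
print(avg_slots(12_600_000))  # prints 210.0 average slots for that minute
```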
Key Insights
- Slot spikes lasting only 1-2 minutes are typically ad-hoc queries from users — consider setting per-user slot limits.
- Sustained 10-15 minute plateaus indicate batch pipeline runs — optimize the heaviest queries in that window.
- If spikes exceed your Editions max_slots, queries are being queued — increase max_slots or stagger workloads.
- Compare spike timing to your cron/Airflow schedules to identify which pipeline causes each burst.
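To surface the queued-minutes case directly, here is a sketch that narrows the main aggregation to minutes whose demand exceeded a capacity threshold — the 500-slot default is a placeholder, so substitute your own Editions max_slots:

```sql
-- Minutes where average slot demand exceeded a capacity threshold.
-- 500 is a placeholder; substitute your Editions max_slots.
DECLARE lookback_days INT64 DEFAULT 3;
DECLARE max_slots INT64 DEFAULT 500;

SELECT
  TIMESTAMP_TRUNC(period_start, MINUTE) AS minute,
  ROUND(SUM(period_slot_ms) / 60000, 2) AS avg_slots
FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE
WHERE period_start >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL lookback_days DAY)
  AND statement_type != 'SCRIPT'
GROUP BY minute
HAVING avg_slots > max_slots
ORDER BY avg_slots DESC;
```

Any rows returned are minutes where demand outran capacity — candidates for queuing, and the first places to check against your pipeline schedules.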
Best Practices
1. Use this data to set your Editions autoscaler max_slots to cover P99 minute-level demand.
2. Keep lookback_days small (1-3 days) when analyzing at minute granularity to avoid huge result sets.
3. Cross-reference spikes with the concurrent queries query to see how many queries run simultaneously.
4. Set up a CloudClerk alert for sustained slot usage above a threshold.
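For the P99 target in practice 1, a sketch that reuses the minute_slots aggregation from the main query. Note this version omits the calendar fill, so idle zero-slot minutes are excluded and the percentile reflects busy minutes only — a conservative (higher) sizing figure:

```sql
-- P99 of minute-level slot demand over busy minutes in the lookback window.
DECLARE lookback_days INT64 DEFAULT 3;

WITH minute_slots AS (
  SELECT
    TIMESTAMP_TRUNC(period_start, MINUTE) AS minute,
    SUM(period_slot_ms) / 60000 AS avg_slots
  FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE
  WHERE period_start >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL lookback_days DAY)
    AND statement_type != 'SCRIPT'
  GROUP BY minute
)
SELECT
  APPROX_QUANTILES(avg_slots, 100)[OFFSET(99)] AS p99_slots
FROM minute_slots;
```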
Want CloudClerk to find these savings automatically?
Our platform connects to your BigQuery project, runs these analyses automatically, and delivers AI-powered optimization recommendations — all with your data fully anonymized.
Related Guides
BigQuery Slot Usage by Hour
Monitor hourly BigQuery slot consumption to identify peak usage windows and optimize your reservation scheduling.
BigQuery Slot Usage by Day
Track daily BigQuery slot consumption over time. Identify daily usage patterns and trends to right-size your slot reservations.
BigQuery Slot Usage by Second
Second-level BigQuery slot consumption for detailed burst analysis. The finest granularity available for performance debugging.
BigQuery Concurrent Queries by Minute
Count how many BigQuery queries run simultaneously each minute. Detect concurrency bottlenecks that cause queuing and slow performance.