BigQuery Slot Usage by Hour
Hourly slot usage reveals the intra-day patterns in your BigQuery workloads. This query shows you exactly when your peak and off-peak hours occur, enabling smarter reservation scheduling and pipeline orchestration.
Why This Matters
Most BigQuery workloads have predictable hourly patterns — morning ETL runs, mid-day analyst activity, evening batch processing. Understanding these patterns lets you schedule reservations to match demand, stagger pipelines to avoid contention, and set meaningful alerts for anomalous usage.
How It Works
Like the daily version, this query aggregates period_slot_ms from JOBS_TIMELINE, but truncates to HOUR granularity and divides by milliseconds-per-hour (3,600,000). The calendar fill ensures every hour has a data point.
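The conversion and zero-fill logic can be sketched outside SQL as well. This is a minimal Python illustration (the function name and sample numbers are made up, not part of the query): summed period_slot_ms for an hour divided by 3,600,000 ms gives the average number of slots busy during that hour, and every hour in the window gets a data point even when no jobs ran.

```python
from datetime import datetime, timedelta

MS_PER_HOUR = 3_600_000  # 60 min * 60 s * 1000 ms

def avg_slots_per_hour(slot_ms_by_hour, start, end):
    """Zero-fill every hour in [start, end] and convert summed
    period_slot_ms into average concurrent slots for that hour."""
    series = {}
    hour = start
    while hour <= end:
        series[hour] = round(slot_ms_by_hour.get(hour, 0) / MS_PER_HOUR, 2)
        hour += timedelta(hours=1)
    return series

# Example: 1.8 billion slot-ms in one hour averages out to 500 busy slots.
usage = {datetime(2024, 1, 1, 1): 1_800_000_000}
series = avg_slots_per_hour(usage, datetime(2024, 1, 1, 0), datetime(2024, 1, 1, 3))
```

The same arithmetic is what the query's ROUND(SUM(period_slot_ms) / ms_per_hour, 2) performs per hour bucket.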
SQL Query
-- Average slot consumption per hour (zero-filled time series)
DECLARE lookback_days INT64 DEFAULT 7;
DECLARE ms_per_hour INT64 DEFAULT 3600000;

WITH hourly_slots AS (
  SELECT
    TIMESTAMP_TRUNC(period_start, HOUR) AS hour,
    ROUND(SUM(period_slot_ms) / ms_per_hour, 2) AS avg_slots
  FROM `your-project`.`region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE
  WHERE period_start >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL lookback_days DAY)
    AND statement_type != 'SCRIPT'
  GROUP BY hour
),
calendar AS (
  SELECT ts AS hour
  FROM UNNEST(GENERATE_TIMESTAMP_ARRAY(
    TIMESTAMP_TRUNC(TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL lookback_days DAY), HOUR),
    TIMESTAMP_TRUNC(CURRENT_TIMESTAMP(), HOUR),
    INTERVAL 1 HOUR)) AS ts
)
SELECT
  c.hour,
  IFNULL(h.avg_slots, 0) AS avg_slots
FROM calendar c
LEFT JOIN hourly_slots h ON c.hour = h.hour
ORDER BY c.hour
Query Explanation
Same approach as the daily query, but at hour granularity: the ms_per_hour divisor is 3,600,000 (60 min × 60 s × 1,000 ms). The generated time series fills gaps, ensuring a continuous hourly dataset for visualization.
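Once the query returns the zero-filled series, picking out the peak hour is a one-liner. This is a hypothetical post-processing sketch; the hour labels and slot values below are invented for illustration:

```python
def peak_hour(series):
    """Return the (hour_label, avg_slots) pair with the highest
    average slot usage from the query's zero-filled series."""
    return max(series.items(), key=lambda kv: kv[1])

# Hypothetical hourly averages shaped like the query output.
series = {"00:00": 12.5, "09:00": 487.0, "14:00": 310.25}
peak = peak_hour(series)
```

That peak value is the number to carry into the Key Insights below when thinking about reservation sizing.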
Key Insights
Peak hours often coincide with scheduled ETL/dbt runs — stagger them to reduce peak slot demand.
Off-peak hours with near-zero usage represent wasted reservation capacity under flat-rate pricing.
Identify the hour with the highest average slot usage — averaging across days makes this a typical (roughly P50) peak, and the basis for Editions baseline sizing.
Hours with highly variable slot usage across days suggest ad-hoc workloads that benefit from autoscaling.
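To separate steady scheduled hours from spiky ad-hoc ones, you can group the query output by hour-of-day and compare spread to mean. A minimal sketch, assuming rows of (timestamp, avg_slots) pairs; the function name and sample data are illustrative, not from the query itself:

```python
import statistics
from datetime import datetime

def variability_by_hour_of_day(rows):
    """Group (timestamp, avg_slots) rows by hour-of-day and report
    (mean, population stddev) per hour. A high stddev relative to the
    mean suggests ad-hoc load; near-zero suggests scheduled batch."""
    buckets = {}
    for ts, slots in rows:
        buckets.setdefault(ts.hour, []).append(slots)
    return {
        h: (round(statistics.mean(v), 2), round(statistics.pstdev(v), 2))
        for h, v in buckets.items()
    }

# Two days of output: 09:00 swings widely (ad hoc), 02:00 is steady (batch).
rows = [
    (datetime(2024, 1, 1, 9), 100.0), (datetime(2024, 1, 2, 9), 300.0),
    (datetime(2024, 1, 1, 2), 50.0), (datetime(2024, 1, 2, 2), 50.0),
]
stats = variability_by_hour_of_day(rows)
```

Hours like the 09:00 bucket above are the candidates for autoscaled capacity rather than fixed baseline.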
Best Practices
1. Stagger scheduled jobs across hours to flatten the slot usage curve.
2. Use autoscaled Editions reservations to handle hourly peaks without over-provisioning.
3. Set up hourly cost alerts to catch runaway queries within the same business day.
4. Run heavy batch jobs during off-peak hours (late night / early morning) when slot contention is lowest.
Want CloudClerk to find these savings automatically?
Our platform connects to your BigQuery project, runs these analyses automatically, and delivers AI-powered optimisation recommendations — all with your data fully anonymised.
Related Guides
BigQuery Slot Usage by Day
Track daily BigQuery slot consumption over time. Identify daily usage patterns and trends to right-size your slot reservations.
Read guide
BigQuery Slot Usage by Minute
Get minute-level BigQuery slot consumption data. Essential for debugging performance issues and understanding burst slot demand.
Read guide
BigQuery Slot Usage by Second
Second-level BigQuery slot consumption for detailed burst analysis. The finest granularity available for performance debugging.
Read guide
BigQuery Concurrent Queries by Minute
Count how many BigQuery queries run simultaneously each minute. Detect concurrency bottlenecks that cause queuing and slow performance.
Read guide