How to Calculate Moving Averages in SQL
Why Moving Averages Matter
Moving averages smooth out short-term fluctuations to reveal trends in time series data. They are everywhere in data science: revenue trends, user growth, stock prices, and operational metrics. SQL interviews frequently test your ability to compute moving averages using window functions.
If you can write a moving average query confidently, you demonstrate mastery of window functions — one of the most valued SQL skills in data science interviews.
Simple Moving Average (SMA)
A simple moving average calculates the average of the last N values. In SQL, you implement this with a window function and a frame clause.
Basic Syntax
SELECT
date,
daily_revenue,
AVG(daily_revenue) OVER (
ORDER BY date
ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
) AS revenue_7day_avg
FROM daily_metrics
ORDER BY date;
This computes the average of the current row and the 6 preceding rows, giving you a 7-day moving average.
Understanding Window Frames
The frame clause is the key to moving averages. There are two types:
- ROWS BETWEEN: counts physical rows, regardless of gaps in the data
- RANGE BETWEEN: defines the frame by the ORDER BY values themselves (for example, a date interval), so gaps in the data do not stretch the window across extra calendar days
-- ROWS: always looks at exactly 6 preceding rows
AVG(value) OVER (
ORDER BY date
ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
)
-- RANGE: considers date values, handles missing dates correctly
AVG(value) OVER (
ORDER BY date
RANGE BETWEEN INTERVAL '6 days' PRECEDING AND CURRENT ROW
)
For most interview problems, ROWS BETWEEN is what you want. Use RANGE when your data has gaps and you need to account for them.
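To see the difference concretely, here is a small sketch with made-up values and a one-day gap (no row for January 4). The inline VALUES table is hypothetical; the interval-based RANGE frame assumes PostgreSQL 11 or later:

-- Hypothetical data: Jan 1, 2, 3, 5 (Jan 4 is missing)
WITH gappy AS (
    SELECT *
    FROM (VALUES
        ('2024-01-01'::date, 10),
        ('2024-01-02'::date, 20),
        ('2024-01-03'::date, 30),
        ('2024-01-05'::date, 40)
    ) AS t(date, value)
)
SELECT
    date,
    AVG(value) OVER (
        ORDER BY date
        ROWS BETWEEN 2 PRECEDING AND CURRENT ROW
    ) AS rows_avg,    -- on Jan 5: averages Jan 2, 3, and 5
    AVG(value) OVER (
        ORDER BY date
        RANGE BETWEEN INTERVAL '2 days' PRECEDING AND CURRENT ROW
    ) AS range_avg    -- on Jan 5: averages only Jan 3 and 5
FROM gappy;

On January 5, the ROWS frame reaches back past the gap to January 2, while the RANGE frame stops at January 3 because the frame is defined in calendar days.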
Partitioned Moving Average
Often you need a moving average per group, such as per product or per region:
SELECT
product_id,
date,
daily_sales,
AVG(daily_sales) OVER (
PARTITION BY product_id
ORDER BY date
ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
) AS sales_7day_avg
FROM product_sales
ORDER BY product_id, date;
PARTITION BY restarts the window for each product, so the moving average for product A does not include sales from product B.
Handling Edge Cases
Incomplete Windows
At the beginning of your data, fewer than N rows are available, so by default the frame simply contains fewer rows and the average is computed over whatever exists. If you want to show NULL until the window is complete:
SELECT
date,
daily_revenue,
CASE
WHEN ROW_NUMBER() OVER (ORDER BY date) >= 7
THEN AVG(daily_revenue) OVER (
ORDER BY date
ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
)
ELSE NULL
END AS revenue_7day_avg
FROM daily_metrics;
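An equivalent and arguably cleaner approach (a sketch against the same daily_metrics table) counts the rows actually inside the frame instead of numbering them, using a named WINDOW clause to avoid repeating the frame:

SELECT
    date,
    daily_revenue,
    CASE
        WHEN COUNT(*) OVER w = 7
        THEN AVG(daily_revenue) OVER w
    END AS revenue_7day_avg
FROM daily_metrics
WINDOW w AS (
    ORDER BY date
    ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
);

This version also carries over directly to partitioned queries, where ROW_NUMBER would need a matching PARTITION BY to stay correct.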
Missing Dates
If your data has missing dates and you use ROWS BETWEEN, the moving average will span more calendar days than intended. Solutions include:
- Use a calendar table to fill gaps before computing the average
- Use RANGE BETWEEN with an interval (PostgreSQL)
-- Fill gaps with a calendar table
WITH calendar AS (
SELECT generate_series('2024-01-01'::date, '2024-12-31'::date, INTERVAL '1 day')::date AS date
),
filled AS (
SELECT
c.date,
COALESCE(m.daily_revenue, 0) AS daily_revenue
FROM calendar c
LEFT JOIN daily_metrics m ON c.date = m.date
)
SELECT
date,
daily_revenue,
AVG(daily_revenue) OVER (
ORDER BY date
ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
) AS revenue_7day_avg
FROM filled;
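Whether a missing day should count as zero depends on the metric. For revenue, zero is usually right; for something like average order value, you may want missing days excluded instead. Dropping the COALESCE leaves the value NULL, and AVG ignores NULLs, so the average is taken over only the days that have data. Only the filled CTE changes:

-- Variant: leave missing days as NULL so AVG skips them
filled AS (
    SELECT
        c.date,
        m.daily_revenue    -- NULL on missing days; AVG ignores NULLs
    FROM calendar c
    LEFT JOIN daily_metrics m ON c.date = m.date
)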
Weighted Moving Average
A weighted moving average assigns higher weights to more recent values. SQL does not have a built-in weighted average, but you can implement it:
WITH numbered AS (
SELECT
date,
daily_revenue,
ROW_NUMBER() OVER (ORDER BY date) AS rn
FROM daily_metrics
)
SELECT
a.date,
a.daily_revenue,
SUM(b.daily_revenue * (b.rn - a.rn + 3))::numeric /
SUM(b.rn - a.rn + 3) AS weighted_3day_avg
FROM numbered a
JOIN numbered b
ON b.rn BETWEEN a.rn - 2 AND a.rn
GROUP BY a.date, a.daily_revenue, a.rn
ORDER BY a.date;
In this 3-day weighted average, the most recent day gets weight 3, the previous day gets weight 2, and two days ago gets weight 1.
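Since the window here is only three rows, the same weighted average can be written without a self-join, using LAG (a sketch against the same daily_metrics table; the weights 3, 2, 1 sum to the divisor 6):

SELECT
    date,
    daily_revenue,
    (3 * daily_revenue
     + 2 * LAG(daily_revenue, 1) OVER (ORDER BY date)
     + 1 * LAG(daily_revenue, 2) OVER (ORDER BY date)
    )::numeric / 6 AS weighted_3day_avg    -- NULL for the first two rows
FROM daily_metrics
ORDER BY date;

The LAG version returns NULL until three rows are available (any NULL input makes the whole expression NULL), whereas the self-join version degrades gracefully to a partial weighted average.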
Exponential Moving Average (EMA)
Exponential moving averages give exponentially decreasing weights to older values. Pure EMA is awkward in standard SQL because each value depends on the previous one. With a recursive CTE, however, you can compute it exactly (this version assumes there are no missing dates, since each step joins to the next calendar day):
WITH RECURSIVE ema AS (
-- Base case: first row
SELECT
date,
daily_revenue,
daily_revenue::numeric AS ema_value,
1 AS rn
FROM daily_metrics
WHERE date = (SELECT MIN(date) FROM daily_metrics)
UNION ALL
-- Recursive case: apply EMA formula
SELECT
m.date,
m.daily_revenue,
(0.3 * m.daily_revenue + 0.7 * e.ema_value)::numeric,
e.rn + 1
FROM ema e
JOIN daily_metrics m
ON m.date = e.date + INTERVAL '1 day'
)
SELECT date, daily_revenue, ROUND(ema_value, 2) AS ema
FROM ema
ORDER BY date;
Here, 0.3 is the smoothing factor (alpha); a higher alpha gives more weight to recent values. For an N-day EMA, alpha is conventionally chosen as 2 / (N + 1).
Note that recursive CTEs for EMA are rarely asked in interviews. Understanding the concept is enough — most interviews focus on simple moving averages.
Common Interview Patterns
Compare Current Value to Moving Average
SELECT
date,
daily_revenue,
AVG(daily_revenue) OVER (
ORDER BY date
ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
) AS avg_7day,
daily_revenue - AVG(daily_revenue) OVER (
ORDER BY date
ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
) AS deviation
FROM daily_metrics;
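A common follow-up is to express the deviation as a percentage of the moving average, which makes spikes comparable across different revenue scales. A sketch over the same table, with NULLIF guarding against division by zero:

WITH avgs AS (
    SELECT
        date,
        daily_revenue,
        AVG(daily_revenue) OVER (
            ORDER BY date
            ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
        ) AS avg_7day
    FROM daily_metrics
)
SELECT
    date,
    daily_revenue,
    ROUND(100.0 * (daily_revenue - avg_7day) / NULLIF(avg_7day, 0), 1)
        AS pct_deviation
FROM avgs;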
Identify Trend Reversals
WITH averages AS (
SELECT
date,
AVG(daily_revenue) OVER (
ORDER BY date
ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
) AS avg_7day
FROM daily_metrics
)
SELECT
date,
avg_7day,
LAG(avg_7day) OVER (ORDER BY date) AS prev_avg,
CASE
WHEN avg_7day > LAG(avg_7day) OVER (ORDER BY date) THEN 'Up'
WHEN avg_7day < LAG(avg_7day) OVER (ORDER BY date) THEN 'Down'
ELSE 'Flat'
END AS trend
FROM averages;
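A related pattern is the moving-average crossover: the date when a short-window average crosses above a long-window one, often read as the start of an upturn. A sketch using 7-day and 28-day averages over the same table (window functions cannot appear in WHERE, hence the second CTE):

WITH avgs AS (
    SELECT
        date,
        AVG(daily_revenue) OVER (
            ORDER BY date
            ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
        ) AS avg_7day,
        AVG(daily_revenue) OVER (
            ORDER BY date
            ROWS BETWEEN 27 PRECEDING AND CURRENT ROW
        ) AS avg_28day
    FROM daily_metrics
),
flagged AS (
    SELECT
        date,
        avg_7day > avg_28day AS short_above,
        LAG(avg_7day > avg_28day) OVER (ORDER BY date) AS prev_short_above
    FROM avgs
)
SELECT date AS crossover_date
FROM flagged
WHERE short_above AND NOT prev_short_above;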
Practice Problems
For more window function practice, including moving averages, visit the window functions practice page.
Key Takeaways
- Simple moving averages with ROWS BETWEEN are the most common interview pattern.
- Understand the difference between ROWS and RANGE frames.
- Know how to handle edge cases like incomplete windows and missing dates.
- Weighted and exponential moving averages are advanced topics that demonstrate deeper SQL knowledge when you can explain them clearly.