The most commonly used Demand Metrics in the profession are:

- **Forecast Attainment:** How much of the forecast we actually attained; in essence, a comparison of Sales to Forecast from a prior period.
- **Forecast Bias:** The sum of signed forecast errors divided by either actuals or forecast.
- **Mean Absolute Percent Error (MAPE):** The traditional MAPE used by academics to infer the quality of the model, or Model Fit.
- **Weighted Absolute Percent Error (WMAPE):** The classic weighted MAPE used to measure SKU-level forecast error in most supply chains; volume-weighted.
- **Mean Percent Error:** An average of the individual SKU-level MAPEs; not a very useful measure.
- **Root Mean Squared Error (RMSE):** The square root of the average of squared errors; a more rigorous measure, since it weights larger errors more heavily.
- **Rolling out-of-sample errors:** The average error of the same forecast at different lags, using a different hold-out sample in each run of the forecast.
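The metrics above can be sketched in a few lines of plain Python. This is a minimal illustration on hypothetical sample data, not production code; the variable names and figures are invented for the example.

```python
# Hypothetical actuals and forecasts for five SKUs (or five periods).
actuals = [100, 120, 80, 150, 110]
forecasts = [90, 130, 85, 140, 100]

# Signed errors: forecast minus actual.
errors = [f - a for a, f in zip(actuals, forecasts)]

# Forecast Attainment: total actuals as a share of total forecast.
attainment = sum(actuals) / sum(forecasts)

# Forecast Bias: sum of signed errors over total actual volume.
bias = sum(errors) / sum(actuals)

# Weighted MAPE (WMAPE): sum of absolute errors over total actual volume.
wape = sum(abs(e) for e in errors) / sum(actuals)

# Mean Percent Error: simple average of the individual SKU-level percent errors.
mpe = sum(abs(e) / a for e, a in zip(errors, actuals)) / len(actuals)

# RMSE: square root of the mean squared error; penalizes large misses more.
rmse = (sum(e ** 2 for e in errors) / len(errors)) ** 0.5
```

Note how WMAPE and Mean Percent Error differ: WMAPE pools the absolute errors before dividing, so high-volume SKUs dominate, while the simple average treats every SKU equally.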

One of the most commonly asked questions on this website concerns the denominator for MAPE. Why do we recommend using Actual Demand instead of the forecast as the denominator? This question comes in many forms.

A reader from Australia asks: "What are the merits of dividing the error by Actual Vs Forecast?"

Another reader from Philadelphia asks: "I am intrigued by your example for calculating Percent Error for a forecast. Why would your formula not be (Actual - Forecast)/Forecast? Would Forecast not be the baseline measurement?"

Traditionally, the forecast was the baseline measurement, since senior management was interested in how actual sales compared to the forecast or plan. However, as a performance measure this can create some subtle biases, especially when used to measure how a deviation compares to the expectation.

The forecast will be the baseline measurement if our only goal is to beat the forecast. For example, if Sales personnel are incentivized by how much they beat the forecast target, then of course we want to use the forecast as the denominator. But this hardly does any good for the supply chain. Beating the forecast by a whisker is good; beating it by a mile is not.

So we want a measure that emphasizes the magnitude of the error rather than how it compares to a baseline. In a low-error business, the denominator becomes a moot point: since actuals will be close to the forecast when the error is small, the bias introduced is of second-order importance.

However, if the error is of some magnitude, there is potential to play games with the forecast error measure through what is called denominator management. If the error is divided by the forecast, this may introduce some forecasting biases. The most important is the incentive to artificially over-forecast: at the margin, over-forecasting reduces the percentage error and increases measured accuracy. So when in doubt, the forecast will be high.
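A small numeric sketch makes the incentive concrete. In this hypothetical one-SKU example, the forecast misses an actual demand of 100 units by 50 units in either direction; only the choice of denominator changes the reported percent error.

```python
actual = 100  # hypothetical actual demand

for forecast in (50, 150):  # under- vs over-forecast, same 50-unit miss
    abs_error = abs(actual - forecast)
    pe_forecast_denom = abs_error / forecast  # denominator the planner controls
    pe_actual_denom = abs_error / actual      # denominator no one controls
    print(f"forecast={forecast}: "
          f"error/forecast={pe_forecast_denom:.0%}, "
          f"error/actual={pe_actual_denom:.0%}")
```

With the forecast in the denominator, the over-forecast reports 33% error while the equally bad under-forecast reports 100%, a clear reward for forecasting high. With actuals in the denominator, both misses report the same 50% error.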

You may want to verify this by constructing a case study using your current clients: among the clients that use the forecast in the denominator, observe how often they over-forecast.

So it is better to divide by actuals, since actual demand is under no one's control. Although this may introduce some under-forecasting bias, it is not as severe as the opposite bias. Hence we traditionally divide the error by actual demand to arrive at the classic MAPE.
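In formula terms, the classic volume-weighted MAPE with actuals in the denominator, as recommended above, can be written as:

```latex
\mathrm{WMAPE} = \frac{\sum_{t} \lvert A_t - F_t \rvert}{\sum_{t} A_t} \times 100\%
```

where \(A_t\) is actual demand and \(F_t\) the forecast for SKU or period \(t\). The denominator \(\sum_t A_t\) is fixed by the market, so no forecasting choice can shrink the reported error.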

Here is an interesting site that describes forecast error and implications of under and over forecasting.

Please contact us if you have more questions on the quantitative implications of bias from denominator management.

©2004-2016 by Demand Planning, LLC. All rights reserved.

