1. Introduction
In today’s data-driven world, knowing how to use data properly is essential. Retailers know this well and make sales forecasts in order to plan ahead. Forecasts help them decide how much product to order, keep in stock, and put in storage, reducing costs while keeping customers supplied.
Yet, how can we know that our forecast is good enough? How can we measure its accuracy?
In this tutorial, we’ll analyze different ways to do so.
2. Problem
Let’s imagine we have a small business that sells gallons of milk in a village. We want to predict how much we’ll sell next week, from Monday to Wednesday. We know that most people in this village buy a lot of milk on Mondays, barely any on Tuesdays, and run out of it by Wednesday. Taking previous sales into account, we create a simple model and come up with this forecast (here compared to the actual sales we obtain later):
              Monday   Tuesday   Wednesday
Forecast        55        2         50
Actual sales    50        1         50
Now, we can calculate the average error of the forecast using the Mean Absolute Error (MAE) formula:

$$MAE = \frac{1}{n}\sum_{t=1}^{n} |A_t - F_t|$$

where $F_t$ is the forecast and $A_t$ the actual value at time $t$.
This results in a Mean Absolute Error of 2. On its own, this is not very informative: it only tells us that the forecast was off by 2 gallons of milk per day on average. We need a measure that relates the error to the size of our sales.
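To make the calculation concrete, here is a minimal Python sketch (not part of the original example) that computes the MAE for the table above; the names forecast, actual, and mae are purely illustrative:

```python
# Illustrative sketch: Mean Absolute Error for the milk example.
forecast = [55, 2, 50]   # Monday, Tuesday, Wednesday
actual = [50, 1, 50]

def mae(actual, forecast):
    # Average of the absolute differences between actual and forecasted values.
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

print(mae(actual, forecast))  # 2.0
```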
3. MAPE
MAPE is one of the most common methods to measure forecast accuracy. It stands for Mean Absolute Percentage Error and measures the percentage error of the forecast relative to the actual values. Since it averages the error over time periods or products, it doesn’t differentiate between them: it assumes no preference for predicting one day or one product better than another. It is calculated as follows:

$$MAPE = \frac{1}{n}\sum_{t=1}^{n} \left| \frac{A_t - F_t}{A_t} \right| \times 100\%$$
Calculating it for our forecast results in:
              Monday   Tuesday   Wednesday   Total
Forecast        55        2         50
Actual sales    50        1         50
MAPE           10%      100%        0%       36.7%
Here, we can see the main weakness of MAPE: when sales are low, its value blows up and can give a deceiving result, as is the case here.
Even though the forecast is off by just 6 gallons in total out of the 101 gallons actually sold, the MAPE is 36.7%. This is because we predicted we would sell 2 gallons on Tuesday but ended up selling only 1, which results in an error of 100% for that day. To solve this, WAPE should be used instead.
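Under the same assumptions as before, a minimal Python sketch of the MAPE calculation could look like this (the mape helper is illustrative and assumes no actual value is zero):

```python
# Illustrative sketch: Mean Absolute Percentage Error for the milk example.
forecast = [55, 2, 50]   # Monday, Tuesday, Wednesday
actual = [50, 1, 50]

def mape(actual, forecast):
    # Average of the per-period percentage errors (undefined if any actual is zero).
    return 100 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

print(round(mape(actual, forecast), 1))  # 36.7
```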
4. WAPE
WAPE, also referred to as the MAD/Mean ratio, stands for Weighted Average Percentage Error. Instead of averaging the daily percentage errors, it weights the error by the total sales, dividing the total absolute error by the total actual sales:

$$WAPE = \frac{\sum_{t=1}^{n} |A_t - F_t|}{\sum_{t=1}^{n} A_t} \times 100\%$$
In our example:
              Monday   Tuesday   Wednesday   Total
Forecast        55        2         50        107
Actual sales    50        1         50        101
WAPE                                           5.9%
Now the error makes much more sense: 5.9%. When total sales can be low, or the product analyzed has intermittent sales, WAPE is recommended over MAPE.
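Again as an illustrative sketch, the WAPE calculation in Python could look like this (the wape helper name is ours, not from any library):

```python
# Illustrative sketch: Weighted Average Percentage Error for the milk example.
forecast = [55, 2, 50]   # Monday, Tuesday, Wednesday
actual = [50, 1, 50]

def wape(actual, forecast):
    # Total absolute error divided by total actual sales.
    return 100 * sum(abs(a - f) for a, f in zip(actual, forecast)) / sum(actual)

print(round(wape(actual, forecast), 1))  # 5.9
```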
5. WMAPE
As mentioned before, neither MAPE nor WAPE takes into account possible differences in priority between products or points in time. In our example, let’s say Monday is the most important day to predict. How can we reflect this in our prediction error?
WMAPE can be used when the use case requires this. It stands for Weighted Mean Absolute Percentage Error and is calculated as follows:

$$WMAPE = \frac{\sum_{t=1}^{n} w_t \, |A_t - F_t|}{\sum_{t=1}^{n} w_t \, A_t} \times 100\%$$

where $w_t$ is the weight assigned to time $t$.
This formula allows us to assign weights, and therefore importance, to different days or products.
In our example, let’s give Monday an importance of 80% and the other two days 10% each. This results in:
              Monday   Tuesday   Wednesday   Weighted Total
Forecast        55        2         50            492
Actual sales    50        1         50            451
Weight           8        1          1             10
WMAPE                                             9.1%
Thus, taking the importance of Monday into account, we end up with a WMAPE of 9.1%.
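As a final sketch, here is how the weighted calculation could be expressed in Python, assuming the same example values and the weights from the table above (the wmape helper is illustrative):

```python
# Illustrative sketch: Weighted Mean Absolute Percentage Error with per-day weights.
forecast = [55, 2, 50]   # Monday, Tuesday, Wednesday
actual = [50, 1, 50]
weights = [8, 1, 1]      # Monday weighs 8 times more than each of the other days

def wmape(actual, forecast, weights):
    # Weighted total absolute error divided by weighted total actual sales.
    num = sum(w * abs(a - f) for w, a, f in zip(weights, actual, forecast))
    den = sum(w * a for w, a in zip(weights, actual))
    return 100 * num / den

print(round(wmape(actual, forecast, weights), 1))  # 9.1
```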
6. Conclusion
In this article, we have seen three different ways to measure forecast accuracy and how to apply them. There is no perfect measure for every problem; rather, the measure should be chosen depending on the use case.
MAPE is commonly used to measure forecasting errors, but it can be deceiving when sales are close to zero or intermittent. WAPE counters this by weighting the error over total sales. WMAPE is used when the use case requires prioritizing certain sales: it assigns higher weights to the prioritized items, biasing the prediction error towards them.
All these metrics are symmetric, which means they don’t take into account whether the forecast over-predicts or under-predicts. This can be relevant for some problems (having too much stock is not the same as having too little) and should be taken into account.