- One of the world’s largest retailers of branded and white-labeled products offers a freshly cooked product as a strategic loss leader
- Its demand forecasting for this product’s production was inaccurate because it was done manually using only a few historical data points.
- Its goal was to reduce waste, increase sales, and decrease customer dissatisfaction for this product.
- Neal Analytics worked with the retailer to build a solution that provided them a more accurate sales forecast.
- The Machine Learning powered demand forecasting solution reduced forecast error to below 10% per day at a 30-minute granularity level.
The client is one of the world’s largest retailers, offering groceries, prepared meals, and a wide variety of branded and white-labeled products such as electronics, food and beverage, clothing, home appliances, furniture, and more at heavily discounted prices.
Among other products, the retailer is well known for its freshly cooked, ready-to-eat meat, which it sells at a price that has not changed in over a decade. It is a strategic loss leader whose goal is to give customers an incentive to come to the store and, in turn, buy more profitable items.
Already operating at a loss for this product, the retailer wanted to improve operational efficiencies and reduce demand forecasting errors to under 10% with 30-minute accuracy. However, their existing demand forecasting for this product had been done manually and wasn’t accurate enough.
The retailer’s team used the sales data from the same day the prior year (or the sales for the previous few weeks) and added 5%. This simple forecasting model lacked accuracy and led to significant product waste. Its relatively new analytics team needed a more accurate demand forecasting solution.
It was also critical, in the overall business goal-setting process, to select the appropriate target. In this situation, the models could be optimized for either waste reduction or sales maximization. The former would ensure that all or most of the cooked product was sold, so little would have to be scrapped or recycled into even lower-margin products. The latter would have focused on consistently cooking enough product to meet most or all of consumer demand, but it would increase waste significantly.
The primary client goal was waste reduction; therefore, the Neal Analytics team built a model with this primary business goal in mind.
Neal Analytics faced several challenges in building an accurate demand forecasting solution. Two of them were:
- The client could not provide any historical data from its existing demand forecast process, which would have served as a benchmark for comparison against Neal Analytics’ predicted results.
- The client did not have processes or metrics in place to track relevant data at a granularity fine enough to support a more accurate forecast.
Neal Analytics’ approach to building an advanced demand forecasting solution followed these steps:
- Assess the currently available data and the limitations of the current forecast method.
- Identify the types of data to track in order to build a robust forecasting solution.
- Develop a plan for Neal Analytics to deliver a short-term pilot and a long-term scalable solution.
The goal was first to build a working pilot, then run it through proprietary test suites, and finally take incremental steps towards a production-ready solution.
STAGE 1: Assessment and rapid insights
This stage set clear business objectives and performance metrics. The team assessed parameters such as costs, benefits, feasibility, and timelines. It also compiled all available data sources that would be used for the solution.
STAGE 2: Working pilot
The initial model helped identify limitations and estimate the required production-level development effort.
STAGE 3: In-market in four stores – Phase 1
An incremental model with proven results was deployed in the cloud and used to test the forecast in four stores. This model partially consumed real data from the client.
STAGE 4: In-market in four stores – Phase 2
A generalized model infrastructure was deployed on the Microsoft Azure cloud to support the first stores and prepare for scaling to more.
STAGE 5: Functionally optimized
The final production-fit solution was deployed. This solution used automated mechanisms to improve its machine learning algorithms based on new hourly and daily data.
The team used the forecasting model from the first pilot store as a benchmark for the other three test stores. There, the solution was modified to suit the specific factors affecting each store’s demand patterns.
For instance, at a store close to a large college, school breaks played a significant role in sales.
Because accuracy was required at both a daily and a 30-minute granularity, and because consumer purchasing behavior differed from store to store, the Neal Analytics team had to build a multi-tier, store-specific set of models to achieve an acceptable result.
To achieve this, the demand forecasting solution from Neal Analytics has three steps.
- Step 1: Generate 30-minute forecasts for an entire day (the “sub-daily” model).
- Step 2: Generate the daily forecast by creating an ensemble of models.
- Step 3: Scale the 30-minute forecasts from step 1 using the total of the daily forecast from step 2.
The sub-daily model used FB Prophet, a cutting-edge open-source forecasting algorithm developed by Facebook, to identify the number of products that would sell at a particular time of day.
For example, the table on the right shows the type of prediction for morning sales it would produce.
With the sub-daily model, the retailer was able to identify the times of day at which sales were highest. However, the number of units sold was not accurate enough: while the model was precise at the 30-minute level, the errors accumulated over these periods across a full day were too large to be satisfactory.
For the daily forecast, Neal Analytics built a so-called “ensemble model.”
This ensemble model combines the predictions of three separate algorithms: FB Prophet, a statistical algorithm combining historical data across multiple periods, and a simple linear historical model.
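A minimal sketch of how such an ensemble might combine the three daily predictions. The weights and function name here are hypothetical illustrations, not the actual implementation, which is not described in detail:

```python
def ensemble_daily_forecast(prophet_pred, statistical_pred, linear_pred,
                            weights=(0.5, 0.3, 0.2)):
    """Combine three daily forecasts into one via a weighted average.

    The weights are illustrative; in practice they would be tuned
    per store against historical forecast accuracy.
    """
    preds = (prophet_pred, statistical_pred, linear_pred)
    return sum(w * p for w, p in zip(weights, preds))

# e.g. the three models predict 520, 480, and 500 units for the day
daily_forecast = ensemble_daily_forecast(520, 480, 500)  # roughly 504 units
```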
The final step scales the initial 30-minute level prediction based on the ensemble daily forecast.
For instance, if the daily forecast in step 2 was 500, and the sum of all the 30-minute predictions from step 1 was 400, each 30-minute forecast would be scaled up by 500/400, a factor of 1.25. For example, if the 9:00–9:30 am forecast was 15 products, the final forecast would be 19 products (1.25 × 15, rounded up).
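Reusing the numbers from the example above, the scaling step can be sketched in a few lines of Python (the function name is hypothetical; rounding up is assumed so a slot never under-stocks by a fraction of a product):

```python
import math

def scale_slot(slot_forecast, daily_total, sub_daily_total):
    """Scale one 30-minute forecast by the ratio of the daily ensemble
    total to the summed sub-daily total."""
    factor = daily_total / sub_daily_total   # e.g. 500 / 400 = 1.25
    return math.ceil(slot_forecast * factor)  # round up to whole products

# A 9:00-9:30 am slot of 15 units becomes ceil(15 * 1.25) = 19 units
scale_slot(15, daily_total=500, sub_daily_total=400)  # -> 19
```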
Each of the three models had strengths and weaknesses. There were also factors specific to a particular location. For instance, one store close to the Canadian border attracted many Canadian customers because of better prices and lower sales tax.
Some factors considered for the machine learning models were:
- Sales data from previous years
- Weather and seasonality
- Time of year
- Local environment
- Day of the week
- Neighboring high schools and colleges
- Local cultural and sporting events, such as a Seahawks game
- Feedback from store staff, recorded in Excel sheets comparing actual sales with the daily forecasts from the deployed pilot
The latter provided insight into what the model was missing or failing to understand at the most granular level. For example, a school rush would cause an ad-hoc spike in sales at a particular store on a particular day. Such unpredictable factors could not otherwise have been built into the model.
For this engagement, Neal Analytics built a total of 28 models (one model per store per day of the week: 7 days × 4 stores). Before these new models, the client estimated the actual waste to be well over 10% at all stores across the US. After implementing Neal Analytics’ machine learning-powered demand forecasting solution, waste was consistently below the project goal of 10%.
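The per-store, per-weekday structure can be sketched as a simple lookup over 4 × 7 = 28 models. All names here are hypothetical stand-ins; in the real solution each entry would be a trained forecasting model rather than a string:

```python
from datetime import date

STORES = ["store_1", "store_2", "store_3", "store_4"]  # hypothetical IDs
WEEKDAYS = range(7)  # Monday = 0 .. Sunday = 6

# One model per (store, day-of-week) pair: 4 stores x 7 days = 28 models.
models = {(s, d): f"model_{s}_day{d}" for s in STORES for d in WEEKDAYS}

def pick_model(store, forecast_date):
    """Select the store- and weekday-specific model for a given date."""
    return models[(store, forecast_date.weekday())]

len(models)                            # 28 models in total
pick_model("store_2", date(2021, 3, 15))  # a Monday -> "model_store_2_day0"
```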
The model also helped the client to better understand future sales trends and reduce product recycling.