Wednesday, June 2, 2010
Best Practice: Forecasting
By Dr. F. Barry Lawrence
Texas A&M University
In the previous blog, we addressed growth and sustainability while maintaining return-on-investment (ROI). Many best practices exist and have been well implemented in information systems to deal with assets and expenses. Increasing revenues is the third part of the equation and the required ROI level is based on the level of risk.
Risk is driven by forecasting. If we miss our forecast on revenues, expenses, or asset needs, we risk not hitting the ROI target. To ensure they meet their ROI requirement, firms set a higher hurdle on activities with high potential forecast error. The most common cause of misestimating expenses and asset needs, however, is a missed revenue forecast.
The forecasting best practice has advanced dramatically since the introduction of enterprise resource planning (ERP) systems. While the process was hampered by data integrity problems, great strides have been made, and historical forecasting has become more stable. Challenges remain, however, especially where data is scarce.
The current No. 1 best-selling NAW Institute for Distribution Excellence book, Optimizing Distributor Profitability: Best Practices to a Stronger Bottom Line (available at http://www.naw.org/optimizdistprof), details best practices, their implementation, and ROI. These practices are valid in any economy, but the significance of one best practice versus another may change under different market conditions. Each month in this blog, we have introduced a best practice and how it can improve earnings and/or ROI under current economic conditions. We encourage you as you participate in this blog to ask questions, debate results, and offer your own experiences with such practices, so that we may further the knowledge of the community and the understanding of the science of distribution.
The book breaks business processes into seven groups (SOURCE, STOCK, STORE, SELL, SHIP, SUPPLY CHAIN PLANNING, and SUPPORT SERVICES) based on various distributor asset categories. This month, we focus on STOCK as shown in exhibit 1.
Best Practice: Forecasting
Forecasting best practices are divided into three areas: historical (statistical), combination, and collaborative. Commonly used statistical models have been around for many years, and most improvements have been about the forecasting process and not mathematics. Combination forecasting is a rigorous process that can only be conducted on a few key items. Finally, collaborative forecasting requires customer and/or supplier input and is even more work intensive. Human expertise can add great value to forecasting, but it can add bias as well.
The practice levels for Forecasting are as follows:
COMMON practice: A single historical forecast method applied to all products calculated by the system and frequently overridden by purchasing professionals. No use of forecast error metrics.
GOOD practice: Multiple forecasts are run and measured. The best performing forecasting model is selected based on error metrics. Combination forecasting is used where error rates are unacceptably high.
BEST practice: Combination forecasting is applied and augmented by additional information gathered through supply chain alliances with customers and suppliers (collaborative forecasting). Mathematical modeling may be applied through regression techniques.
ERP systems have enabled extensive statistical forecasting wherever data integrity allows. Forecasting models stretching back over 50 years are common in most ERP systems or can easily be pulled from textbooks and built into spreadsheets. These techniques include moving averages, exponential smoothing, models for seasonality and trend, and methods that compare all of these and choose the best-performing model for each product being forecast.
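As a minimal sketch of two of these classic models (the blog itself contains no code, so the demand figures, window, and smoothing constant below are invented for illustration):

```python
# Two classic historical forecasting models, applied to invented monthly
# demand figures for a single hypothetical SKU.

def moving_average(history, window=3):
    """Forecast next period as the mean of the last `window` observations."""
    return sum(history[-window:]) / window

def exponential_smoothing(history, alpha=0.3):
    """Forecast next period as an exponentially weighted average of history,
    where alpha sets how heavily recent demand is weighted."""
    forecast = history[0]
    for demand in history[1:]:
        forecast = alpha * demand + (1 - alpha) * forecast
    return forecast

demand = [120, 135, 128, 140, 150, 145]  # six months of unit demand
print(moving_average(demand))                   # 145.0
print(round(exponential_smoothing(demand), 1))  # 138.6
```

In practice the ERP system runs models like these across the whole product file; the point is that the arithmetic itself is simple, and the hard part is the surrounding process.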
The forecast error metrics can measure the average magnitude of error (high or low), the average overall error, and many other views. The error metric (or combination of metrics) that management feels best minimizes the risk of stockout, expenses, or necessary assets (inventory) can then be used to pick the forecast model to trust.
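A hedged sketch of how such metrics can drive model selection (the metric names follow common usage; the actuals and candidate forecasts are invented):

```python
# Two common error metrics and a simple "pick the model to trust" step.

def mad(actuals, forecasts):
    """Mean absolute deviation: average magnitude of error, high or low."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

def bias(actuals, forecasts):
    """Mean error: average overall error (positive means under-forecasting)."""
    return sum(a - f for a, f in zip(actuals, forecasts)) / len(actuals)

actuals = [100, 110, 95, 120]
candidates = {
    "model A": [105, 100, 100, 110],  # e.g., a moving average
    "model B": [90, 130, 80, 140],    # e.g., a trend model
}

# Trust the model that minimizes whichever metric management selects.
best = min(candidates, key=lambda name: mad(actuals, candidates[name]))
print(best)  # "model A" on these figures
```

Management might instead weight bias more heavily if the cost of under-forecasting (stockouts) outweighs the cost of over-forecasting (inventory); the selection step is the same, only the metric changes.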
Most distributors do not optimize this mathematical modeling process. They believe that forecasting will never be accurate due to erratic customer behavior, data integrity problems, long supplier lead times, and/or a lack of skills in their purchasing team. They instead let their IT provider set up the system with minimal input. They do not investigate causes of customer ordering patterns, root out data integrity problems, collaborate with suppliers to smooth out lead times, train their purchasing staff, or even try to understand how to measure forecasting and set requirements.
There is no question that even with the best-designed mathematical process, the resulting forecasts will still be highly inaccurate. This is no excuse, however, to not get this phase right. Once the mathematical forecast is properly designed, it acts as the foundation for all that follows. If not properly set up, it will confound all other efforts. Leaving forecasting to external parties (IT consultants) and not training your team is unacceptable.
If the forecast is not trusted, people will deploy inventory to protect against stockouts. Nobody knows the true cost of a stockout, but everyone agrees it far exceeds the cost of inventory. This problem is the number one cause of excess inventories and customer service failures. The distributor has only so many resources it can apply to serve the customer. Ineffective forecasting guarantees there will be too much inventory in some products and customer service failures in others. The role of forecasting is shown in exhibit 2.
Decreasing the Forecasting Burden
If we build the right mathematical process, the next step will be to apply human evaluation to forecasts with excessive error rates. This process is called combination forecasting and has further reduced forecast errors in applications we’ve seen by as much as 50% over mathematical models alone. Effective combination forecasting is a time-consuming process, however.
The steps are simple in concept but very difficult in practice. First, the mathematical model is run to the greatest efficiency possible. Second, the error metrics identify which forecasts are still performing poorly, and those are submitted to purchasing experts to investigate for data integrity problems and trends that are not captured in the data (for example, sharp recessions like 2008, product shortages that will cause long lead times, etc.).
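The second step amounts to a simple filter. A sketch, assuming a 30% review threshold (the cutoff and SKU names are invented):

```python
# Route only the forecasts that are still performing poorly to human experts.

REVIEW_THRESHOLD = 0.30  # illustrative: flag items with error above 30%

measured_error = {   # item -> error rate from the statistical forecast
    "SKU-1001": 0.12,
    "SKU-1002": 0.45,
    "SKU-1003": 0.08,
    "SKU-1004": 0.38,
}

flagged = [sku for sku, err in measured_error.items() if err > REVIEW_THRESHOLD]
print(flagged)  # only these go to purchasing for investigation
```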
Purchasing teams will not have sufficient time to conduct this type of analysis on the 20,000+ products that many distributors carry. To be feasible, the job must be reduced to only the most critical items that require attention. The first step is to use inventory stratification to reduce the purchasing task for both the human and mathematical modeling. “A” and “B” items with high error rates will have to be reviewed. “D” items should be eliminated from inventory and do not need to be forecasted. “C” items, as a previous blog suggested, should only be reordered at the supplier minimums and have little to no reorder point. The correlation between inventory rank and demand stability index (DSI) is shown in exhibit 3.
This process leaves only “A” and “B” items to be forecasted. Since most distributors who have properly implemented inventory stratification find more than 70% of their inventory to be “C” or “D,” this process tremendously reduces the forecasting task. “A” items are well behaved and will typically have low error rates. Properly setting inventory stratification is critical, however, to prevent too many items from being put into the “A” and “B” categories. The forecasting framework is shown in exhibit 4.
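A toy stratification pass, assuming rank is assigned by annual sales dollars alone (real implementations typically combine several criteria; the cutoffs and SKUs here are invented):

```python
# Stratify items and keep only "A" and "B" items in the forecasting workload.

annual_sales = {
    "SKU-1": 500_000, "SKU-2": 220_000, "SKU-3": 40_000,
    "SKU-4": 8_000,   "SKU-5": 900,     "SKU-6": 0,
}

def stratify(sales):
    if sales >= 200_000:
        return "A"
    if sales >= 20_000:
        return "B"
    if sales > 0:
        return "C"   # reorder at supplier minimums, little to no reorder point
    return "D"       # no movement: candidate for elimination

ranks = {sku: stratify(sales) for sku, sales in annual_sales.items()}
to_forecast = [sku for sku, rank in ranks.items() if rank in ("A", "B")]
print(to_forecast)  # half the file drops out of the forecasting task
```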
Forecasting with Unreliable Data
When there is no history or the data is scarce, forecasting becomes extremely difficult. New product introductions, new customers, new territories, and other growth issues covered in May’s blog are very difficult to forecast. If we do not forecast, however, we have no idea what resources will be needed, and we cannot estimate what assets will be used or what expenses will be incurred. Nor can we predict what revenues will be produced. This lack of information produces a high level of risk, driving up our ROI requirement and thereby killing many initiatives.
Forecasting without historical data is difficult but not impossible. The next level of forecasting, collaborative, is rarely used for established products due to its work-intensive nature and alliance problems. Collaborative forecasting is work intensive because collecting information from customers and suppliers is time consuming and requires human and system handoffs of data that will produce data-integrity issues.
Some large customers have used collaborative forecasting by giving their suppliers the customers’ forecasts and expecting them to follow them. Many distributors report instances where the forecasts were so inaccurate that the motive seemed to be to inflate the distributor’s inventory rather than to make the supply chain more efficient.
Suppliers often have valuable information to share as well. Since a supplier can see how a new product or customer base performs in other regions, it can provide a distributor with valuable information. Often, however, a supplier will inflate these numbers to encourage a distributor to invest.
Other sources can include government-collected data (for example, housing starts, population growth, production numbers, etc.) or data from previous investments of a similar nature.
Collecting these various data sources is work intensive, and the forecasting results may be questionable due to the many potential sources of error. Therefore, collaborative forecasting is rarely used for standard forecasting processes, but is the only alternative when data is not available. Two common ways to calculate a collaborative forecast are human estimation and statistical regression.
Human estimation is most popular, since it is quick and flexible. Contrary to popular belief, it can also be fairly accurate. The key is accountability. When people engage in forecasting, the information delivered to them must be as accurate as possible, and their results must be measured. Measuring human forecast error will create a process whereby the expert will investigate which data gave them the best results and how to improve other sources of data.
Statistical regression is a mathematical process that conducts a very similar analysis. All numerical data (for example, supplier estimates, customer forecasts, similar investment results, government data, etc.) are fed into a mathematical model that will produce a forecast and simultaneously will determine which information sources contributed to forecast accuracy and which did not. Those that did not contribute can be improved for better results or can be eliminated, reducing the data collection workload.
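As a sketch of the idea with a single external signal (the housing-start and demand figures are invented and deliberately simple), a least-squares fit looks like:

```python
# Simple least-squares regression of demand on one external indicator
# (say, government housing starts) for a product with little history.

housing_starts = [100, 120, 140, 160]  # indicator, by quarter
demand         = [210, 250, 290, 330]  # observed demand, by quarter

n = len(demand)
mean_x = sum(housing_starts) / n
mean_y = sum(demand) / n
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(housing_starts, demand)) \
        / sum((x - mean_x) ** 2 for x in housing_starts)
intercept = mean_y - slope * mean_x

next_quarter_starts = 180
forecast = intercept + slope * next_quarter_starts
print(forecast)  # 370.0 on these made-up figures
```

With several candidate sources the same idea extends to multiple regression, and sources whose coefficients contribute nothing to accuracy can be improved or dropped, reducing the data collection workload.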
A Cautionary Tale
A building materials distributor developed a complex combination forecasting method. Data was fed into the system and multiple forecasting models were run on the data. The forecasting model that produced the lowest error rate was applied and the best result was continuously measured. Where forecast error on “A” and “B” items was higher than 30% and 40% respectively, the forecast was flagged for purchasing to review. The process dramatically reduced forecast error and inventory at first.
The settings for what qualified as “A” and “B” inventory were too loose, however, and the task soon became overwhelming for purchasing. Purchasing experts were also not trained in how to evaluate forecasts and data. The team began to trust the system even when they should not have, and many high-profile forecasting errors took place, causing many questions to be raised. It was easier to blame the IT system than to accept blame for the lack of training and a less-than-rigorous human process.
The distributor decided to discontinue the process and revert to informal forecasting. Inventories increased and profitability dropped. New management came in and hired a new set of consultants to improve forecasting. A new IT solution, nearly identical to the previous one, was introduced, again without proper training, settings, measurements, etc. Since many people remembered the signs of failure and lacked confidence in the new application, it took even less time to fail.
A Better Result
A chemical distributor developed a forecasting process that encompassed the sales force, customers, and suppliers. The distributor sold different products in different regions, since they sold primarily to large customers. Each salesperson, therefore, had a few customers buying a very few products.
The distributor had alliances with suppliers where they accounted for more than 80% of the supplier’s production. If the distributor’s forecast was inaccurate, the supplier had little chance of helping the distributor recover. Since the products were very specific, perishable, and could not be acquired elsewhere, it was imperative that the forecasting was accurate.
The sales force was presented every three months with a historical forecast for the products they sold in their region. The forecast was spreadsheet-based, since no information system could support the process that followed. The salesperson was expected to work with customers and reach an agreement on what would be produced and consumed. The salesperson was evaluated and his income was directly tied to the accuracy of the forecast.
Since the customer was consulted and understood the gravity of missing the forecast, forecast error was very low and manufacturing scheduling was effective.
Train, Train, and Then Train Some More
Forecasting is one of the most information-intensive processes. While information systems are where it all begins, human interaction is required for critical products and uncertain environments. Well-designed systems, proper collaboration, and accountability all play their parts, but nothing as complicated as forecasting can succeed without well-trained sales and purchasing teams.
About this Blog
“Managing in an Uncertain Economy” is a blog created by the Council for Research on Distributor Best Practices (CRDBP). The mission of the CRDBP, created by the NAW Institute for Distribution Excellence and the Supply Chain Systems Laboratory at Texas A&M University, is to create competitive advantage for wholesaler-distributors through development of research, tools, and education. CRDBP encourages readers of this blog to send in comments and e-mail this blog to other interested parties.