I spent many years working with customers who run two - yes, two - forecasts, and for excellent reasons. The first is usually based on the financial budget for the year, which is copied and adjusted to become the active forecast. The second is a demand forecast that drives replenishment and is usually based on detailed analysis of actual activity.
"The way to get started is to quit talking and begin doing” – Walt Disney
The financial budget/forecast exhibits the following characteristics:
The demand forecast has the following characteristics:
Comparison between the Demand Forecast and the Financial Budget/Forecast gives early warning of financial uplift or shortfall. However, the demand forecast is sensitive to activity and does not represent a target; that is the mandate of the budget/financial forecast. People are accountable for the financial performance of the organisation, and the fluctuating forecast generated by the Demand Forecasting application is not what they are measured against. Don't lose this messaging through over-engineering the comparison, but ignore the insight provided by actual activity at your peril.
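As a minimal sketch of the kind of early-warning comparison I mean - the periods, figures and the 5% tolerance are all illustrative assumptions, not customer data:

```python
# Minimal sketch: early-warning comparison of a demand forecast
# against the financial budget. All figures and the 5% tolerance
# are illustrative assumptions.

budget = {"Jan": 100_000, "Feb": 110_000, "Mar": 120_000}
demand_forecast = {"Jan": 98_000, "Feb": 101_000, "Mar": 131_000}

TOLERANCE = 0.05  # flag periods where the gap exceeds 5% of budget

for period, budgeted in budget.items():
    forecast = demand_forecast[period]
    variance = (forecast - budgeted) / budgeted
    if abs(variance) > TOLERANCE:
        direction = "uplift" if variance > 0 else "shortfall"
        print(f"{period}: projected {direction} of {variance:+.1%}")
```

The comparison stays deliberately simple: the point is the early warning, not a reconciliation exercise.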
The Demand Forecast is often based on a plethora of calculations and masses of historical data, and is not always well understood across the organisation. The algorithms applied can be difficult to digest, ranging from the simple average to the more complex exponential smoothing and mean absolute deviation - clearly the remit of our more numerate colleagues. However, when you distil the methods down, they equate to: if your forecasting is accurate, you need less safety stock; if inaccurate, a bigger reserve is required. I know this is a gross oversimplification, but the core objective of demand forecasting is to support the right product, in the right place, at the right time. Sophisticated forecasting engines will apply every algorithm to every product, every night, and select the best fit (the most accurate when compared to previous demand); the sketch below shows the idea in miniature. The amount of data captured to achieve this can be colossal.
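To make that concrete, here is a minimal sketch of nightly best-fit selection, assuming just two candidate algorithms and a made-up demand history; real engines run many more algorithms over far more data.

```python
# Minimal sketch of nightly "best fit" selection: apply several
# simple algorithms to a product's demand history and keep the one
# with the lowest mean absolute deviation (MAD). The history values
# and parameter settings are illustrative assumptions.

def moving_average(history, window=3):
    """One-step-ahead forecasts from a trailing moving average."""
    return [sum(history[i - window:i]) / window
            for i in range(window, len(history))]

def exponential_smoothing(history, alpha=0.3):
    """One-step-ahead forecasts from simple exponential smoothing."""
    level, forecasts = history[0], []
    for actual in history[1:]:
        forecasts.append(level)
        level = alpha * actual + (1 - alpha) * level
    return forecasts

def mad(forecasts, actuals):
    """Mean absolute deviation between forecasts and actuals."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(forecasts)

history = [120, 132, 101, 134, 90, 150, 140, 128]  # weekly demand for one product

candidates = {
    "moving_average": (moving_average(history), history[3:]),
    "exp_smoothing": (exponential_smoothing(history), history[1:]),
}
best = min(candidates, key=lambda k: mad(*candidates[k]))
print(f"best fit: {best}")
```

Now multiply that by every product, every location, every night, and you see where the data volumes come from. My advice is to ask yourself: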
We are already at the point where the amount of information held in Demand Forecasting systems outstrips an organisation's ability to inspect, refine and override it. Organisations invest in Demand Forecasting applications and exception-based reporting in the first place because of this very fact: the computer can process the data more quickly, more accurately and more cost-effectively than people can. Furthermore, many demand forecasting applications are product-centric and roll up sales order lines to the product level as the basis for projection. Although common, this approach slices off the single most important thing in your business: the customer. Organisations often end up holding similar data in marketing-centric applications to better understand customer behaviour. What is needed is a single source of truth used for multiple purposes. Do the maths using an example:
Number of Products: 100,000
Number of Trading Days: 365
Number of Sales Locations: 200
Number of Years’ History: 3
Number of Warehouses: 5
Combinations: 100,000 * 365 * 200 * 3 * 5 = 109,500,000,000
Admittedly, the number is a worst-case scenario, as not all products sell in all locations on all days, but the point is that the number is large. Now factor in multiple forecast algorithms projected forward over the next 12 weeks, and you start to understand the size of the datasets involved - the sketch below does the sums.
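A minimal sketch of those sums; the algorithm count, forecast horizon and customer count are assumptions added to show how quickly the figures grow:

```python
# Minimal sketch of the back-of-envelope sums above. The algorithm
# count and 12-week horizon are illustrative; the customer dimension
# at the end is an assumption to show how fast the numbers grow.

products, trading_days, locations, years, warehouses = 100_000, 365, 200, 3, 5

history_cells = products * trading_days * locations * years * warehouses
print(f"history combinations: {history_cells:,}")  # 109,500,000,000

algorithms, horizon_weeks = 10, 12
forward_cells = products * locations * warehouses * algorithms * horizon_weeks
print(f"forward projections:  {forward_cells:,}")

# Add a customer dimension (say 50,000 customers) and the history
# figure multiplies again - the cost of keeping the single source
# of truth customer-aware.
```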
However, this is the tip of the iceberg; the hard bit is deciding where to spend your precious budget to optimise your revenue and mix. How can this decision be made without knowledge of the customer in product-centric forecasting applications?
What we need is the ability to bring together the following:
Once assembled, these datasets are ripe candidates for decision-support/machine-learning applications that help further reduce the size of the "exception report" presented to colleagues. The underlying problem with exception reporting is the number of exceptions generated, which can often outstrip the organisation's ability to process the list. In such circumstances the exceptions are ignored, or we end up with the concept of prioritised exceptions. Are we having a laugh?!
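As a minimal sketch of the alternative - auto-accepting recommendations the machine is confident about so only the genuinely doubtful ones reach a colleague - with illustrative confidence scores and cut-off, not a tuned model:

```python
# Minimal sketch of using a learned confidence score to shrink the
# exception list: auto-accept recommendations the model is sure
# about and only surface the rest to colleagues. The scores and the
# 0.9 cut-off are illustrative assumptions, not a tuned model.

recommendations = [
    {"sku": "A100", "action": "reorder 40", "confidence": 0.97},
    {"sku": "B220", "action": "reorder 15", "confidence": 0.62},
    {"sku": "C310", "action": "no action", "confidence": 0.91},
    {"sku": "D440", "action": "reorder 80", "confidence": 0.48},
]

AUTO_ACCEPT = 0.9  # trust the machine above this confidence

exceptions = [r for r in recommendations if r["confidence"] < AUTO_ACCEPT]
accepted = len(recommendations) - len(exceptions)

print(f"auto-accepted: {accepted}, left for review: {len(exceptions)}")
for r in exceptions:
    print(f'  review {r["sku"]}: {r["action"]} ({r["confidence"]:.0%})')
```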
Current advances in Big Data and Machine Learning represent an exciting opportunity in the Supply Chain. Without being overly IT(ish), Big Data brings significant advances in computer technology that allow huge datasets to be processed in a massively parallel manner. Machine Learning places more trust in the decisions recommended by the computer; more automated acceptance reduces the "exceptions list". While I would not dream of competing with experts with proven track records in such technical areas, finding a valuable business use for this technology is sitting right in front of us.
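To illustrate the massively parallel point without getting IT(ish), here is a minimal sketch that spreads a per-product best-fit run across processor cores; the stub function, product count and pool size are all assumptions:

```python
# Minimal sketch of the "massively parallel" point: per-product
# forecast fits are independent, so the work spreads across cores
# (or, at scale, across a cluster). The fit function is a stub;
# product IDs and the pool size are illustrative.

from multiprocessing import Pool

def fit_best_forecast(product_id):
    """Stand-in for the nightly per-product best-fit run."""
    # ...load history, try each algorithm, keep the lowest-MAD fit...
    return product_id, "exp_smoothing"

if __name__ == "__main__":
    product_ids = range(1, 101)  # 100 products for the sketch
    with Pool(processes=8) as pool:
        results = pool.map(fit_best_forecast, product_ids)
    print(f"fitted {len(results)} products in parallel")
```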
For the record, the datasets associated with forecasting are considered "smallish" when compared to complex Big Data / Machine Learning models in, for example, biochemistry or mechanical engineering; the challenge for the commercial enterprise will be finding executive sponsorship for such endeavours.
If your organisation truly wants to be disruptive, give Crimson a call.
Crimson has a tried and trusted four-step process to enable an organisation to capitalise on its data assets; it avoids IT jargon and focuses on business need. The four steps are:
I am hopeful that this blog post gives some insight into why our customers invest in data; but without a good understanding of why it is needed, my advice is: don't start.
However, if you wish to engage Crimson in a free consultative workshop to help bring clarity to your thinking, you can follow this link and register your interest in Crimson’s free Data Pathfinder Workshop. We don’t claim to have all the answers, but we have a proven track record of bringing genuine benefit from new technology and are willing to work in partnership with our customers and other supply organisations to deliver a positive business outcome.