Production deployment of time series forecasting
Jan 06, 2021
Time series analysis is as vast a topic as time itself, and as such, there is far too much information to cover in one article. This article is the second in a series covering the many intricacies of time series analysis. Read part one, Incorporating time series analysis into your business, to learn more.
Many businesses utilize time series analyses to assist with their operational planning, such as conducting quarterly sales forecasts to fine-tune sales targets. In other cases, a time series analysis is an exploratory exercise that informs a specific business decision, such as forecasting customer growth to gauge upcoming customer service hiring needs. Both scenarios are examples of manual, pre-planned time series analysis, which usually requires advance planning and dedicated time from a data scientist. The data scientist spends a significant amount of effort to collect and cleanse data, tune the model and generate and present the results. Ultimately, this means decision makers rely on one-off work, because analysts must re-collect and re-calibrate the data each time the process is repeated. Lacking access to data when needed can hinder a business's ability to properly prepare for the future. Establishing steps to eliminate manual processes and go beyond pre-planned, one-off analyses is vital for a business to respond quickly to market changes and adapt.
Building a data pipeline
Establishing a data pipeline is the first step to getting the right data. A data pipeline must capture the variables of interest, while also arriving on time in a consistent format; timely, high-quality data is the foundation of any relevant and accurate analysis. A data pipeline takes data from one or more sources and transforms that content into a new dataset that can be leveraged for specific analyses. For example, sales figures can be uploaded, processed by a service and loaded into a dataset. Once the dataset is established, a business can examine trends in its data to better guide future sales decisions across all aspects of its business. Services such as Amazon Web Services (AWS) Glue and Azure Data Factory, as well as open source systems such as Apache Airflow or Luigi, are common choices for building a data pipeline. By implementing a data pipeline, a business builds its foundation and enables on-demand forecasts by guaranteeing that inputs are available.
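As a concrete illustration, a daily sales pipeline might be expressed as an Apache Airflow DAG along the following lines. This is a minimal sketch: the DAG name, task names and stubbed extract/transform/load functions are hypothetical placeholders for whatever source systems and transformations your business actually uses.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    """Pull raw sales figures from the source system (stub)."""
    ...

def transform():
    """Cleanse and reshape the raw data into the analysis dataset (stub)."""
    ...

def load():
    """Write the finished dataset to the analytics store (stub)."""
    ...

# Running on a fixed daily schedule is what guarantees the forecasting
# step downstream always has fresh, consistently formatted inputs.
with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task  # enforce run order
```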
Generating forecasts in the future and on demand
The nature of time series data poses unique challenges to implementing models in production. First, time series typically exhibit a trend – the average value increases or decreases over time – and that trend is likely to change in the long term. Likewise, the length or frequency of seasonal patterns may expand or contract as the underlying process changes. As a result, forecasts generated from a model fitted today may not be accurate when applied to future data.
Second, time series models are recursive: the forecast for any point in time depends on prior forecast errors (1). Because cycles and trends in the data can shift, the model should always be refitted when examining a new segment of time – even if it is only the addition of a single day to the dataset.
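In statsmodels, for example, a fitted ARIMA model can be extended and refitted as each new observation arrives. The sketch below is illustrative: the series, the (1, 0, 1) order and the appended value are stand-ins for real pipeline output.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical daily history; in practice this comes from the data pipeline.
idx = pd.date_range("2020-01-01", periods=100, freq="D")
history = pd.Series(np.random.default_rng(0).normal(100, 5, 100), index=idx)

fitted = ARIMA(history, order=(1, 0, 1)).fit()

# A single new day arrives: append it and refit so the parameters
# reflect the newest segment of time.
new_day = pd.Series([104.2], index=pd.date_range("2020-04-10", periods=1, freq="D"))
refitted = fitted.append(new_day, refit=True)
```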
If frequent forecasting – such as every day – is the goal, labor-intensive model evaluation and tuning will be costly. At the same time, tuning and refitting are a necessary part of the process due to the challenges described above. With modern processing power, manual modeling can be replaced by a grid search in which we test combinations of model parameters. Using this method, we comprehensively evaluate and select model parameters on the fly, choosing the model that performs best on a predefined metric (2).
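A minimal sketch of such a grid search, assuming daily data in a pandas Series and ARIMA models from statsmodels; the search bounds and the use of AIC as the selection metric (metrics are discussed in the next section) are illustrative choices:

```python
import itertools
import warnings

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def best_arima_order(series, max_p=3, max_d=2, max_q=3):
    """Fit every (p, d, q) combination and keep the lowest-AIC order."""
    best_order, best_aic = None, float("inf")
    for order in itertools.product(
        range(max_p + 1), range(max_d + 1), range(max_q + 1)
    ):
        try:
            with warnings.catch_warnings():
                warnings.simplefilter("ignore")  # silence convergence chatter
                aic = ARIMA(series, order=order).fit().aic
        except (ValueError, np.linalg.LinAlgError):
            continue  # skip combinations that fail to estimate
        if aic < best_aic:
            best_order, best_aic = order, aic
    return best_order, best_aic

# Hypothetical daily sales series standing in for real pipeline output.
idx = pd.date_range("2020-01-01", periods=365, freq="D")
sales = pd.Series(
    100 + np.cumsum(np.random.default_rng(1).normal(0.3, 4, 365)), index=idx
)
order, aic = best_arima_order(sales)  # e.g. the order to refit each day
```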
Evaluating and refining models
In order to automate the process while ensuring a high-quality model, it is imperative to choose a metric that optimizes model selection for the problem at hand. Among the myriad metrics available for building and training models, there is no single best choice. A few common information criteria are described below, each with its strengths and drawbacks (their defining formulas follow the list):
- The Akaike information criterion (AIC) penalizes a model with too many parameters. The smaller the AIC score, the better the model fits the data. This criterion tends to select a model that slightly overfits the data (3).
- The Bayesian information criterion (BIC) also penalizes a model with too many parameters, and its penalty grows with the sample size. This can lead to model selection that slightly underfits the data (4).
- The Hannan-Quinn criterion (HQC) is similar to BIC in the models it picks. However, it can be prone to overfitting the data when sample sizes are not large (5).
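For reference, the three criteria share a common structure. With $k$ estimated parameters, $n$ observations and maximized likelihood $\hat{L}$:

$$
\mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad
\mathrm{BIC} = k\ln(n) - 2\ln\hat{L}, \qquad
\mathrm{HQC} = 2k\ln(\ln n) - 2\ln\hat{L}.
$$

Each rewards goodness of fit through the likelihood term and differs only in how harshly it penalizes additional parameters, which drives the over- and underfitting tendencies noted above.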
No metric is perfect; selecting one to fit your business needs may be an iterative process. Carefully weighing the strengths and drawbacks of each metric when selecting a model helps ensure you are setting your data and business up for success.
Communicating uncertainty
Forecasting with a time series model allows for reasonable glimpses into the future. As with all forecasting, we must understand the uncertainty inherent in the forecasts so that we can communicate accurate predictions to end users while accounting for variability. When providing a range of predicted values, we can be confident that a larger range will include the true future value – or we can narrow our predicted range, but be less confident that it will hit the mark. Generally, when providing a forecast, we must balance the audience's ability to trust the forecast against a range of possible values narrow enough for them to act on. It is important to display the chosen ranges within visual representations of the forecasts to provide a common understanding of how the model describes future values.
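With statsmodels, these ranges come from the prediction intervals of a fitted model. The sketch below is illustrative – the synthetic series and (1, 1, 1) order are stand-ins – but the trade-off is real: the 95% interval is wide and trustworthy, while the 50% interval is narrow and easier to act on but more likely to miss.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical daily sales: a random walk with drift, for illustration only.
rng = np.random.default_rng(0)
idx = pd.date_range("2020-01-01", periods=365, freq="D")
sales = pd.Series(100 + np.cumsum(rng.normal(0.5, 5, 365)), index=idx)

model = ARIMA(sales, order=(1, 1, 1)).fit()
forecast = model.get_forecast(steps=30)

point = forecast.predicted_mean          # single best-guess future values
wide = forecast.conf_int(alpha=0.05)     # 95% interval: wide but confident
narrow = forecast.conf_int(alpha=0.50)   # 50% interval: narrow but riskier
```

Plotting the point forecast with one or both of these bands gives the audience the common visual understanding described above.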
Scaling up and out
Some degree of error is inevitable when using a model to generate forecasts into the future. How can we minimize this error to optimize the accuracy and precision of predictions? One of the easiest ways is to step back and re-address the scope of the problem. Usually, we are interested in general questions such as "In which months did we have the most sales?" These general questions can be made more specific by instead asking, "In which months did we sell the most of item A within the Midwest region?" Differentiating between geographies, demographic groups and item categories to model a single variable against time will provide stronger forecasting results by reducing the number of variables affecting our prediction. Variance in forecasts can be reduced by disaggregating the scope and performing multiple forecasts over subsets of the input data.
One drawback of increasing the granularity of the data is that instead of building a few generalized time series forecasts, we are deploying hundreds of models. Performing that many forecasts after categorizing the fields can seem daunting. However, as long as the categories are independent of each other, parallelization techniques can streamline the process of building, retraining and forecasting the models. This reduces manual intervention and long build times while keeping the model-building procedure consistent.
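A minimal sketch of that pattern, assuming a DataFrame of daily sales broken out by region and item (the column names, groups and model order are hypothetical) and using joblib to fit the independent per-group models across CPU cores:

```python
import numpy as np
import pandas as pd
from joblib import Parallel, delayed
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical disaggregated data: one row per day, region and item.
rng = np.random.default_rng(0)
idx = pd.date_range("2020-01-01", periods=180, freq="D")
frames = [
    pd.DataFrame({"date": idx, "region": r, "item": i,
                  "sales": 50 + np.cumsum(rng.normal(0, 3, len(idx)))})
    for r in ["Midwest", "Northeast"] for i in ["A", "B"]
]
df = pd.concat(frames, ignore_index=True)

def forecast_group(key, series, steps=30):
    """Fit and forecast one independent (region, item) series."""
    fitted = ARIMA(series, order=(1, 1, 1)).fit()
    return key, fitted.get_forecast(steps=steps).predicted_mean

# Each group is independent, so the fits can run on separate CPU cores.
groups = {key: grp.set_index("date")["sales"].asfreq("D")
          for key, grp in df.groupby(["region", "item"])}
results = dict(Parallel(n_jobs=-1)(
    delayed(forecast_group)(key, s) for key, s in groups.items()))
```

In production, the same structure scales out: each group's grid search, refit and forecast can run as its own task on whatever scheduler already drives the data pipeline.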
Our professionals are knowledgeable and prepared to assist your business. While keeping your future business goals at the forefront of every decision, we can help you properly leverage your data and engage in forecasting through time series analysis.
References
1. Hyndman, R.J. and Athanasopoulos, G., Forecasting: Principles and Practice
2. Grid Search-Based Hyperparameter Tuning and Classification of Microarray Cancer Data
3. Cryer, J.D. and Chan, K.-S., Time Series Analysis with Applications in R
4. Cryer, J.D. and Chan, K.-S., Time Series Analysis with Applications in R
5. Comparison of Criteria for Estimating the Order of Autoregressive Process: A Monte Carlo Approach