At a pharmaceutical portfolio management conference I attended, I took the opportunity to introduce myself to other attendees, most of whom were from major pharmaceutical companies and were responsible for product portfolio planning. Regarding strategic (5+ year) models, the single most common comment I heard during session breaks was "All of this portfolio management stuff is great, but how useful are these tools if nobody trusts the forecasts?" We at MedTech Valuation, Inc. are asked all the time how we validate our models. My answer depends on the type of model.
If we are fitting a model to a large set of historical data to project a short-term forecast (such as weekly or monthly sales), we hold out a portion of the data and test the model's predictive validity against it. It is easy to fit a model very well to the data used to create it; what matters is its performance in predicting FUTURE periods, using data that were not part of the fitting. We typically fit models using several methodologies and precision levels and select those that fit the out-of-sample data best.
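The hold-out procedure above can be sketched in a few lines. This is a minimal illustration, not MedTech Valuation's actual methodology: the two candidate "models" (a least-squares trend and a naive mean) and the sales figures are hypothetical stand-ins for whatever methodologies are being compared.

```python
# Hold-out validation sketch: fit candidates on early periods,
# score them on the withheld final periods, keep the best performer.

def fit_linear_trend(history):
    """Least-squares line through (period, sales); returns a forecast function."""
    n = len(history)
    t_mean = (n - 1) / 2
    y_mean = sum(history) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in enumerate(history))
             / sum((t - t_mean) ** 2 for t in range(n)))
    intercept = y_mean - slope * t_mean
    return lambda t: intercept + slope * t

def fit_naive_mean(history):
    """Forecast every future period at the historical mean."""
    mean = sum(history) / len(history)
    return lambda t: mean

def mae(forecast, actuals, start):
    """Mean absolute error of the forecast over the hold-out periods."""
    return sum(abs(forecast(start + i) - a) for i, a in enumerate(actuals)) / len(actuals)

sales = [100, 104, 109, 113, 118, 121, 127, 130, 136, 139, 145, 149]
train, holdout = sales[:9], sales[9:]          # withhold the last 3 months

candidates = {"linear_trend": fit_linear_trend, "naive_mean": fit_naive_mean}
scores = {name: mae(fit(train), holdout, start=len(train))
          for name, fit in candidates.items()}
best = min(scores, key=scores.get)
print(best, scores[best])  # the candidate with the lowest out-of-sample error
```

The key point is that the winner is chosen on the withheld periods, not on goodness of fit to the training data.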
For strategic models, it is a different ball game entirely, especially at the pre-launch or early-adoption stage of the product being forecast. In these cases, there are no data to hold out in order to validate the model. So how do you ensure that your model is valid and trustworthy as a forecast? The answer: it's all in the process. The right forecasting process, combined with solid mathematical representations of market dynamics, can provide reliable, defensible representations of the future. When forecasting at MedTech Valuation, we follow these simple guidelines:
First, define the purpose of the forecast. Is it to value a licensing and acquisition opportunity? Is it for supply chain planning? Is it for setting expectations for strategic planning? While the forecast process and model will likely be the same regardless of purpose, the outputs and user interface may vary significantly between them. This step will help you design an interface that allows users to find the answers and run scenarios specific to the need at hand.
Determine the level of detail at which the outputs need to be represented. The level of detail should be no finer than the reliability of the sources available to supply the assumptions that drive it.
For each component of the model (population, category adoption, market share, pricing, etc.), source the assumptions carefully. While publicly available data exist for some of these components, many assumptions must be made by judgment. For each input, select authorized sources carefully: who is the BEST person or persons to make a specific assumption? Assumptions are, well, assumptions. Document the source, date, and rationale for each of them. This will allow you to evaluate later which assumptions drove how much error relative to actual performance. For example, did you assume 20% more investment in marketing than actually happened? How much did that contribute to the difference? Did you assume a launch date six months earlier than the actual launch? This information is invaluable in explaining why the forecast was higher or lower than actuals (it will never be 100% accurate) and, most vitally, in informing the process for maintaining the forecast in the future.
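An assumption log of this kind can be as simple as a small structured record per input. The sketch below is hypothetical: the field names, the two example assumptions, and the actual values are all illustrative, but it shows how documenting source, date, and rationale makes the later assumption-versus-actual comparison mechanical.

```python
# Hypothetical assumption log with source, date, and rationale per input,
# plus a simple assumed-vs-actual comparison for post-launch review.
from dataclasses import dataclass
from datetime import date

@dataclass
class Assumption:
    name: str
    value: float
    source: str          # who was authorized to make this assumption
    as_of: date          # when it was set
    rationale: str       # why this value was chosen

log = [
    Assumption("marketing_spend_musd", 24.0, "Brand team lead", date(2023, 1, 15),
               "Aligned to the approved launch budget"),
    Assumption("launch_month", 6, "Regulatory affairs", date(2023, 1, 15),
               "Assumes a standard review timeline"),
]

# After actuals arrive, attribute forecast error assumption by assumption.
actuals = {"marketing_spend_musd": 20.0, "launch_month": 12}
gaps = {a.name: a.value - actuals[a.name] for a in log}
for a in log:
    print(f"{a.name}: assumed {a.value}, actual {actuals[a.name]}, "
          f"gap {gaps[a.name]:+} (source: {a.source}, {a.as_of})")
```

Each gap can then be traced back to a named source and rationale rather than to an anonymous cell in a spreadsheet.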
Incorporate uncertainty. For key assumptions, establish ranges that represent the level of uncertainty surrounding them. These ranges provide the means for rigorous scenario, risk, and sensitivity analyses. Even better, they can be used to drive Monte Carlo-style simulations for a full forecast risk analysis.
Quality assurance. Strategic forecast models often comprise several thousand rows of code and tens of thousands of calculations. While math is math, each calculation is an opportunity for error. Industry estimates of calculation error rates in spreadsheet models range from 10% to 18%, even among seasoned modelers. At MedTech Valuation, we minimize the opportunity for error by maintaining libraries of tested, proven mathematical functions and model components, which significantly reduces the chance of a mistake. Further, we have our models reviewed by an independent forecaster on our team to catch any calculation issues introduced during model construction.
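To make the "library of tested components" idea concrete, here is a hypothetical example of one such reusable component: a logistic adoption curve that ships with its own sanity checks. The function, parameters, and thresholds are illustrative, not MedTech Valuation's actual library code.

```python
# A reusable, pre-tested model component: logistic category-adoption curve.
# The checks at the bottom run before any model uses the function, so every
# forecast reuses one proven calculation instead of re-deriving it ad hoc.
import math

def logistic_adoption(t, ceiling, midpoint, steepness):
    """Share of the addressable market adopted by period t."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

# Sanity checks that ship with the component.
assert logistic_adoption(0, 1.0, 24, 0.3) < 0.001              # near zero at launch
assert abs(logistic_adoption(24, 1.0, 24, 0.3) - 0.5) < 1e-9   # half at the midpoint
assert logistic_adoption(200, 1.0, 24, 0.3) > 0.999            # approaches the ceiling
assert all(logistic_adoption(t + 1, 1.0, 24, 0.3) >= logistic_adoption(t, 1.0, 24, 0.3)
           for t in range(100))                                # monotonically rising
```

Encoding the expected shape of a curve as assertions means a broken edit fails loudly instead of silently distorting the forecast.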
Validation of forecast models is critically important, especially when the decisions based on them carry significant consequences. The validation process is very different for short-term tactical forecasts, which are informed by large amounts of historical data, than for long-term strategic forecasts, where there are no data to validate against. In the latter case, it is the PROCESS that must be carefully monitored and validated to ensure the best representation of future expectations, given the information available today.