It happens every time I interview a content expert to elicit forecast assumptions. That look of instantaneous, if fleeting, PANIC. It is quickly replaced by an internal dialogue you can almost watch, arguing back and forth between "it's higher than that" and "no, it's not going to be that high," and finally, "I don't know... I'd just be making it up!"
Forecast models need assumptions. The investment in obtaining those assumptions will vary according to the nature of the decisions being driven by the forecast. For example, early investigations into the value of an opportunity often seek only to determine whether "it is bigger than a bread box." These types of projects often rely on individuals knowledgeable about the market being modeled to make judgments about the future. As the decision process proceeds to contemplating significant investments, more robust methods may be warranted (e.g., primary research). In some projects, there may be historical market data that can inform estimates about the future (e.g., category adoption). Regardless of the stage, scale, or scope of the decision, the forecast will always have some assumptions that require expert judgments about input values.
So, if YOU are called upon for your expertise, how can you avoid feeling like you are making up values that could have significant consequences... especially if you are WRONG?! First, recognize that even one-day-ahead predictions have a high likelihood of being wrong. Making predictions for events or values years from now may feel like trying to sculpt fog. Follow these tips to feel comfortable with your forecast input estimates and stop worrying about being wrong:
Determine the plausible bounds of uncertainty: There are logical bounds to what a value can take. At the extreme, a proportion is bounded by 0 and 1. In most cases, we can reduce this range to something more meaningful. Consider, for example, category adoption for a new technology. Of course, lots of technologies fail to launch; but in most forecasting projects for novel technologies we are trying to determine the value of success. In these situations, your familiarity with the market will be a critical factor. What is the adoption level that would constitute a minimum acceptable level of commercial success? What is within the realm of plausibility if the technology becomes an industry standard?
Think about where the most likely value is within that range: Now that we have thought through the range of possibility, we need to position the most-likely value somewhere between those bounds. It is tempting to place it centrally, but in most instances this will be an overestimate. Let's return to our example of category adoption. A lot of stars have to align to reach the upper boundary of possibility; if just a few of those things fail to happen, you begin moving toward the lower bound. In instances like this, the most likely value usually sits closer to the lower bound than to the upper, even if only slightly so.
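The two steps above amount to a classic three-point estimate, which a forecaster can turn into a distribution. A minimal sketch using a triangular distribution, with invented adoption numbers (not from any real forecast) where the most-likely value sits closer to the lower bound:

```python
import random

# Hypothetical three-point estimate for peak category adoption.
# The values are illustrative assumptions, not real market data.
low, mode, high = 0.05, 0.12, 0.40  # most-likely value skewed toward the lower bound

# Monte Carlo draws from a triangular distribution over the expert's range.
samples = [random.triangular(low, high, mode) for _ in range(100_000)]

mean_adoption = sum(samples) / len(samples)
# Note: the expected value lands above the most-likely value (mode),
# because the long tail toward the upper bound pulls the mean up.
```

One useful byproduct of this framing: the expert never has to commit to a single "right" number, yet the forecaster still gets a full distribution to work with.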
Think about, and describe, the contingencies associated with your most-likely estimate: In arriving at your most-likely case, you will have thought through some "if this, then that" scenarios. This is valuable information for the forecaster. Consider the things that might happen that would increase or decrease your estimate. These may become events represented in the model to drive what-if and sensitivity analyses.
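Those "if this, then that" contingencies can be represented directly as events in a what-if simulation. A minimal sketch, assuming invented event probabilities and impact multipliers for illustration only:

```python
import random

# Hypothetical most-likely adoption estimate from the expert.
base_estimate = 0.12

# Contingencies the expert named, each as
# (description, probability of occurring, multiplier on the estimate).
# Probabilities and multipliers here are illustrative assumptions.
contingencies = [
    ("competitor launches first", 0.30, 0.6),
    ("favorable reimbursement decision", 0.25, 1.5),
]

def simulate_once(rng: random.Random) -> float:
    """One what-if draw: apply each contingency that 'happens'."""
    value = base_estimate
    for _, prob, multiplier in contingencies:
        if rng.random() < prob:
            value *= multiplier
    return value

rng = random.Random(42)
draws = [simulate_once(rng) for _ in range(50_000)]
expected_adoption = sum(draws) / len(draws)
```

Keeping the contingencies explicit like this also makes sensitivity analysis straightforward: rerun the simulation with one event forced on or off and compare the results.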
These simple steps will free you from the discomfort of feeling you are being asked to come up with a single "right" estimate about the future. And they provide the forecaster with powerful information to drive the forecast and the subsequent analyses that depend on it.