Working with (and within) dynamic, complex systems, we often need to forecast what could happen in the future so we know what decisions to make today. As if that weren’t hard enough, complex systems often come with far more data, and the high noise-to-signal ratio amplifies the difficulty of sorting through data points to make predictions. Nate Silver, in his book “The Signal and the Noise: Why So Many Predictions Fail – but Some Don’t”, offers some explanations for why making accurate predictions is so much harder than we realize, along with advice about how to evaluate predictions and increase our own odds of success.
To evaluate and create better predictions, we need to…
1. Recognize our blind spots
One of the many challenges of prediction is dealing with the natural conclusions our minds jump to. Humans are very susceptible to blind spots. To handle the overload of information, we engage with it selectively or overlook what we can’t categorize. We might avoid modeling things that are difficult to model, or assume that what we don’t understand (or didn’t anticipate) has a lower probability of occurring. On the other hand, we can also see patterns where there are none. The first step toward better predictions is recognizing that how we perceive the world is not necessarily a clear view of the signals: we could be ignoring signals and paying too much attention to noise. Silver defines a signal as “an indication of the underlying truth behind a statistical or predictive problem,” and noise as “random patterns that might be easily mistaken for signals.”
2. Go for a balanced approach
We are in the midst of a cultural love affair with Big Data and technology. And while computers can help counteract our blind spots, one of the core themes of the book is that more data and processing power are not enough to create better predictions. We still need theory and subjective information to help “translate information into useful knowledge.” If we deny that need, we ignore the role that subjective decisions and behavior play in building models and interpreting the results.
People are also prone to overfitting the data when they don’t understand the theory behind it. In other words, they mistake the noise for a signal. Overfit models may look better on paper, but they hold up poorly against new data, and the influx of new reports about the noise makes it harder to understand the real signal behind it.
“Improved technology did not cover for the lack of theoretical understanding about the economy; it only gave economists faster and more elaborate ways to mistake noise for a signal.”
– Nate Silver, author
Hybrid approaches to prediction are much more effective. They can often help you model the potential phenomenon more accurately while balancing the weaknesses of solely quantitative or solely qualitative approaches.
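To make the overfitting point concrete, here is a minimal sketch in Python. The data-generating process, noise level, and polynomial degrees are my own illustrative assumptions, not an example from the book.

```python
# Minimal sketch of overfitting: a flexible model "explains" the noise in the
# sample it was fit on, then fails on new data drawn from the same process.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    x = np.sort(rng.uniform(0, 1, n))
    signal = np.sin(2 * np.pi * x)      # the underlying "truth"
    noise = rng.normal(0, 0.3, n)       # random variation around it
    return x, signal + noise

x_train, y_train = make_data(20)
x_test, y_test = make_data(200)

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```

The flexible fit typically wins on the points it was trained on and loses on the held-out ones, which is exactly the “looks better on paper” failure described above.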
3. Be on the lookout for hedgehogs and foxes
Silver mentions a great analogy when he talks about the worldviews of people making predictions. Think about these archetypes in your next meeting (or during the next political debate). It helps explain why people have drastically different pictures of the future.
The terms “Hedgehog” and “Fox” most likely come from the Greek poet Archilochus,
“The fox knows many things, but the hedgehog knows one big thing.”
– Archilochus, poet
“Hedgehogs” tend to think about the future through the filter of a few big ideas, models, or theories that describe the world. “Foxes” believe in many little ideas, and in approaching a problem in multiple ways. Hedgehogs tend to be specialized, stalwart, stubborn, order-seeking, confident, and ideological. Foxes tend to be multidisciplinary, adaptable, self-critical, tolerant of complexity, cautious, and empirical. Philip Tetlock, a professor of psychology and political science, ran a study and found that foxes were considerably better at forecasting than hedgehogs. They were more likely to recognize the amount of noise and to avoid false signals. However, hedgehogs are often vocal forecasters, using new data to support their existing narratives.
4. Think small and big
Zooming out and in can help counteract some of our natural blind spots. Predictions often fail when the forecaster didn’t consider the impact of context, or the dynamic nature of the system as a collection of interacting sub-systems. Feedback loops can completely alter the outcomes, whether the predictions are self-fulfilling or self-canceling. That means that making the prediction itself can change how people behave. A good example is infectious disease: once a prediction of an outbreak is made and publicized, people tend to self-report symptoms more, but they also take precautions that minimize the impact.
5. Pay attention to the accuracy vs. honesty of predictions
Silver references a 1993 essay by Allan Murphy, which proposed criteria for judging weather forecasts: quality (aka “accuracy”), consistency (aka “honesty”), and the “economic value” of the forecast. Accuracy asks whether the actual outcome matched the forecast. Honesty asks whether the forecast was the best one the forecaster could have made at the time; expressing uncertainty honestly is an important part of this. The “economic value” of the forecast refers to how it can be used to make future decisions. Unfortunately, not all forecasts are created equal in these three respects, and sometimes the perception of accuracy ends up trumping true accuracy.
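One standard way to put a number on the accuracy of probabilistic forecasts is the Brier score, which Murphy’s own work on forecast verification helped popularize. The sketch below uses made-up forecasts and outcomes, not data from the book.

```python
# Minimal sketch of scoring probabilistic forecasts with the Brier score
# (lower is better).  Forecasts and outcomes are invented for illustration.
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and outcomes (0 or 1)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Forecaster A hedges honestly; forecaster B always shouts 90% confidence.
outcomes     = [1, 0, 1, 1, 0, 0, 1, 0]          # did it rain? 1 = yes
forecaster_a = [0.7, 0.2, 0.8, 0.6, 0.3, 0.1, 0.9, 0.4]
forecaster_b = [0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9]

print("A:", brier_score(forecaster_a, outcomes))  # ~0.08
print("B:", brier_score(forecaster_b, outcomes))  # ~0.41
```

Scoring rules like this reward honest probabilities: a forecaster who always sounds confident may be perceived as accurate, but gets punished every time the confident call is wrong.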
6. Consider the consequences of the forecast
Other incentives can impact which predictions make the final cut. Since forecasts have a probability of coming true, forecasters are often making a “bet” on the outcome. In that case, it’s not necessarily the accuracy of the forecast that sways your decision to share it, it’s the relative weight of the pros and cons that can occur if your prediction ends up being right (or, alternatively, is proven wrong). One interesting finding was that well-known companies are more likely to make conservative predictions, while up-and-comers tend to make more dramatic ones. Basically, the less reputation you have, the less you have to lose by making a risky prediction.
For another example, consider the incentives for Wall Street traders not to go “against the herd” by predicting a falling market. Here you need to consider not only the forecast itself but what happens to the forecaster if the bet turns out right or wrong. If the trader buys shares and the market rises, they make money. If they predicted a drop and the market falls, they might get some recognition. On the other hand, if they buy and the market crashes, they lose money but are still okay because they “stuck with the herd.” But if they sell and the market goes up, they will have underperformed their peers. The consequences of each decision are not equal, incentivizing people to make bets that don’t perfectly align with the probability of the event coming true.
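A small expected-payoff sketch makes the asymmetry explicit. The probabilities and payoff values below are invented for illustration; they are not numbers from the book.

```python
# Minimal sketch of the asymmetric incentives behind "sticking with the herd."
p_crash = 0.3  # suppose the trader honestly believes a crash is 30% likely

# Rough payoff (money plus career/reputation) for each action-outcome pair.
payoffs = {
    ("buy",  "rises"):   +1.0,   # makes money, looks fine
    ("buy",  "crashes"): -0.5,   # loses money, but so did everyone else
    ("sell", "crashes"): +0.5,   # right call, gets some recognition
    ("sell", "rises"):   -2.0,   # underperforms peers alone: worst outcome
}

for action in ("buy", "sell"):
    expected = ((1 - p_crash) * payoffs[(action, "rises")]
                + p_crash * payoffs[(action, "crashes")])
    print(f"{action}: expected payoff {expected:+.2f}")
```

Even when the trader privately thinks a crash is fairly likely, going with the herd has the higher expected payoff, so the bets people actually place drift away from their honest probabilities.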
7. Think in terms of probabilities
One of the core themes of the book is the idea of taking a Bayesian approach to predictions. Bayes’ theorem tells us the probability that a theory or hypothesis is true given that some other condition is true. Silver also mentions that some of the best predictors see the future as a series of probabilities,
“Successful gamblers – and successful forecasters of any kind – do not think of the future in terms of no-lose bets, unimpeachable theories, and infinitely precise measurements. These are the illusions of the sucker, the sirens of his overconfidence. Successful gamblers, instead, think of the future as speckles of probability, flickering upward and downward like the stock market ticker to every new jolt of information.”
– Nate Silver
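For readers who want to see the machinery, here is a minimal sketch of a Bayesian update. The hypothesis, prior, and likelihoods are made-up numbers for illustration, not figures from the book.

```python
# Minimal sketch of updating a belief with Bayes' theorem.
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(hypothesis | evidence) via Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Hypothesis: "the market will fall this quarter."  Start at 10% belief.
belief = 0.10
# Each pair is (probability of seeing this evidence if the hypothesis is true,
# probability of seeing it if the hypothesis is false).
for p_true, p_false in [(0.6, 0.3), (0.7, 0.4), (0.2, 0.5)]:
    belief = bayes_update(belief, p_true, p_false)
    print(f"updated belief: {belief:.2f}")
```

Each new jolt of information nudges the probability up or down, which is the “speckles of probability” picture Silver describes.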
If this topic interested you, I definitely encourage you to pick up the book for lots of in-depth case studies. What other insights did you pull from this book? How will you apply them in your work or daily life?