This article assesses how predictions have performed in five fields. It argues that poor projections have propagated throughout our society and proliferated throughout our industry. It argues that our fixation with forecasts is fundamentally flawed.
So instead of focussing on the future, let’s take a moment to look at the predictions of the past. Let’s see how our projections panned out.
I'll have to read more of it later, but the short version is that I agree massively. Projections and predictions are the crutch of modern society: no one wants to make a judgement call, so instead they dress up numbers as facts with flawed methodology.
In these particular cases there's more going on than just someone trying to show that next year's revenue will be higher than this year's, but it's a similar issue. A MAJOR problem I've seen (especially in the political sphere) is adherence to the old doctrine at any cost.
For a time we had pretty accurate election predictions. Using then-modern polling methods and good information gathering, we were able to get very accurate results for any election that wasn't hyper-close.
And then things changed: the rise of the internet, the fact that a huge swath of the population simply won't answer a pollster's phone call, an even stronger push toward news as entertainment rather than accuracy, and many other confounding factors. But you'll still see people make bold claims based on the older methodology in a time of turmoil, when even methods that worked 8 years ago no longer seem to track.
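To make the phone-call point concrete, here's a toy simulation (all rates invented) of how nonresponse skews a poll; the bias shows up as soon as one side answers the phone less often than the other:

```python
import random

random.seed(42)

# Invented toy electorate: a 50/50 race, but supporters of candidate B
# answer unknown phone calls half as often as supporters of candidate A.
answer_rates = {"A": 0.10, "B": 0.05}

responses = []
for _ in range(100_000):
    vote = random.choice(["A", "B"])          # true preference: 50/50
    if random.random() < answer_rates[vote]:  # did they pick up?
        responses.append(vote)

share_a = responses.count("A") / len(responses)
print(f"True support for A: 50.0% | Poll says: {share_a:.1%}")
# The poll reads roughly 67% for A in a dead-even race -- pure
# nonresponse bias, with no one ever lying to the pollster.
```

Low response rates alone aren't fatal; it's the *differential* response that weighting schemes struggle to correct, because you'd need to know something about the people who never pick up.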
So in relation to the topics, I'm not surprised to see recessions being a big issue. It's a little odd, because you could predict a recession if you wanted to cause one, but the goal is always to avoid it. Further, you can often predict that something WILL happen, but not when. It's sort of like fault lines and earthquakes: it's just a matter of time, but nailing the timing involves too many variables to really pin down. The '08 crash was not a surprise, as many people had been predicting it for years, but that's the issue. It had been years. Too many factors had to line up before the consequences of the bad policies became impossible to ignore.
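The "will happen, but not when" problem has a simple probabilistic shape. A quick sketch, with a made-up 10% annual probability standing in for "the crash everyone sees coming":

```python
# An event with a fixed 10% chance per year (number invented) is
# near-certain over decades, yet any single year is a bad bet.
p = 0.10

for years in (1, 5, 10, 20, 30):
    within = 1 - (1 - p) ** years      # P(it happens by year N)
    print(f"P(within {years:2d} years) = {within:.0%}")

# Prints 10%, 41%, 65%, 88%, 96%. Expected wait is 1/p = 10 years.
# "It will happen" is the easy call; "it happens in 2008" is the hard one.
```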
Add on major world-changing events, like the reality check to globalization from the Russia/Ukraine war, the China/Taiwan tensions, and COVID, and most of the old methods just aren't nearly as reliable as they were.
We still do. Polls nailed the midterms and the special elections of the last 4 years.

Depends on the scope. Local elections are less accurate, and the entire lead-up to the presidential is the least accurate it's been in a while.
Hm. It's been the reverse, though. Local elections have been the most accurate; whether Trump is involved, directly or indirectly, seems to be the marker for when traditional polling methods start to have issues. And the lead-up to the presidential, assuming you're talking about the elections in the last year or so, has been absolutely spot-on by pollsters.
If you're talking about the actual presidential election, I mean, it hasn't happened yet, so it's a bit early to talk about accuracy.
I wish I had time to dig through the links in the article, because the way it's written sounds a bit unfair to the prognosticators. For example, the IMF predicted just 4 of 469 economic downturns across 196 countries over 30 years. How many of those downturns were they actually trying to predict? I get what they do, but I can't imagine they try to predict every single economic downturn. So I just wish I could check every link to see the details.
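For a rough sense of scale, the article's own numbers imply a very low base rate. Assuming each downturn counts as a single country-year event (my assumption, not the article's), the back-of-the-envelope arithmetic looks like this:

```python
downturns = 469    # downturns cited in the article
countries = 196
years = 30
predicted = 4      # downturns the IMF reportedly called in advance

country_years = countries * years        # 5,880 forecastable slots
base_rate = downturns / country_years    # ~8% of country-years
hit_rate = predicted / downturns         # ~0.9% of actual downturns

print(f"Base rate of downturns: {base_rate:.1%} of country-years")
print(f"IMF hit rate: {hit_rate:.1%}")
# A forecaster who always says "no downturn" is right ~92% of the time,
# which is exactly why the denominator (what they actually tried to
# call) matters before declaring the record damning.
```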
I've always ignored those kinds of predictions and considered them worthless, but with so many sources and such vague descriptions, I can't take this as anything more than confirming my biases.
For those interested in this and looking for more, I highly recommend the books by Nassim Taleb, if you haven't already read them. His œuvre is an extended search for an answer to the question "if we're bad at predicting things, what do we do instead?"
Yes. The concept of the black swan, and the distinction between events that follow a normal probability distribution and events whose probability is much less predictable, is something I first learned from Nassim Taleb.
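One way to see that distinction is to sample a thin-tailed and a fat-tailed distribution side by side and count the extreme outcomes. The particular distributions and cutoffs below are just illustrative choices:

```python
import random

random.seed(0)
N = 200_000

# Thin-tailed: standard normal. Fat-tailed: Pareto with alpha = 1.5.
# (Both choices are purely illustrative -- any fat-tailed law would do.)
normal_draws = [abs(random.gauss(0, 1)) for _ in range(N)]
pareto_draws = [random.paretovariate(1.5) for _ in range(N)]

def tail_share(draws, threshold):
    """Fraction of draws exceeding a threshold."""
    return sum(x > threshold for x in draws) / len(draws)

for t in (5, 50):
    print(f"share of draws > {t:2d}:  "
          f"normal {tail_share(normal_draws, t):.6f}   "
          f"pareto {tail_share(pareto_draws, t):.6f}")

# Typical draws from both are of size ~1, but the normal essentially
# never exceeds 5, while the Pareto keeps producing outliers at every
# scale -- the regime where past samples badly understate future extremes.
```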
Three other books that would be helpful to people who want to learn about predicting risk are Being Wrong: Adventures in the Margin of Error by Kathryn Schulz, How Big Things Get Done by Professor Bent Flyvbjerg, and Algorithms to Live By by Brian Christian and Tom Griffiths.
I learned about the last two books from people on Tildes. They are all excellent, interesting and informative.
I really enjoyed reading Antifragile. I read a few of his other books, but that one remains my favorite. However, I always caution people when I recommend them. His writing style leans on vignettes and anecdotes to support his points, and he is rather arrogant, both in his prose and towards academics with differing viewpoints. I still recommend Antifragile and find many of his points persuasive. However, I also think that academics like Philip Tetlock, who focuses on how to predict things better, are needed to bring a balanced view. We should try to predict what we can while building systems that can sustain the shocks we can't predict.
It's funny - I'd looked up the classic Yogi Berra quote, "It's tough to make predictions, especially about the future", early this morning in another context. Must be my precognition acting up again. 😉
The underlying problem with forecasting recessions, GDP, interest rates, exchange rates, and other econometric indicators is that they're all summary measures. Huge numbers of variables with differing magnitudes of effect feed into them, and expertise in economics alone won't provide enough insight. A war or pandemic (Nassim Taleb's Black Swan event) will upend many of the stability assumptions underlying the vanilla prediction.
I've been toying a bit with prediction markets and forecasting sites, like Manifold (warning: bro country, mostly a playground for AI boosters) and Good Judgment Open, which is based on the work of Philip Tetlock, who's mentioned in the article. Good Judgment, the company behind it, is a business spin-off of Tetlock's research. It commercializes forecasting and training classes in mildly exploitative ways, but I was thinking of taking their Basics class for fun.
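For anyone curious what "doing well" means there: Good Judgment scores forecasters with Brier scores, i.e. squared error between your stated probability and what happened. A minimal version for yes/no questions (the example forecasts are invented):

```python
def brier(prob_yes: float, outcome: int) -> float:
    """Squared error between forecast probability and the 0/1 outcome.
    0.0 is perfect; always saying 50% earns 0.25 on every question.
    (GJ Open's variant sums over all answer options, which doubles
    this for binary questions but ranks forecasters identically.)"""
    return (prob_yes - outcome) ** 2

# Invented track record: (forecast probability, what actually happened)
history = [(0.80, 1),   # said 80% yes, it happened
           (0.30, 0),   # said 30% yes, it didn't
           (0.90, 0)]   # said 90% yes, it didn't -- a confident miss

scores = [brier(p, o) for p, o in history]
print("per question:", [round(s, 2) for s in scores])       # [0.04, 0.09, 0.81]
print(f"mean Brier score: {sum(scores) / len(scores):.3f}")  # 0.313
# Squaring punishes confident misses hard, which is why hedging toward
# 50% when genuinely unsure beats bold guessing over many questions.
```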
Superforecasting: The Art and Science of Prediction is an interesting, easy-to-digest read. Tetlock and Nassim Taleb wrote a useful paper [PDF warning] together on risk prediction with binary (yes/no) versus variable payoffs. However, there's ongoing beef between them about the statistical methodology and risk-prediction power of the binary question format used most frequently in Good Judgment Open. Tetlock's purposes in Good Judgment forecasting differ substantially from Taleb's interest in risk quantification. This is all fairly technical and abstracted from the OP, but it does give an inkling of why human brains are bad at forecasting when left to their own devices, biased by the illusion that expert knowledge in one domain is inherently transferable.
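The crux of that beef is easier to see in a toy example than in the abstract. Here's a sketch of the gap between binary accuracy and variable payoff, with all numbers invented and no claim to represent either author's actual model:

```python
import random

random.seed(1)

# A forecaster bets each month that "no large market drop" occurs and
# is right 99% of the time (the 1% rate and loss sizes are invented).
# Binary scoring rewards the 99%; a variable payoff only cares about
# the size of the 1%.
trials = 10_000
binary_hits = 0
pnl = 0.0   # running profit from a $1-per-month bet on "no drop"

for _ in range(trials):
    crash = random.random() < 0.01
    if not crash:
        binary_hits += 1
        pnl += 1.0                             # collect $1 in quiet months
    else:
        pnl -= random.paretovariate(1.1) * 10  # eat a fat-tailed loss

print(f"binary accuracy: {binary_hits / trials:.1%}")   # ~99%
print(f"total P&L:       {pnl:,.0f}")                   # typically deeply negative
# You can be a superb yes/no forecaster and still blow up, which is
# roughly Taleb's objection to judging forecasters on binary questions.
```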