Interesting as this read may be, it seems to be a long-winded stream of consciousness discovering that probability is the abstractive process of averaging averages. There are also some failures in logic, e.g.:
Even if there is a statistical model for what should happen, this is always based on subjective assumptions — in the case of a coin flip, that there are two equally likely outcomes. To demonstrate this to audiences, I sometimes use a two-headed coin, showing that even their initial opinion of “50–50” was based on trusting me. This can be rash.
This is a prime example of the dangers of a limited sample size; the author should know better, since probability only works reliably at scale. If a large sample of coin-flippers had been tested rather than just one, there might be something worthwhile here, but as it is, n=1, where n is proven untrustworthy.
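The scale argument is easy to illustrate with a quick simulation. This is my own sketch, not from the article, and it assumes an ideally fair coin:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def heads_fraction(flips: int) -> float:
    """Fraction of heads in `flips` simulated fair-coin tosses."""
    return sum(random.random() < 0.5 for _ in range(flips)) / flips

# Small samples wander; large samples settle near 0.5.
for n in (10, 1_000, 100_000):
    print(n, heads_fraction(n))
```

With a two-headed coin the same loop settles at 1.0 instead: the model assumption, not the arithmetic, is doing the work.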
The example you cite seems to be written in a way that the average person can understand the idea.
What you are saying isn't wrong, but it is addressed by the author further in the article when they discuss frequentist probability.
My point was that the failure was not of the observers in trusting that the coin was standard, but of the person using disingenuous tactics to subvert people's reasonable expectations. Natural events, or any other observable subject matter studied for probability, do not have to contend with subversive intervention by the powers that be wishing to prove a point.
Why is it a reasonable expectation for coins, though? Probability works for coins because people made it work. That is, governments made it work by manufacturing huge numbers of standard coins, so we all have familiarity with them. Most things found in nature aren't so neat.
Yes, words are always fuzzy, imprecise, and dependent on assumptions and context. Misleading people with imprecise words is not an interesting insight into probabilities.
Of course probabilities depend on knowledge and assumptions. Trying to quantify uncertainties is the whole point of probabilities, and what is uncertain depends on the available knowledge. When different things are uncertain the estimated probabilities are likely different. It's not surprising since the probabilities measure those uncertainties.
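That knowledge-dependence can be made concrete with a toy Bayesian sketch (my own illustration, with made-up priors): two observers who trust the presenter differently assign different prior probabilities to the coin being two-headed, and the same run of heads leaves them with different posteriors.

```python
# Two hypotheses about the presenter's coin: fair, or two-headed.
# Different priors (different trust in the presenter) give different
# probabilities for the same evidence. Numbers are illustrative only.

def p_two_headed(prior: float, heads_in_a_row: int) -> float:
    """Posterior probability the coin is two-headed after seeing only heads."""
    like_fair = 0.5 ** heads_in_a_row  # chance a fair coin produces this run
    like_two_headed = 1.0              # a two-headed coin always does
    num = prior * like_two_headed
    return num / (num + (1 - prior) * like_fair)

# A skeptic (prior 0.01) and an agnostic (prior 0.5) watching 1, 5, 10 heads:
for prior in (0.01, 0.5):
    print(prior, [round(p_two_headed(prior, k), 3) for k in (1, 5, 10)])
```

Both observers are reasoning correctly; they simply started from different knowledge, so they quantify different uncertainties.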
I enjoyed the read, and suspect the author and I could argue endlessly over the details and coffee. A few things jump out at me:
There is another lesson in here. Even if there is a statistical model for what should happen, this is always based on subjective assumptions — in the case of a coin flip, that there are two equally likely outcomes. To demonstrate this to audiences, I sometimes use a two-headed coin, showing that even their initial opinion of “50–50” was based on trusting me. This can be rash.
I don't agree. A 50-50 model based on an observed coin with a past performance of behaving as a fair coin isn't subjective. Like someone else said, this is a failure to validate.
Extending this to a weather model, he says that either it rains or it doesn't. Which is true. But the probability being generated is the past frequency of rain with the same or similar model inputs. The reference is objective, being past model performance. Whether it is accurate is a different question from whether it is subjective.
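The "past frequency with similar model inputs" bookkeeping can be sketched in a few lines; the history below is entirely made up, just to show the mechanics:

```python
from collections import defaultdict

# Hypothetical record: (forecast bucket, whether it actually rained).
history = [(0.3, False), (0.3, True), (0.3, False),
           (0.7, True), (0.7, True), (0.7, False), (0.7, True)]

# Observed past frequency of rain for each forecast bucket.
counts = defaultdict(lambda: [0, 0])  # bucket -> [rain days, total days]
for bucket, rained in history:
    counts[bucket][0] += rained
    counts[bucket][1] += 1

for bucket, (rain, total) in sorted(counts.items()):
    print(f"forecast {bucket:.0%}: rained {rain}/{total} = {rain / total:.0%}")
```

A well-calibrated model is one where those observed frequencies track the forecast buckets, which is an objective check on past performance, whatever one thinks of the inputs.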
Probability, indeed, can only rarely be said to ‘exist’ at all.
...
My argument is that any practical use of probability involves subjective judgements.
What he doesn't really address is that probability is an emergent property. Whether it is based on subjective vs. objective inputs depends on the thing being modeled. We regulate games of chance using objective statistical rules. We set insurance rates using actuarial tables driven by fairly objective criteria. Then on the other side of the spectrum you have fantasy sports and expert event forecasting, which involve lots of subjectivity.
I would recommend the books of Philip Tetlock for fun reads on the accuracy of subjective forecasts.
As a person who deals with lots of numbers daily for work, often in this very space of risk assessment and quantifying "subjective" probabilities, I found this to be an awesome read.
"This is remarkable: it shows that, starting from a specific, but purely subjective, expression of convictions, we should act as if events were driven by objective chances."
You should read the books "Fooled by Randomness" and "Antifragile" if you haven't already. They both explore very similar topics.
I was going to mention those! So I'll suggest Against the Gods instead, which is sort of a pop history exploration of mankind's evolution of the concept of probability.
It's an interesting article, but I always find myself wishing for more technical detail with this sort of thing, or at least an easy way to find it (more jargon, for example, to aid in Googling). Without that, I feel the article is written more to make me feel like I've learned something than to actually teach me.
Maybe another example to help?
We assume standard, unloaded, and properly weighted dice. These are the core assumptions that we normally take for granted, but the article argued for also specifying these assumptions.
A common pitfall encountered in probability is the odd/even question on 2d6. Some people think that since 7 is the most common total, odd totals will come up more often. In other words, their subjective probability of rolling an odd total is greater than 50%.
If you actually follow either informal proof of this problem, you will find the objective probability is exactly 50/50.
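That claim is also easy to verify by brute force, with a minimal enumeration of all 36 equally likely outcomes:

```python
from itertools import product

# All 36 equally likely (die1, die2) outcomes for two fair six-sided dice.
rolls = list(product(range(1, 7), repeat=2))

odd = sum(1 for a, b in rolls if (a + b) % 2 == 1)
even = len(rolls) - odd

print(odd, even)  # 18 18: odd and even totals are equally likely
```

The parity argument gives the same answer: the total is odd exactly when one die is odd and the other is even, which covers 2 × 3 × 3 = 18 of the 36 cases.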
Mirror: https://archive.is/IeP01