I found this article interesting as I didn’t know how scientists actually use telescopes to figure out what planets are composed of, and how that translates into “possible signs of life”. It turns out there’s a lot of uncertainty and guesswork, since scientists are using an incompletely understood field to make inferences about other unknowns.
Keep in mind that it's not really unknown science. Uncertainty to scientists is not the same thing as uncertainty to lay people. For example, in particle physics, they don't claim anything until they are 99.99997% sure that what they are seeing isn't just random chance.
That has two effects in this conversation: first, it can be incredibly difficult to assert that what you're seeing is what's actually there. Second, even once you're sure of what you're seeing, there may be unknown processes that could produce the same thing.
So it's not exactly guesswork, but it isn't 100% proofwork either.
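To make that threshold concrete: 99.99997% is just the one-sided tail probability of a normal distribution at five standard deviations, the usual particle-physics discovery bar. A minimal sketch of the arithmetic (standard library only):

```python
from math import erf, sqrt

def phi(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

# One-sided p-value at the 5-sigma "discovery" threshold.
p = 1 - phi(5.0)
print(f"chance of a random fluctuation this large: {p:.2e}")  # ~2.9e-07
print(f"confidence: {(1 - p):.5%}")                           # ~99.99997%
```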
There's a bit of a difference between particle physics and astrobiology in that particle physics has a good theoretical framework that has been experimentally validated many times. So when we look to expand the Standard Model through experimentation, we have tools to create statistical controls.
In astrobiology there isn't a good theoretical framework that has been quantitatively validated or backed with experimentation. There are just too many unknowns. A famous example is the Viking Labeled Release experiment, which was designed to detect signs of metabolic activity. The results were positive, but it was concluded that there were other possible causes, and this caused NASA to step back and wonder whether it could even design a remote experiment to detect life. This started a mindset that we won't really be able to declare a discovery of life without a returned sample, so that every alternative can be exhausted against the real thing.
In this case, they are feeding three sets of unvalidated statistics into a Bayesian model that carries its own error, with no foolproof way of measuring the unknown alternatives for each of the inputs.
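To illustrate that point with numbers I've made up entirely (not anything from the paper): plug a biosignature detection into Bayes' rule and watch the posterior swing as you vary the two inputs nobody can validate yet, the prior probability of life and the rate of unknown abiotic false positives.

```python
# Toy Bayesian update, illustrative only -- none of these numbers are real.
def posterior(prior: float, p_signal_given_life: float,
              p_signal_given_abiotic: float) -> float:
    """P(life | signal) via Bayes' rule."""
    evidence = (p_signal_given_life * prior
                + p_signal_given_abiotic * (1.0 - prior))
    return p_signal_given_life * prior / evidence

# Sweep the two quantities astrobiology cannot currently pin down.
for prior in (1e-1, 1e-3, 1e-5):      # prior probability of life
    for fp in (1e-1, 1e-2, 1e-3):     # P(signal | unknown abiotic process)
        post = posterior(prior, 0.9, fp)
        print(f"prior={prior:.0e}  false-positive rate={fp:.0e}  "
              f"->  P(life|signal)={post:.4f}")
```

The same "detection" comes out anywhere from a near-certainty to a one-in-ten-thousand chance depending on inputs we have no way to measure, which is exactly the problem.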
I remember doing a deep dive on these subjects many years ago in a graduate physics course, including writing a grant proposal against the NASA astrobiology roadmap. When astrobiologists say there is a lot of uncertainty, they don't mean they are just a few well-validated percent off of a statistical measure. They mean their underlying assumptions could be, and likely will be, shown to be completely wrong at some point. Still, they are doing the best they can given the availability of data.
Yes, of course. Observing spectra is a well-understood, exact science, though, and letting laypeople think that detecting alien life is all guesswork will lead them to believe the whole system is made-up BS. There is a difference between reporting accurately and reporting in a way that accurately conveys ideas to the public. When scientists talk about uncertainty, they have a way of leading the public to think it's all just made up and no one knows what they're talking about, so it's important to respond to misconceptions that arise from scientists forgetting how niche their area of study really is.
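And the solid half of the pipeline really is mechanical. Here's a toy sketch of the band-matching idea behind transmission spectroscopy, with an entirely fabricated spectrum (the band centers are the familiar infrared ones; everything else is made up):

```python
# Toy band-matching sketch -- the spectrum below is fabricated.
# A transiting planet blocks slightly more starlight at wavelengths where
# its atmosphere absorbs, so excess transit depth near a molecule's known
# band suggests that molecule is present.
KNOWN_BANDS_UM = {"H2O": 1.4, "CH4": 3.3, "CO2": 4.3}  # approx. band centers

spectrum = [  # (wavelength in microns, transit depth in %)
    (1.0, 1.00), (1.4, 1.05), (2.0, 1.00), (3.3, 1.04), (4.3, 1.06),
]

BASELINE = 1.00   # continuum transit depth, %
THRESHOLD = 0.02  # minimum excess to call a feature, arbitrary here

for wl, depth in spectrum:
    excess = depth - BASELINE
    if excess > THRESHOLD:
        matches = [m for m, c in KNOWN_BANDS_UM.items() if abs(wl - c) < 0.1]
        print(f"{wl} um: excess depth {excess:.2f}% -> candidate {matches}")
```

The uncertainty everyone is arguing about lives downstream of this step, in what a detected molecule actually implies.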
I'll be the first to agree that scientists are often their own worst enemy when it comes to communicating with the public.
At the same time, there is such a large difference between particle physics and astrobiology that I don't think the comparison helps the lay person. While spectroscopy is very sophisticated, the conjecture around the processes that give rise to the elements we observe is one of the major sources of uncertainty. There are so many proxy variables used, and so many unknown alternative processes, that the idea of computing a percent probability with any confidence is misleading. Saying that scientists don't view uncertainty the same way as a lay person, and then giving an example threshold of 99.99997%, seems to push too far the other way.
My personal feeling is that overstating certainty is just as damaging, if not more so, to public perception in the long run. The way the CDC made authoritative claims with middling justification eroded confidence in it when it came to the next round of recommendations, and the next. It seems prudent not to overstate certainty in results, so that when, not if, we revise our astrobiology results, we don't have a credibility gap with the public.