Close to the handle of the Big Dipper, there’s a huge ring on the sky that shouldn’t be there. The circular structure — an apparent overdensity of distant galaxies — has a circumference of 4.1 billion light-years. Standard cosmology models cannot easily explain such humongous structures in the mass distribution of the universe. According to PhD student Alexia Lopez (University of Central Lancashire, UK), the discovery “leads to the ultimate question: do we need a new standard model?”
Lopez first presented the result at the 243rd meeting of the American Astronomical Society in New Orleans in early January. Her research paper, coauthored with Roger Clowes (also at University of Central Lancashire) and Gerard Williger (University of Louisville), has now been posted to the arXiv astronomy preprint server.
Two years ago, the same team presented the discovery of another ultra-large-scale structure (uLSS): a giant arc, at a similar distance of 9.2 billion light-years, and more or less in the same part of the sky. “Two extraordinary uLSSs in such close configuration raises the possibility that together they form an even more extraordinary cosmological system,” they write.
Both the Giant Arc and the newly discovered Big Ring show up indirectly, via the absorption lines seen in the spectra of many thousands of distant quasars — active galaxies powered by supermassive black holes. Matter along the line of sight to a quasar absorbs light at specific wavelengths. In particular, the team is looking for absorption by singly ionized magnesium (Mg II), both in galaxies and in the gas between them. Due to the expansion of the universe, the wavelength of Mg II absorption shifts to the red side of the spectrum (longer wavelengths) when the absorber is farther away — a phenomenon known as redshift.
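The redshift relation described above is just a wavelength ratio, and can be sketched numerically. This is my own illustration, not code from the paper; the Mg II doublet rest wavelengths used here are standard laboratory values.

```python
# Relation between rest-frame and observed wavelength for a redshifted absorber:
#   lambda_obs = lambda_rest * (1 + z)
# Mg II doublet rest wavelengths (standard values, in Angstroms):
MGII_REST = (2796.35, 2803.53)

def observed_from_redshift(z, rest_wavelength):
    """Wavelength at which a rest-frame line appears for an absorber at redshift z."""
    return rest_wavelength * (1.0 + z)

def redshift_from_observed(obs_wavelength, rest_wavelength):
    """Redshift inferred from an observed absorption-line wavelength."""
    return obs_wavelength / rest_wavelength - 1.0

# An absorber at z = 0.8 shifts the 2796 A line well into the optical:
obs = observed_from_redshift(0.8, MGII_REST[0])
print(round(obs, 1))  # 5033.4
```

Surveys exploit exactly this: each Mg II absorber along a quasar sight line stamps its doublet at a wavelength that directly encodes its distance.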
Mmm, looking over the original paper the one thing I wish they had included was a (log) likelihood test to get an impression of how likely any structure existing that passes such tests may exist. Of course, they point out themselves that that's difficult:
Simulations are often advocated in contemporary astrophysics and cosmology, but we do not consider them likely to be effective or efficient here. Their complexity would be too great and would have too many unknowns and uncertainties. Consider, for example, that the simulations would have to incorporate: simulating the universe in general; the occurrence of quasars in that simulated universe; the observational parameters of the imaging and spectroscopic surveys and their on-sky variations; and the detection of the Mg II by software. Instead, we have taken the more practical approach of (i) using the data to correct the data, and (ii) seeking independent corroboration of features using independent tracers.
In Section 3.4 (page 24; page 25 of 39 counting PDF pages) they do use the Cuzick and Edwards test, which I'm unfortunately unfamiliar with. However:
We found that the clustering pattern in the BR field passes the p-value < 0.05 level in the second zoom, indicating tentative significant clustering in the field. In four other random fields there is no significant (p ≤ 0.05) clustering in the fields, suggesting that the clustering seen in the BR field is unique.
A p-value lower than 0.05 is not particularly significant, unless there are some twists to the CE test that I don't know about. I'm also curious how easily the other, less significant clustering they identified earlier would get past that test, just to get a 'taste' of how useful it is in this context.
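For a taste of what such a test involves, here is a toy, from-scratch sketch of a Cuzick–Edwards-style nearest-neighbour statistic with a permutation p-value. This is my own reconstruction on synthetic points, not the authors' code or data: the statistic counts, for each "case" point, how many of its k nearest neighbours are also cases, and compares that count against random case/control relabellings.

```python
import numpy as np

def ce_statistic(points, is_case, k=1):
    """Cuzick-Edwards T_k: number of case points among the k nearest
    neighbours of each case point, summed over all cases."""
    # pairwise squared distances (brute force; fine for small n)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)         # a point is not its own neighbour
    nn = np.argsort(d2, axis=1)[:, :k]   # indices of k nearest neighbours
    return int(is_case[nn][is_case].sum())

def ce_permutation_pvalue(points, is_case, k=1, n_perm=999, rng=None):
    """One-sided p-value from random relabellings (case count preserved)."""
    rng = np.random.default_rng(rng)
    observed = ce_statistic(points, is_case, k)
    labels = is_case.copy()
    count = 0
    for _ in range(n_perm):
        rng.shuffle(labels)              # random relabelling
        if ce_statistic(points, labels, k) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

# toy example: a tight clump of "cases" among uniform "controls"
gen = np.random.default_rng(0)
controls = gen.uniform(0, 10, (60, 2))
cases = gen.normal(5, 0.3, (15, 2))      # strongly clustered cases
pts = np.vstack([controls, cases])
lab = np.array([False] * 60 + [True] * 15)
print(ce_permutation_pvalue(pts, lab, k=3, rng=1))
```

With cases this tightly clumped the permutation p-value comes out far below 0.05; the interesting question for the paper is how the test behaves on marginal, borderline clustering.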
All in all, the paper is very interesting and I'd like to see it followed up, but I'm also skeptical of some of the statistics. Not that it makes it a less worthwhile piece of research; it's unfortunately very difficult to prove a negative, and in almost all regards the isotropy of the observable universe is very tightly constrained.
I enjoyed reading your thoughts, though I haven't had time to read the paper yet. I would say that I'm not so worried about the p-value threshold if they otherwise do a good job motivating the validity of the findings. Some journals have even stopped allowing p-values to establish findings without supporting arguments, because of how easy it is to use p-hacking to get significant-appearing results.
I do sympathize with their plight on using models with many unknowns. I've done computational modeling of infectious disease, and sometimes your specific problem just doesn't meet the applicability criteria of the usual models in your field.
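How easy it is to get significant-appearing results from noise alone is simple to demonstrate: test enough pure-noise samples at the 0.05 level and some will pass by chance. A self-contained sketch with synthetic data only, nothing from the paper:

```python
# Run a two-sided z-test for "mean = 0" on many samples of pure standard
# normal noise and count how many clear p < 0.05. About 5% will, by
# construction, even though none of them contains a real signal.
import math
import random

def z_test_pvalue(sample):
    """Two-sided p-value for 'mean = 0', assuming unit-variance data."""
    z = sum(sample) / math.sqrt(len(sample))
    # standard normal CDF via the error function
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

random.seed(42)
n_fields, n_points = 1000, 50
false_hits = sum(
    z_test_pvalue([random.gauss(0, 1) for _ in range(n_points)]) < 0.05
    for _ in range(n_fields)
)
print(false_hits)  # close to 0.05 * 1000 = 50, all of them spurious
```

This is exactly why the paper's comparison against multiple random control fields matters more than the raw p < 0.05 threshold in the BR field.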
Yeah, and in a way this is how the p-value was intended to be used: as a confirmation of what you already expect to be true.
Yeah, and the universe being large, there are tons of ways to get false positives. The "axis of evil" in cosmology is a good example. (edit: Well, maybe. We don't know. That's the point.)