8 votes

COVID-19 antibody seroprevalence in Santa Clara County, California

10 comments

  1. skybrian
    (edited)
    Link

    From another critique by Andrew Gelman:

    If the specificity is 90%, we’re sunk. With a 90% specificity, you’d expect to see 333 positive tests out of 3330, even if nobody had the antibodies at all. Indeed, they only saw 50 positives, that is, 1.5%, so we can be pretty sure that the specificity is at least 98.5%. If the specificity were 98.5%, the observed data would be consistent with zero, which is one of Rushton’s points above. On the other hand, if the specificity were 100%, then we could take the result at face value.

    So how do they get their estimates? Again, the key number here is the specificity. Here’s exactly what they say regarding specificity:

    A sample of 30 pre-COVID samples from hip surgery patients were also tested, and all 30 were negative. . . . The manufacturer’s test characteristics relied on . . . pre-COVID sera for negative gold standard . . . Among 371 pre-COVID samples, 369 were negative.

    This gives two estimates of specificity: 30/30 = 100% and 369/371 = 99.46%. Or you can combine them together to get 399/401 = 99.50%. If you really trust these numbers, you’re cool: with y=399 and n=401, we can do the standard Agresti-Coull 95% interval based on y+2 and n+4, which comes to [98.0%, 100%]. If you go to the lower bound of that interval, you start to get in trouble: remember that if the specificity is less than 98.5%, you’ll expect to see more than 1.5% positive tests in the data no matter what!
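    A quick sketch of Gelman's arithmetic (Python, illustrative only; the Agresti-Coull interval adds 2 successes and 4 trials before applying the normal approximation):

      import math

      # Expected positives out of 3,330 tests at a given specificity,
      # assuming zero true prevalence: every positive is then false.
      n_tested = 3330
      for spec in (0.90, 0.985, 1.00):
          print(f"specificity {spec:.1%}: ~{(1 - spec) * n_tested:.0f} positives expected")

      # Agresti-Coull 95% interval for specificity, y=399 out of n=401.
      y, n = 399, 401
      p = (y + 2) / (n + 4)
      se = math.sqrt(p * (1 - p) / (n + 4))
      print(f"95% CI: [{p - 1.96 * se:.1%}, {min(1.0, p + 1.96 * se):.1%}]")

    This reproduces the numbers above: about 333 expected positives at 90% specificity, 50 at 98.5%, and a specificity interval of roughly [98.0%, 100%].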

    [...]

    I think the authors of the above-linked paper owe us all an apology. We wasted time and effort discussing this paper whose main selling point was some numbers that were essentially the product of a statistical error.

    I’m serious about the apology. Everyone makes mistakes. I don’t think the authors need to apologize just because they screwed up. I think they need to apologize because these were avoidable screw-ups. They’re the kind of screw-ups that happen if you want to leap out with an exciting finding and you don’t look too carefully at what you might have done wrong.

    6 votes
  2. skybrian
    (edited)
    Link

    From the abstract:

    On 4/3-4/4, 2020, we tested [Santa Clara county] residents for antibodies to SARS-CoV-2 using a lateral flow immunoassay. Participants were recruited using Facebook ads targeting a representative sample of the county by demographic and geographic characteristics. We report the prevalence of antibodies to SARS-CoV-2 in a sample of 3,330 people, adjusting for zip code, sex, and race/ethnicity. We also adjust for test performance characteristics using 3 different estimates: (i) the test manufacturer's data, (ii) a sample of 37 positive and 30 negative controls tested at Stanford, and (iii) a combination of both.

    Results: The unadjusted prevalence of antibodies to SARS-CoV-2 in Santa Clara County was 1.5% (exact binomial 95CI 1.11-1.97%), and the population-weighted prevalence was 2.81% (95CI 2.24-3.37%). Under the three scenarios for test performance characteristics, the population prevalence of COVID-19 in Santa Clara ranged from 2.49% (95CI 1.80-3.17%) to 4.16% (2.58-5.70%).

    These prevalence estimates represent a range between 48,000 and 81,000 people infected in Santa Clara County by early April, 50-85-fold more than the number of confirmed cases.

    Conclusions: The population prevalence of SARS-CoV-2 antibodies in Santa Clara County implies that the infection is much more widespread than indicated by the number of confirmed cases. Population prevalence estimates can now be used to calibrate epidemic and mortality projections.
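    The "population-weighted" figure comes from reweighting each demographic cell of the sample by its share of the county. Here is a minimal sketch of that post-stratification idea; the strata and counts below are made up for illustration (the paper's actual cells are zip code, sex, and race/ethnicity):

      # Hypothetical strata and counts, for illustration only.
      sample = {               # stratum: (positives, tested)
          "stratum_A": (30, 1500),
          "stratum_B": (20, 1830),
      }
      county_share = {         # each stratum's share of the county population
          "stratum_A": 0.6,
          "stratum_B": 0.4,
      }
      weighted = sum(county_share[s] * pos / n
                     for s, (pos, n) in sample.items())
      print(f"population-weighted prevalence: {weighted:.2%}")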

    From the discussion:

    We consider our estimate to represent the best available current evidence, but recognize that new information, especially about the test kit performance, could result in updated estimates. For example, if new estimates indicate test specificity to be less than 97.9%, our SARS-CoV-2 prevalence estimate would change from 2.8% to less than 1%, and the lower uncertainty bound of our estimate would include zero. On the other hand, lower sensitivity, which has been raised as a concern with point-of-care test kits, would imply that the population prevalence would be even higher. New information on test kit performance and population should be incorporated as more testing is done and we plan to revise our estimates accordingly.
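    The 97.9% threshold they mention is consistent with the standard sensitivity/specificity correction (the Rogan-Gladen estimator); whether the paper uses exactly this formula is my assumption, and the 80% sensitivity below is likewise assumed for illustration:

      # Rogan-Gladen correction: back out true prevalence from the
      # apparent (test-positive) rate and the test's error rates.
      def true_prevalence(apparent, sensitivity, specificity):
          return (apparent + specificity - 1) / (sensitivity + specificity - 1)

      apparent = 0.028                       # the weighted 2.8% above
      for spec in (1.0, 0.995, 0.979):
          p = true_prevalence(apparent, sensitivity=0.80, specificity=spec)
          print(f"specificity {spec:.1%}: corrected prevalence {p:.2%}")

    At 97.9% specificity the corrected estimate indeed drops below 1%, matching the sensitivity analysis quoted above.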

    Here's the CNBC article about it. Here's an article in Nature.

    4 votes
    1. skybrian
      Link Parent

      Here is a blog post with a more detailed critique:

      1 The False Positive Rate of the Test is High

      [...]

      Naively, getting 2 false positives out of 401 total and extrapolating to the sample population of 3330 would mean 2/401 * 3330 = 16.6 false positives out of the 50 total reported positives. That would mean fully 1/3 of the reported positives could be false positives.

      [...]

      TLDR: because the authors reported 2 false positives out of 401 tested samples, there is a really wide confidence interval on what our actual false positive rate could be, and it could be significantly higher than 1.2%. This could account for many if not all of the 50 reported positives in their study. This is one possible failure mode.
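      To make the "really wide confidence interval" concrete, here is a sketch using SciPy's exact (Clopper-Pearson) interval for 2 false positives in 401 known negatives:

        from scipy.stats import beta

        # Exact 95% CI for the false positive rate, given 2 false
        # positives among 401 known-negative samples.
        k, n = 2, 401
        lo = beta.ppf(0.025, k, n - k + 1)
        hi = beta.ppf(0.975, k + 1, n - k)
        print(f"95% CI for FPR: [{lo:.2%}, {hi:.2%}]")    # ~[0.06%, 1.79%]
        print(f"implied false positives in 3,330 tests: "
              f"{lo * 3330:.0f} to {hi * 3330:.0f}")      # ~2 to ~60

      At the upper bound, false positives alone could exceed the 50 positives the study observed.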

      [...]

      2 Were Participants Enriched for COVID-19 Cases?

      2A. Exposed people may have signed up for the study to get tested [...]

      2B. Exposed people may have recruited other exposed people for the study

      Recall that recruitment of study participants was done through Facebook. People who thought they had symptoms or exposure could be sharing links to the study in private groups, WhatsApp chats, email threads, and the like. If one of those groups was for people who had COVID-19 symptoms or exposure, then it’s game over: you could get a “super-recruiter” event where one person recruits N other enriched people into the study. That could significantly boost the number of positives beyond what you’d see in a random sample of Santa Clara.

      [...]

      3 The Study Would Imply Faster Spread than Past Pandemics

      In order to generate thousands of excess deaths in a few weeks with a very low infection fatality rate of 0.12–2% as claimed in the paper [see above], the virus would have to be wildly contagious. It would mean all the deaths are coming in the last few weeks as the virus goes vertical, churns out millions of cases per week to get thousands of deaths, and then suddenly disappears as it runs out of bodies.
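      The arithmetic behind that claim: deaths ≈ infections × IFR, so a low IFR forces an enormous infection count for any fixed death toll. An illustrative sketch (the death toll below is hypothetical, not from the study):

        # Infections implied per 1,000 deaths across the claimed IFR range.
        deaths = 1000
        for ifr in (0.0012, 0.02):           # 0.12% to 2%
            print(f"IFR {ifr:.2%}: ~{deaths / ifr:,.0f} infections implied")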

      7 votes
    2. skybrian
      Link Parent

      One way to think about this is that the fatality rate appears very low in the Bay Area. There have been 70 deaths attributed to the coronavirus in Santa Clara County, and there are 188 patients in the hospital (source), so a back-of-the-envelope calculation puts the fatality rate well below 1%, even if you assume some hospitalized patients will eventually die.
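      Spelling out that back-of-the-envelope calculation, taking the study's own implied infection counts at face value:

        deaths = 70
        for infected in (48_000, 81_000):    # the study's implied range
            print(f"{infected:,} infected: IFR ~{deaths / infected:.2%}")
        # Even counting every current hospital patient as a future death:
        print(f"pessimistic bound: {(70 + 188) / 48_000:.2%}")   # ~0.54%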

      But the prevalence is also not nearly high enough for "herd immunity", if you were wondering about that.

      4 votes
    3. skybrian
      Link Parent

      There is some criticism on Reddit. The central question is: what's the false positive rate? Under a realistic estimate, false positives could account for over half of the test-positives in the study.

      Also, the sample was non-random.

      4 votes
    4. DanBC
      Link Parent

      I think the problem with popular reporting is that people don't know what "specificity" or "sensitivity" mean for testing, and without that knowledge they can't critically evaluate a paper.

      2 votes
  3. skybrian
    Link

    From Twitter: "In the Stanford study, a person is classified as positive 'by either IgG or IgM.' But then they only included the IgG false positive rate, and failed to include the IgM one. Given that they treat either test as positive, including both doubles the false positive rate."
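    The doubling claim follows from basic probability: under an "either test" rule, small false positive rates roughly add. A small sketch with hypothetical per-test rates (the 0.5% figures below are assumptions, not the kit's published numbers):

      # Positive if EITHER IgG or IgM reads positive: the combined false
      # positive rate is 1 - (1 - p_igg)(1 - p_igm) assuming independent
      # errors, which is about p_igg + p_igm when both are small.
      p_igg, p_igm = 0.005, 0.005      # hypothetical per-test FPRs
      combined = 1 - (1 - p_igg) * (1 - p_igm)
      print(f"combined FPR: {combined:.3%}")    # ~0.998%, roughly double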

    3 votes
    1. vektor
      Link Parent

      Never mind that my favourite source (Christian Drosten) indicates that IgM tests are often positive after a similar infection, so a recent flu or cold could cause an IgM test to read positive for COVID-19. The false positive rate for the IgM test would therefore be a lot higher.

      All in all, I don't think it looks good for this study.

      3 votes
  4. skybrian
    Link

    From an article updated three days ago, Feud over Stanford coronavirus study: ‘The authors owe us all an apology’:

    In response, on Sunday, the Stanford study’s authors said they are planning to soon release a detailed appendix that addresses many of the “constructive comments and suggestions” the team has received.

    [...]

    Others accused the authors of having agendas before going into the study. Back in March, Bhattacharya and Bendavid wrote an editorial in the Wall Street Journal arguing that a universal quarantine may not be worth the costs. Their colleague John Ioannidis has written that we lack the data to make such drastic economic sacrifices.

    [...]

    Addressing the critics, Stanford’s Ioannidis, professor of medicine and biomedical data science at Stanford University, promised an expanded version of their study will be posted soon. “The results remain very robust,” he said.

    [...]

    In the end, no single study is going to answer the question of how prevalent COVID-19 is in our communities, scientists said. More studies with different technologies and analytic approaches are needed.

    That’s coming. A UC Berkeley project, which will begin in May, will test a large and representative swath of 5,000 East Bay residents. Scientists will take saliva, swab and blood samples from volunteers between the ages of 18 and 60 around the region.

    Starting Monday, UC San Francisco and a privately-funded operation will test all 1,680 residents of rural Bolinas for evidence of the virus. UCSF will launch a similar effort Saturday in San Francisco’s densely populated and largely Latino Mission District, where it hopes to test 5,700 people.

    Results are expected soon from seroprevalence surveys run by other groups around the world, including teams in China, Australia, Iceland, Italy and Germany.

    2 votes
  5. skybrian
    Link

    A Stanford Professor’s Wife Recruited People For His Coronavirus Study By Claiming It Would Reveal If They Could “Return To Work Without Fear”

    Asked for comment, Bhattacharya said that his wife’s email was not associated with the research team’s work. “The email you reference was sent out without my permission or my knowledge or the permission of the research team,” he wrote in an email to BuzzFeed News. He said that he believes the note was also shared on social media sites.

    Bhattacharya acknowledged that the email skewed the makeup of the study’s participants, but argued that the researchers corrected for the difference in volunteers. “Our tracking of signups very strongly suggests that this email attracted many people from the wealthier and healthier parts of Santa Clara County to seek to volunteer for the study,” he wrote. “In real time, we took immediate steps to slow the recruitment from these areas and open up recruitment from all around Santa Clara County.”

    He added that a revised preprint would be released in the next two days and would address criticisms of the study’s sample selection.

    1 vote