From the article (followed by commentary from Derek Lowe, "Fraud, So Much Fraud"):
In 2016, when the U.S. Congress unleashed a flood of new funding for Alzheimer’s disease research, the National Institute on Aging (NIA) tapped veteran brain researcher Eliezer Masliah as a key leader for the effort. He took the helm at the agency’s Division of Neuroscience, whose budget—$2.6 billion in the last fiscal year—dwarfs the rest of NIA combined.
...
But over the past 2 years questions have arisen about some of Masliah’s research. A Science investigation has now found that scores of his lab studies at UCSD and NIA are riddled with apparently falsified Western blots—images used to show the presence of proteins—and micrographs of brain tissue. Numerous images seem to have been inappropriately reused within and across papers, sometimes published years apart in different journals, describing divergent experimental conditions.
After Science brought initial concerns about Masliah’s work to their attention, a neuroscientist and forensic analysts specializing in scientific work who had previously worked with Science produced a 300-page dossier revealing a steady stream of suspect images between 1997 and 2023 in 132 of his published research papers. (Science did not pay them for their work.) “In our opinion, this pattern of anomalous data raises a credible concern for research misconduct and calls into question a remarkably large body of scientific work,” they concluded.
As the article details, this all has some direct drug discovery implications, particularly for an antibody called prasinezumab which targets alpha-synuclein. All four of the fundamental papers about prasinezumab (as cited on the web site of its developer, Prothena) are full of manipulated images, unfortunately. Prothena and Roche reported results in a Parkinson's trial with it in 2022, and the antibody was found to have no benefit at all (it's undergoing another trial now). Given the difficulty of neurodegeneration therapy, it's hard to say if that result is an honest failure of a worthwhile idea, or whether the entire effort has been built on data that simply aren't real. But we cannot rule out the second possibility, at all.
...
There's also a proposed Alzheimer's therapy called cerebrolysin, a peptide mixture derived from porcine brain tissue. An Austrian company (Ever) has run some small inconclusive trials on it in human patients and distributes it to Russia and other countries (it's not approved in the US or the EU). But the eight Masliah papers that make the case for its therapeutic effects are all full of doctored images, too. A third drug, minzasolmin, is supposed to prevent misfolding of alpha-synuclein and Masliah and co-workers published the original papers that make the case for its effects. Papers which have doctored images in them. Masliah co-founded a company (Neuropore) that has been developing the drug, and they have a partnership with Belgian drugmaker UCB. It has to be noted that this one has taken some fire already: a paper last year on its in vivo effects brought some pointed criticism that the drug's short half-life should have made it impossible for it to work under the conditions described. The authors responded that they had other evidence of the drug's mechanism, and I'm glad to hear it, because in addition to the original papers, their previous paper on the drug had images in it credited to Masliah that also seem to have been digitally modified. Minzasolmin is currently in Phase II trials.
I like how someone collated 26 years of receipts to absolutely drag this person, deservedly so.

Science is in for a reckoning, hopefully. It is in need of deep reform. We knew long ago that experiments couldn't be replicated, even within labs (let alone externally). But we decided to do what we do best: just ignore the facts.
The system creates an environment where like begets like and people's egos stand in the way of honest reporting. When your whole career depends on producing positive data (and I mean that in both the scientific and the colloquial sense), and you are required to justify your research on the research of the past, you just end up in an echo chamber. Even if you realize the situation you're in, I've heard scientists describe the addiction to falsifying results as being like using an illicit drug. The first time you do it, you think to yourself, "It's just this one time. My result was positive, but this image doesn't convey that message, so let me just fix it to show what I know is true." Then, when the next experiment doesn't show your expected results either, and you don't want to be caught in a lie, you do it again. Until you've built your house of cards and are terrified to topple it.

Some people say those scientists eventually come to believe their own lies: so deep in the web of deception that they no longer know where the truth is. I dunno; both could be true, I suppose.
This is why all research should be published, even if the results show a dead end. Null results are important. A well-designed study that ends up showing no result is still good to know about, so others can learn what not to do.

A friend wasted a year doing research that showed no result, then later learned another researcher had done a similar experiment years ago with no result.
The dossier challenges far more studies than the two cited in NIH’s statement, including many that underpin the development and testing of experimental drugs (see sidebar). Masliah’s work, for example, helped win a nod from the U.S. Food and Drug Administration (FDA) for clinical trials of an antibody called prasinezumab for Parkinson’s. Made by Prothena—a company backed by big money—the drug is intended to attack alpha-synuclein, whose build up in the brain has been linked to the condition’s debilitating physical and cognitive symptoms.
But in a trial of 316 Parkinson’s patients, reported in 2022 in The New England Journal of Medicine, prasinezumab showed no benefit compared with a placebo. And volunteers given infusions of the antibody suffered from far more side effects such as nausea and headaches than those in a placebo group who received sham infusions. Prothena is now collaborating in another trial of the drug candidate involving 586 Parkinson’s patients.
I think much ink has been spilled about the "replication crisis" and bad incentives in academia. On the plus side, here you have a company that put his theories to the test (testing a hypothesis by other means is superior to any replication), got a negative result, and published it in a prestigious journal. So this is science self-correcting, in an almost ideal way.

"Almost" because of course you'd prefer to have spared the patients in this trial the false hope. It makes me wonder why the drug company didn't apply more skepticism to his results before pursuing a clinical trial. Maybe many other companies did and never mentioned it to the public.
Although the pressures are real, I think "no matter what" is a bit of an exaggeration for both business and academia. It's not "no matter what" for most people. But it is for some. There are crooks out there.
Also, corruption is not new and, as historical trends go, it's unclear that it's worse. People tend to assume things were somehow better at some hazy time in the past, but when I read history, it seems pretty clear that there were many time periods with worse corruption.
I agree that it makes sense to look for systemic issues. Incentives can sometimes be too strong. They are often stupid and promote the wrong behavior. When people have to resist what the incentives would lead them to do in order to do the right thing, that's a sign the incentives are pulling too hard in the wrong direction.
I doubt that no incentives is the answer, though. I think we'd all agree that people should be paid for good work, and there should be some kind of price paid for cheating.
Good work needs to be checked, or there's an incentive to cheat. Unfortunately, that takes time and effort. Trusting people is easier and cheaper, and despite widespread cynicism, we still live in a pretty high-trust society where people trust strangers quite a lot.
Incentives are often a matter of perception rather than reality. For example, how likely do you think it is that you'll be caught? Are cheaters being realistic about their long-term interests? If not, a publicity campaign can help. Stories of people getting caught can change incentives if people learn from them.