From the article:
[…] Derek de Solla Price, a physicist turned historian of science at Yale, published studies demonstrating that scientific literature had been growing exponentially since the seventeenth century—a finding that raised urgent questions about how anyone could keep up, and how institutions could identify what mattered. Thomas Kuhn’s Structure of Scientific Revolutions reframed the history of science around paradigms and the communities that held them, making the social organization of science central to its epistemology. And Fritz Machlup, an economist at Princeton, began quantifying what he called “the knowledge industry,” treating the production and distribution of knowledge as an economic sector susceptible to the same analysis as manufacturing or agriculture. Together, these works made science legible as a system—and legibility, as James Scott has argued, is the precondition for management.
This new legibility of science created a problem: how do you manage a system that produces more literature than anyone can read? One answer, developed through the 1960s and institutionalized in the 1970s, was citation metrics. Eugene Garfield’s Institute for Scientific Information built tools to track who cited whom, which journals mattered, which papers had influence. The intention was to solve an information overload problem—to help researchers find the important work in a flood of publication. This was a reasonable response to a real problem. But solutions curdle. The tools built to navigate the literature became tools to evaluate the people who produced it. Citation counts migrated from library science into hiring and promotion decisions. What had been an instrument for managing information became an instrument for managing careers.
…
Robert Merton saw this coming, though he couldn’t stop it. In 1940, long before citation indices existed, Merton had theorized the phenomenon he called “goal displacement”—the process by which instrumental values become terminal values, means transmuted into ends.
(An example of goal displacement would be when increasing citation counts becomes an end in itself, instead of an imperfect measurement of influence.)
Into this context—a scientific system already optimized for measurable output, already decades into goal displacement, already reshaping research priorities around metrics rather than problems—arrive large language models.
They did not arrive as disruptors. They arrived as intensifiers. LLMs function as an accelerant for the existing optimization machine, making the logic run faster rather than challenging its foundations. The technology can help write more papers, synthesize more literature reviews, produce more of the shapes that hiring committees evaluate in their twelve minutes with a file. It needn’t have been this way, or at least one can imagine it being otherwise. In a different institutional context, LLMs might be enrolled as tools for synthesis, for identifying gaps in literatures, for connecting disparate fields. Some of this happens, in local pockets, where researchers use them as tools for exploration and connection rather than production. But the dominant pattern is intensification. The technology is shaped by the logic already in place, and it makes that logic run faster.
That is, scientists are using LLMs to churn out papers faster.
The logic didn’t stay contained in the academy. When Larry Page and Sergey Brin developed PageRank in the late 1990s, they drew explicitly on citation analysis. Their foundational paper cites Garfield alongside Pinski and Narin, whose influence-weighting method provided the recursive structure for the algorithm. Garfield’s solution to the problem of scientific information overload became Google’s solution to the problem of internet information overload, and it was gamed in the same ways. Search engine optimization is goal displacement with tighter feedback loops: the tools built to identify what mattered became tools to manufacture the appearance of mattering, and the manufacturing reshaped what got produced. The pattern Merton had diagnosed in bureaucracies, and worried about in science, became the organizing logic of the web.
And from there to social media, where influencers ask people to “like and subscribe” to increase their metrics. People like to blame “the algorithm.” Apparently citation counts are an early form of that?
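To make the family resemblance concrete, here is a minimal sketch of the recursive idea that Pinski and Narin’s influence weighting passed along to PageRank: a node’s score is a damped sum of the scores of whatever points at it, computed by power iteration. This is an illustration of the shared structure, not Google’s production algorithm, and the three-node citation graph is made up for the example.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each node to the list of nodes it cites (points to)."""
    nodes = list(links)
    n = len(nodes)
    # Start with a uniform score; iterate until the recursion settles.
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        # Every node keeps a small baseline; the rest flows along edges.
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for source, targets in links.items():
            share = damping * rank[source]
            if targets:
                # A citation passes a slice of the citer's own score.
                for target in targets:
                    new_rank[target] += share / len(targets)
            else:
                # A node that cites nothing spreads its score evenly.
                for node in nodes:
                    new_rank[node] += share / n
        rank = new_rank
    return rank


# Hypothetical citation graph: A and B both cite C; C cites A.
print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))
```

Run it and B, which nothing cites, ends up with only the baseline score. That is the whole game: visibility flows to whatever the graph already points at, which is exactly what makes the score worth gaming.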
LLMs didn’t create the dysfunction in scientific publishing; they inherited it, intensified it, made it run faster. Like a normally benign pathogen wreaking havoc in an immunocompromised patient, they point to the problem, but imagining them as the totality of the problem would be a deadly mistake.
They do the same for the web, which had been restructured by the same logic once PageRank exported Garfield’s citation analysis to organize the internet—and they generate paper-mill product and SEO content with equal facility because both are downstream of the same optimization, and their users are targeting isomorphic systems. One might hope that this acceleration heightens the contradictions, that the systems produce so much slop so quickly that the problem finally becomes undeniable. But, as we should all know by now, systems can persist in dysfunction indefinitely, and absurdity is not self-correcting. Whether the acceleration produces collapse or adaptation or simply more of the same is not a question about the technology, and it won’t be answered by debates about capabilities. It will be answered by the institutions that have been running this program for sixty years. Not, probably, by those who presently hold power within them—but by those who can build countervailing power, and who decide to change what gets measured, or finally wrench the institution of science itself from the false promise of measurement.
That is, the dysfunction will continue as long as people are rewarded for making numbers go up.