AI is already helping police the literature
Until recently, technological assistance in self-correction was mostly limited to plagiarism detectors. But things are changing. Machine-learning services such as ImageTwin and Proofig now scan millions of figures for signs of duplication, manipulation and AI generation.
Natural language processing tools flag “tortured phrases” – the telltale word salads of paper mills. Bibliometric dashboards such as the one by Semantic Scholar trace whether papers are cited in support or contradiction. AI systems, especially agentic, reasoning-capable models that are increasingly proficient in mathematics and logic, will soon uncover more subtle flaws.
For example, the Black Spatula Project explores the ability of the latest AI models to check published mathematical proofs at scale, automatically identifying algebraic inconsistencies that eluded human reviewers. Our own work, mentioned above, also relies substantially on large language models to process large volumes of text.
Given full-text access and sufficient computing power, these systems could soon enable a global audit of the scholarly record. A comprehensive audit will likely find some outright fraud and a much larger mass of routine, journeyman work with garden-variety errors.
We do not yet know how prevalent fraud is, but we do know that an awful lot of scientific work is inconsequential. Scientists know this; it is much discussed that a good deal of published work is never or very rarely cited. To outsiders, this revelation may be as jarring as uncovering fraud, because it collides with the image of dramatic, heroic scientific discovery that populates university press releases and trade press treatments.
What might give this audit added weight is its AI author, which may be seen as (and may in fact be) impartial and competent, and therefore reliable. As a result, these findings will be vulnerable to exploitation in disinformation campaigns, particularly since AI is already being used to that end.
Reframing the scientific ideal
Safeguarding public trust requires redefining the scientist’s role in more transparent, realistic terms. Much of today’s research is incremental, career‑sustaining work rooted in education, mentorship and public engagement.
If we are to be honest with ourselves and with the public, we must abandon the incentives that pressure universities and scientific publishers, as well as scientists themselves, to exaggerate the significance of their work. Truly ground-breaking work is rare. But that does not render the rest of scientific work useless.
A more humble and honest portrayal of the scientist as a contributor to a collective, evolving understanding will be more robust to AI-driven scrutiny than the myth of science as a parade of individual breakthroughs.
A sweeping, cross-disciplinary audit is on the horizon. It could come from a government watchdog, a think tank, an anti-science group or a corporation seeking to undermine public trust in science. Scientists can already anticipate what it will reveal. If the scientific community prepares for the findings – or better still, takes the lead – the audit could inspire a disciplined renewal. But if we delay, the cracks it uncovers may be misinterpreted as fractures in the scientific enterprise itself.
Science has never derived its strength from infallibility. Its credibility lies in the willingness to correct and repair. We must now demonstrate that willingness publicly, before trust is broken.
Alexander Kaurov, PhD Candidate in Science and Society, Te Herenga Waka — Victoria University of Wellington and Naomi Oreskes, Professor of the History of Science, Harvard University
This article is republished from The Conversation under a Creative Commons license. Read the original article.