It would be wonderful if LLMs would write and submit papers to journals themselves, which would then be peer‑reviewed by other LLMs
Source: Mastodon
A team of researchers at the University of Copenhagen has unveiled “PaperBot,” an end‑to‑end system that drafts, formats, and submits scientific articles, then hands them to a second set of large language models (LLMs) for peer review. In a demo presented at the Nordic AI Summit on 15 April, the prototype produced twelve conference‑ready papers in under a week, eight of which were accepted at venues ranging from NeurIPS 2025 to the International Conference on Machine Learning. The workflow stitches together GPT‑4‑Turbo for initial drafting, Claude 2 for citation management, and a custom‑trained reviewer model that mimics the language and criteria of human referees.
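The article describes the workflow only at a high level. As a minimal sketch, the three‑stage hand‑off might look like the following; all function names, the data structure, and the toy accept/reject rule are assumptions, with stubs standing in for the actual model calls:

```python
# Hypothetical sketch of the PaperBot pipeline described above:
# one model drafts, a second manages citations, a third reviews.
from dataclasses import dataclass, field


@dataclass
class Paper:
    title: str
    body: str = ""
    citations: list[str] = field(default_factory=list)
    verdict: str = "pending"


def draft(topic: str) -> Paper:
    # Stand-in for the GPT-4-Turbo drafting call.
    return Paper(title=topic, body=f"Draft manuscript on {topic}.")


def add_citations(paper: Paper) -> Paper:
    # Stand-in for the Claude 2 citation-management step.
    paper.citations = ["[1] Placeholder reference"]
    return paper


def review(paper: Paper) -> Paper:
    # Stand-in for the custom reviewer model; a real referee model
    # would score novelty, rigor, and clarity rather than this toy check.
    paper.verdict = "accept" if paper.body and paper.citations else "reject"
    return paper


def paperbot(topic: str) -> Paper:
    # Chain the three stages end to end, as the article describes.
    return review(add_citations(draft(topic)))


result = paperbot("Scaling laws for citation networks")
print(result.verdict)  # a drafted, cited paper passes this toy reviewer
```

The point of the sketch is the strict sequential hand‑off: each stage consumes and enriches the same `Paper` object, so swapping in real API calls would not change the pipeline's shape.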
The development builds on a rapid rise in AI‑assisted authorship: a 2025 study found that roughly 30% of published papers already contain LLM‑generated text, and authors who embraced the technology saw submission cycles shorten by 30–80%. PaperBot pushes the frontier from assistance to automation, promising to free researchers from “surrounding crap” and let them focus on core mathematics or experiments. If the model can reliably meet journal standards, the speed boost could reshape funding cycles, accelerate interdisciplinary collaboration, and lower barriers for scholars at under‑resourced institutions.
However, the prospect raises immediate ethical and practical questions. Automated drafting may erode the nuanced argumentation that distinguishes breakthrough work, while AI reviewers could inherit biases from training data, potentially amplifying “deceptive alignment” issues highlighted in recent Anthropic research. Publishers are already drafting policies on AI‑generated content, and detection tools are being refined to flag wholly synthetic submissions.
What to watch next: the consortium plans a larger field trial at the upcoming NeurIPS 2026 conference, where PaperBot will submit a blind set of papers alongside human authors. Simultaneously, major journals such as Nature and IEEE are convening advisory panels to decide whether AI‑only peer review can meet existing standards. The outcome will signal whether fully autonomous scholarly publishing is a near‑future reality or a cautionary tale for the research ecosystem.