Martin Varsavsky (@martinvars) on X
Martin Varsavsky, the serial entrepreneur behind Jazztel and several AI‑focused ventures, took to X on Thursday to argue that large language models (LLMs) could soon move beyond routine automation and become genuine engines of scientific discovery. In a terse post, he wrote that if a model can “reconstruct a paradigm shift from pre‑discovery data,” it would be capable of generating new hypotheses rather than merely recognizing existing patterns. The post, which links to a longer thread, cites recent experiments in which LLMs have suggested viable molecular structures and identified overlooked correlations in climate datasets.
The claim taps into a growing chorus of researchers who see generative AI as a partner in hypothesis formation. DeepMind’s AlphaFold has shown that AI can predict protein structures with unprecedented accuracy, while tools such as IBM’s RoboRXN and Meta’s “Science‑LLM” have begun drafting experimental designs. Varsavsky’s emphasis on “new hypothesis generation” signals a shift from using LLMs as data‑retrieval assistants to treating them as creative collaborators that can propose testable theories from raw, unlabelled archives.
Why it matters: the stakes are twofold. First, the ability to extrapolate from pre‑discovery data could accelerate breakthroughs in fields where experimental cycles are costly, from drug development to renewable energy. Second, it raises questions about attribution, validation and the role of human expertise when AI proposes the next scientific conjecture. Academic institutions are already drafting policies for AI‑generated hypotheses, and funding agencies are earmarking grants for “AI‑augmented discovery” projects.
What to watch next are the concrete pilots that will put Varsavsky’s vision to the test. OpenAI, Google DeepMind and emerging European labs have announced collaborations with universities to embed LLMs in laboratory workflows. The first peer‑reviewed papers citing AI‑originated hypotheses are expected by late 2026, and their reception will likely shape regulatory and ethical frameworks for AI‑driven science.