The machines are fine. I'm worried about us.
Source: Mastodon
A post on ergosphere.blog, titled “The machines are fine. I’m worried about us,” has sparked a fresh debate about the human side of the AI surge. The author, a senior researcher at the University of Copenhagen’s AI Ethics Lab, argues that the rapid rollout of large language models (LLMs) masks a deeper vulnerability: societies are skipping the foundational “first five years” of learning that enable people to navigate the “next twenty” of increasingly sophisticated AI tools.
The piece illustrates the point with a thought experiment involving two fictional students, Alice and Bob. After a year of intensive AI‑assisted study, Alice can dissect a novel research paper and follow its argument, while Bob, who relied on surface‑level prompts, remains unable to critically assess the same material. The author concludes that the machines themselves are not the threat; the threat lies in a generation that may lack the deep analytical skills needed to question, verify, and responsibly deploy AI outputs.
The warning is timely. As LLMs move from research labs into everyday workflows—drafting legal contracts, generating scientific summaries, and even shaping public policy—the gap between AI capability and human expertise could widen, increasing the risk of misinformed decisions, regulatory capture, and erosion of trust in institutions. The argument aligns with recent concerns raised at the Nordic AI Summit, where policymakers warned that AI literacy must keep pace with model performance.
Looking ahead, the conversation is likely to shift toward concrete measures. The European Commission’s upcoming AI Act revision includes a proposal for mandatory AI‑fundamental‑literacy curricula in secondary schools, and the Nordic Council is set to publish a white paper on “AI‑ready education” later this year. Observers will also watch for pilot programs in Denmark and Sweden that embed critical‑thinking modules into university AI courses, testing whether early‑stage learning can indeed safeguard the next two decades of AI integration.