A haunting technical breakdown of why we're building a world powered by "bulls*it machines"
Source: Mastodon
Kyle Kingsbury, the software‑engineer‑turned‑AI‑skeptic behind the aphyr.com blog, has released a stark new essay titled *The Future of Everything Is Lies, I Guess*. The 45‑page PDF, posted on 18 April, dissects how the industry’s obsession with ever‑larger language models and “no‑code” AI builders has produced what Kingsbury calls “bulls*it machines” – systems that appear intelligent but are fundamentally driven by over‑fitted benchmarks, noisy data pipelines and opaque optimisation tricks. He coins the term “slop” for the low‑quality, uncurated data that now fuels most commercial AI services, warning that when slop dominates, reliability collapses and the technology’s promised benefits evaporate.
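The "slop" argument is about data quality, not model size: when uncurated text floods a training pipeline, repetition and gibberish crowd out signal. As a toy illustration only (not a method from the essay, and with arbitrary thresholds), a crude pre-training filter might look like this:

```python
def slop_score(text: str) -> float:
    """Return a crude 0..1 quality score; higher means less slop-like.

    Heuristics (purely illustrative): penalise heavy word repetition and
    implausible average word lengths. Real curation pipelines use far
    richer signals (dedup, language ID, perplexity, provenance).
    """
    words = text.split()
    if not words:
        return 0.0
    unique_ratio = len(set(words)) / len(words)        # repetition penalty
    avg_len = sum(len(w) for w in words) / len(words)  # gibberish penalty
    length_ok = 1.0 if 2.0 <= avg_len <= 12.0 else 0.5
    return unique_ratio * length_ok

corpus = [
    "buy now buy now buy now buy now",                        # spam-like slop
    "Distributed systems fail in partial, surprising ways.",  # substantive text
]
kept = [t for t in corpus if slop_score(t) > 0.6]
# kept contains only the second sentence
```

The point of the sketch is proportionality: if most of the incoming corpus scores like the first string, no downstream model choice recovers the lost signal.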
The analysis matters because it challenges the prevailing narrative that scaling model size alone guarantees progress. Kingsbury points to concrete failures in recent benchmark suites – such as the MemPalace "LongMemEval" test, where the reported score fell from 100 % to 96.6 % after a targeted fix, exposing earlier over‑fitting – and argues that similar weaknesses lurk across the AI stack, from data collection to deployment. For Nordic AI startups that rely heavily on third‑party APIs and low‑code platforms, the essay raises immediate questions about product robustness, liability and the long‑term viability of a market built on shaky foundations.
What to watch next are the reactions from the major AI labs and the European Commission's upcoming AI‑risk regulations. If Kingsbury's critique gains traction, we may see a push for stricter benchmark auditing, transparent data provenance and a revival of "small‑model" research that prioritises interpretability over raw scale. The Nordic AI community is already debating whether to double down on open‑source alternatives or to lobby for clearer industry standards – a debate that could reshape the region's AI landscape in the months ahead.