Stop trying to write magic incantations for an #llm
By Tony Sullivan | Source: Mastodon
Tony Sullivan’s latest Mastodon post, “Stop trying to write magic incantations for an #llm,” is quickly becoming a touchstone for developers wrestling with the hype‑driven approach to large language models. In a terse, three‑paragraph manifesto, Sullivan urges teams to treat LLM‑powered components the same way they treat any open‑source library: start with a clear README, publish contribution guidelines, enforce a style guide, and automate quality checks with linters and tests. He argues that the current “prompt‑as‑spell” mindset—where a cleverly worded prompt is expected to conjure flawless code—ignores the engineering discipline that keeps software reliable at scale.
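To make the prescription concrete, here is a minimal sketch of what an automated prompt‑regression check might look like: prompts and their expected properties live in version control and run in CI like any other test. Everything here is illustrative — `call_llm` is a hypothetical stand‑in (stubbed for reproducibility), and `check_prompt` is not from Sullivan’s post or any specific toolkit.

```python
def call_llm(prompt: str) -> str:
    # Stub: a deterministic fake model so the check is reproducible in CI.
    # A real suite would call an actual model client here (assumption).
    canned = {
        "Summarise: the build failed because tests timed out.":
            "The build failed: tests timed out.",
    }
    return canned.get(prompt, "")

def check_prompt(prompt: str, must_contain: list[str], max_words: int) -> bool:
    """Assert basic, version-controlled expectations on a prompt's output."""
    output = call_llm(prompt)
    return (
        all(term.lower() in output.lower() for term in must_contain)
        and len(output.split()) <= max_words
    )

# Each case is reviewed and versioned like any other test fixture.
assert check_prompt(
    "Summarise: the build failed because tests timed out.",
    must_contain=["failed", "timed out"],
    max_words=20,
)
```

The point is less the assertion style than the workflow: when a prompt change breaks an expectation, the failure is caught in review rather than in production.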
The shift matters because enterprises are now embedding LLMs in production pipelines for code generation, documentation, and even customer support. Early adopters that rely on ad‑hoc prompts are already reporting brittle outputs, hidden biases, and costly rollbacks. By framing LLM integration as a software engineering problem, Sullivan’s call‑to‑action pushes the industry toward reproducible, auditable practices that can be version‑controlled and peer‑reviewed. It also dovetails with the broader movement to professionalise AI development, echoing recent discussions about “hand‑off” protocols for AI agents and the need for dedicated SDKs that respect resource constraints.
What to watch next: several open‑source projects, such as the LangChain community and the emerging LLM‑Ops toolkits, are rolling out scaffolding that includes README templates, contribution checklists, and automated prompt‑testing suites. Standards bodies are expected to publish the first draft of an “LLM Engineering” style guide later this year. If Sullivan’s prescription gains traction, the next wave of AI‑augmented products will look less like experimental magic tricks and more like rigorously engineered software.