gilest.org: AI and the human voice
Source: Mastodon
A post on gilest.org has ignited fresh debate over the limits of large language models, arguing that today’s AI‑generated prose is “rubbish” because it lacks a genuine human voice. The author, known on X as @gilest, argues that most output feels “dull, derivative and indistinguishable from a thousand other texts,” a criticism that resonated widely after the piece was shared by several AI‑ethics commentators.
The observation matters because it surfaces a tension that has been building since the rollout of conversational agents capable of producing fluent copy at scale. While tools such as ChatGPT, Claude and Gemini have transformed newsroom workflows, they also risk homogenising style and eroding the subtle cues—tone, rhythm, cultural reference—that signal authorship. Earlier this month we reported on how voice‑based agents struggle with hallucinations, and on the rise of AI‑generated overviews that spread misinformation at unprecedented levels. Gilest’s critique adds a cultural dimension: if the “human element” disappears, the very credibility of AI‑assisted content could be called into question, especially in sectors that rely on trust, such as journalism, education and public policy.
What to watch next is whether developers respond with models that explicitly encode stylistic diversity or with tools that foreground human editing. OpenAI’s recent “Custom Voice” beta and Anthropic’s “persona‑driven” prompting experiments suggest a move toward more personalized output, but they remain in early testing. Meanwhile, publishers are experimenting with hybrid pipelines that combine AI drafting with mandatory human review, a practice that could become industry standard if the backlash against bland, generic text grows louder. The gilest.org post may thus be a catalyst for a shift from “AI‑first” to “human‑first” content strategies across the Nordic tech landscape.