The Ethics of Manipulation (Stanford Encyclopedia of Philosophy)
Source: Mastodon
Robert Noggle, a professor of philosophy at Central Michigan University, has updated the Stanford Encyclopedia of Philosophy’s entry “The Ethics of Manipulation.” The revision, published on the SEP’s open‑access platform, extends the discussion of manipulation beyond its classic political and commercial contexts to emerging concerns about artificial‑intelligence systems that nudge, persuade, or otherwise shape human decisions without transparent consent.
The update matters because the SEP is a go‑to reference for scholars, policymakers and technologists seeking rigorous definitions of ethical concepts. By foregrounding AI‑driven influence—often described in the media as “sycophantic” or “coercive”—the entry supplies a shared vocabulary for debates over algorithmic persuasion, recommender‑system design, and the line between benign personalisation and manipulative exploitation. The timing is striking: not long after Encyclopedia Britannica and Merriam‑Webster sued Perplexity AI over alleged misuse of their content, regulators have begun probing whether large language models can be weaponised to steer public opinion or consumer behaviour. Noggle’s expanded treatment of autonomy, coercion and free will therefore offers a philosophical scaffold for forthcoming legislation and corporate governance frameworks.
What to watch next is the ripple effect across the AI‑ethics ecosystem. Academic conferences on moral psychology and AI alignment are likely to cite the revised entry, while think‑tanks may fold its distinctions into policy briefs on “transparent AI” and “informed consent” for digital interactions. Legal scholars could also lean on the SEP’s definitions when arguing that manipulative AI practices amount to unfair trade or consumer‑protection violations. As the conversation moves from abstract theory to concrete regulation, the updated entry will serve as a reference point for anyone grappling with the moral limits of machine‑mediated influence.