I’m sorry … but your AI isn’t worth my privacy
Source: Mastodon | Original article
A coalition of consumer‑rights groups in Sweden, Norway and Denmark has launched a public campaign titled “Your AI isn’t worth my privacy”, urging users to stop feeding personal data to generative‑AI services. The initiative, announced on Tuesday, cites a new internal audit of popular chat‑bot platforms that found prompt histories, device identifiers and even inferred sentiment scores are routinely logged and shared with third‑party advertisers. Under the EU’s General Data Protection Regulation and the forthcoming AI Act, such practices could constitute unlawful processing unless users give explicit, informed consent.
The campaign’s organizers filed a petition with the European Commission demanding tighter enforcement of data‑minimisation rules and mandatory opt‑out mechanisms for all AI‑driven products sold in the Nordic market. They also call for a “privacy‑by‑design” certification that would let users verify whether a service stores or discards their inputs. The move follows a wave of anxiety we reported on 8 April, when a senior editor confessed that “I’m now worried about AI” after a personal experiment with ChatGPT revealed unexpected data retention. It also echoes concerns raised in recent analyses that up to 40 % of European AI startups may be overstating their use of genuine machine‑learning models, blurring the line between true AI and simple scripted tools.
The stakes are twofold. First, the Nordic region has long championed strong privacy standards, and a breach of trust could slow AI adoption in health, finance and public services. Second, the backlash threatens the data‑driven business models that underpin many AI startups, potentially redirecting investment toward privacy‑preserving architectures such as on‑device inference and federated learning.
Watch for the European Commission’s response, expected in the coming weeks, and for any amendments to the AI Act that could impose stricter audit obligations. Tech firms are already rolling out “no‑log” modes and transparent data‑usage dashboards, but whether these measures will satisfy regulators and skeptical users remains to be seen.