The Glaze Project (including Glaze, Nightshade, WebGlaze and others) is a research effort that develops tools to protect artists' work from unauthorized use in generative‑AI training.
Source: Mastodon
The University of Chicago’s Glaze Project announced a major upgrade to its suite of protective tools on Tuesday, rolling out Glaze 2.0, Nightshade 1.5 and a public beta of WebGlaze. The three components work together to make artworks useless as training data for generative‑AI models while looking essentially unchanged to human eyes. Glaze 2.0 refines the original algorithm, which computes the smallest pixel‑level perturbations needed to “confuse” a model’s feature extractor; Nightshade 1.5 adds a new “poison‑image” mode that deliberately skews an AI’s internal representation, so that, to the model, a fruit bowl reads as a kaleidoscope of nail‑polish bottles. WebGlaze provides a browser‑based interface, letting artists apply the protection without a high‑end GPU.
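The core cloaking idea can be sketched in a few lines: search for a small pixel perturbation, bounded so humans barely notice it, that pulls the image's extracted features toward a decoy. The sketch below is illustrative only. It uses a toy linear map as a stand-in for a real model's image encoder, and all names and numbers are assumptions, not the Glaze Project's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "feature extractor": a fixed linear map standing in for a deep
# image encoder (real systems use neural networks, not a matrix).
W = rng.normal(size=(8, 64))
extract = lambda x: W @ x

def cloak(image, target_feats, eps=0.05, steps=300, lr=0.01):
    """Find a perturbation delta with |delta|_inf <= eps that pulls the
    extracted features toward target_feats: gradient descent on the
    squared feature distance, clipping delta to the budget each step."""
    delta = np.zeros_like(image)
    for _ in range(steps):
        residual = W @ (image + delta) - target_feats  # feature-space error
        grad = 2.0 * W.T @ residual                    # d(loss)/d(delta)
        delta = np.clip(delta - lr * grad, -eps, eps)  # stay imperceptible
    return delta

image = rng.uniform(0.0, 1.0, size=64)             # flattened "artwork"
target = extract(rng.uniform(0.0, 1.0, size=64))   # features of a decoy style

delta = cloak(image, target)
before = np.linalg.norm(extract(image) - target)
after = np.linalg.norm(extract(image + delta) - target)
```

The constraint is the whole trick: without the `np.clip` budget, the optimizer would happily rewrite the image; with it, the change stays below a perceptual threshold while the feature representation moves substantially.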
The release comes as the art‑community backlash against non‑consensual AI training intensifies. High‑profile lawsuits against Stability AI and Midjourney have highlighted the legal gray area surrounding data scraping, and many creators fear that once an image is indexed, it can be reused indefinitely. By embedding a defensive layer at the source, the Glaze Project aims to shift the power balance back to individual artists and force AI developers to seek explicit licenses. The team also disclosed that a June‑2025 security paper from Zurich researchers exposed a method to reverse‑engineer the original Glaze, prompting the current hardening effort.
What to watch next are three fronts. First, adoption rates among visual‑arts collectives will reveal whether the tools can scale beyond early‑adopter labs. Second, reactions from major AI providers such as OpenAI and Stability AI could shape future licensing negotiations. Finally, regulators in the EU and the United States are drafting AI‑training transparency rules; the Glaze Project’s open‑source approach may become a benchmark for compliance. If the upgrades hold up against emerging attacks, they could become the de facto standard for protecting creative work in the age of generative AI.