OpenAI says its new model GPT-2 is too dangerous to release (2019)
Source: Hacker News
OpenAI’s 2019 announcement that its then‑latest language model, GPT‑2, was “too dangerous to release” resurfaced this week as the company unveiled two new open‑weight models, GPT‑OSS 120B and GPT‑OSS 20B. The 2019 decision to withhold the full 1.5‑billion‑parameter model marked a watershed moment for the AI community: OpenAI feared it could be weaponised for disinformation, phishing and automated propaganda, and instead opted for a staged release, publishing the full weights only months later. The move sparked a global debate on the balance between scientific openness and societal risk, prompting governments and industry groups to draft early AI‑safety guidelines.
The controversy still matters. GPT‑2 demonstrated that even a “mid‑size” transformer could generate coherent, persuasive text that fooled human readers, foreshadowing the capabilities of today’s far larger systems. By initially keeping the model private, OpenAI set a precedent for responsible disclosure, yet it also fueled a black market for leaked weights and spurred rival labs to race ahead with less restrained releases. That tension between openness and control has shaped policy discussions ever since, influencing recent EU AI Act drafts and the formation of the Nordic AI Safety Forum.
The release of GPT‑OSS 120B and 20B signals a strategic pivot. Licensed under Apache 2.0, they are the first openly released model weights from OpenAI since the GPT‑2 episode, suggesting the company now believes the ecosystem can handle larger, more powerful models responsibly. Observers will watch how the research community adopts the new weights, whether misuse spikes, and how regulators respond to a renewed wave of open‑source AI. The next litmus test will be OpenAI’s handling of GPT‑5, slated for later this year, and whether the lessons of GPT‑2 translate into concrete safeguards for the next generation of generative models.