Google releases Gemma 4 open models
Tags: gemini, gemma, google
Source: HN | Original article
Google unveiled the Gemma 4 family on Tuesday, delivering four open‑source large language models under an Apache 2.0 licence. The lineup comprises an “Effective 2B” (E2B), an “Effective 4B” (E4B), a 26‑billion‑parameter mixture‑of‑experts (MoE) model and a 31‑billion‑parameter dense version. All are built on the same research stack that powers Google’s Gemini 3 system, but the new licence gives developers unrestricted rights to modify, redistribute and commercialise the code.
The release marks the first major refresh of Google's open‑model programme in a year and a sharp departure from the proprietary "Gemma" licence used for earlier versions. By moving to Apache 2.0, Google signals a willingness to let the broader AI community shape the models, a stance that could accelerate experimentation in agentic‑AI workflows, academic research and start‑up product development. It also narrows the gap between the United States and China in the open‑model arena, where Chinese players such as DeepSeek and Alibaba's Qwen team have already made large, freely available models a cornerstone of their AI strategies.
Gemma 4 arrives as OpenAI contends with a wave of lawsuits while deploying a $122 billion infusion earmarked for rapid product rollout, underscoring the intensifying competition for developer mindshare. Google's decision to open‑source a model of this scale may force rivals to reconsider their own licensing policies and could spur a new wave of community‑driven benchmarking.
What to watch next: early performance comparisons against OpenAI’s GPT‑4o and Anthropic’s Claude 3, adoption rates on Google Cloud’s Vertex AI platform, and any follow‑up announcements about fine‑tuning tools or enterprise‑grade support. The speed with which the open‑source community builds on Gemma 4 will be a key barometer of Google’s influence in the rapidly evolving LLM landscape.