Google's Gemma 4 model goes fully open-source and unlocks powerful local AI - even on phones
deepmind gemma google open-source
Source: ZDNET on MSN
Google’s DeepMind division has released Gemma 4 as a fully open‑source model under the Apache 2.0 licence, extending the Gemma family beyond the research‑preview versions that sparked interest earlier this month. The new release adds offline, multimodal capabilities that run on everything from cloud servers to smartphones and Raspberry Pi boards, giving developers total control over edge and on‑premises deployments.
Gemma 4’s architecture blends sliding‑window local attention layers with a final global attention layer, a hybrid design that keeps memory use low while still handling long‑context tasks. Google stresses that the model undergoes the same infrastructure‑security protocols as its proprietary offerings, positioning it as a trusted foundation for enterprises and sovereign organisations that need transparent, auditable AI.
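To see why this hybrid lowers memory use, compare the attention masks involved. The sketch below is purely illustrative (the function names and window size are assumptions, not Gemma's actual implementation): a sliding‑window layer lets each token attend only to a fixed number of recent tokens, so the number of attended positions grows linearly with sequence length, while a full causal layer grows quadratically.

```python
def local_causal_mask(seq_len: int, window: int) -> list[list[bool]]:
    """Sliding-window causal mask: token i attends to itself and the
    previous (window - 1) tokens only."""
    return [[(j <= i) and (j > i - window) for j in range(seq_len)]
            for i in range(seq_len)]

def global_causal_mask(seq_len: int) -> list[list[bool]]:
    """Standard causal mask: token i attends to all tokens up to i."""
    return [[j <= i for j in range(seq_len)] for i in range(seq_len)]

local = local_causal_mask(8, window=3)
full = global_causal_mask(8)

# Count attended positions: local layers scale ~O(seq_len * window),
# full layers ~O(seq_len^2).
print(sum(map(sum, local)))  # 21 attended positions
print(sum(map(sum, full)))   # 36 attended positions
```

Stacking mostly local layers and finishing with one global layer, as the article describes, keeps per‑layer cost near the linear case while still giving the model one pass of full‑context mixing, which is what makes long contexts feasible on phones and single‑board computers.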
The move matters because it lowers the barrier to high‑performance AI on devices that cannot rely on constant internet connectivity. Nordic startups and public‑sector projects can now embed sophisticated language understanding without sending data to external clouds, easing compliance with privacy regulations such as the GDPR and cutting costs for deployments in remote areas. The open licence also invites community‑driven optimisation for local languages, a step that could accelerate Finnish, Swedish and Norwegian language services.
As we reported on 8 April, a community‑built Gemma‑4 variant was already circulating on platforms like SillyTavern, but the official open‑source launch provides a vetted, production‑ready baseline. The coming weeks will reveal how quickly the model is adopted in the Nordic AI ecosystem, whether benchmark scores match Google’s claims, and how competitors such as Meta’s Llama 3 and Apple’s on‑device models respond. Watch for announcements of fine‑tuned Gemma 4 versions targeting Scandinavian languages and for integration into upcoming edge‑AI hardware from regional chip makers.