I took the prompt generated by my #Moodle AIText question type and pasted it into the Edge Gallery
Source: Mastodon
A Moodle instructor took the prompt generated by the platform’s new **AIText** question type and ran it on a Google Pixel 7 running GrapheneOS, using the Edge Gallery app to invoke the **GoogleGemma‑4‑E2B‑it** model. After pasting the prompt into Edge Gallery, the instructor disabled all network connections, forcing the phone’s on‑device inference engine to produce the answer without reaching any external servers.
The experiment demonstrates that Moodle’s AI‑driven assessment tools can be decoupled from cloud APIs and executed locally on consumer hardware. By combining a privacy‑focused OS with an on‑device LLM, educators can offer AI‑assisted feedback while ensuring that student data never leaves the device. This addresses long‑standing concerns about data sovereignty, GDPR compliance, and the risk of exposing exam content to third‑party services. It also sidesteps the latency and cost issues that have hampered large‑scale adoption of cloud‑only LLMs in schools.
As we reported on 14 April, the **Anthropic Opus** model was already being trialled to re‑imagine Moodle’s gradebook, highlighting a broader push to embed generative AI deeper into the learning management system. The current offline test extends that trajectory, showing that the same prompt‑generation logic can feed a variety of models, from hosted APIs to edge‑optimized variants, without redesigning the Moodle plugin.
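The decoupling described above hinges on keeping prompt generation separate from the model backend. A minimal sketch of that idea, in Python rather than Moodle’s actual PHP plugin code, and with entirely hypothetical names (`ModelBackend`, `grade_feedback` and the callables are illustrative assumptions, not Moodle APIs):

```python
# Hypothetical sketch of a pluggable model backend. None of these
# names come from Moodle; they only illustrate how one prompt can
# feed either a hosted API or an on-device model.
from abc import ABC, abstractmethod
from typing import Callable

class ModelBackend(ABC):
    """Anything that can turn a prompt into a completion."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class CloudBackend(ModelBackend):
    """Wraps a callable that posts the prompt to a hosted API."""
    def __init__(self, send: Callable[[str], str]):
        self._send = send
    def complete(self, prompt: str) -> str:
        return self._send(prompt)

class OnDeviceBackend(ModelBackend):
    """Wraps a callable that runs a local inference engine."""
    def __init__(self, run_local: Callable[[str], str]):
        self._run_local = run_local
    def complete(self, prompt: str) -> str:
        return self._run_local(prompt)

def grade_feedback(prompt: str, backend: ModelBackend) -> str:
    # The prompt-generation side is unchanged; only the backend varies.
    return backend.complete(prompt)
```

Under this kind of separation, swapping the cloud API for an edge‑optimized model is a one‑line change at the call site, which is what makes the offline experiment possible without touching the question type itself.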
What to watch next: benchmark results comparing Gemma‑4’s accuracy and speed against cloud‑based counterparts; updates from the Edge Gallery team on model support and battery impact; and Moodle’s roadmap for native offline‑LLM integration. If the approach scales, we may see a new class of “privacy‑first” AI tools in classrooms across the Nordics, prompting policymakers to revisit guidelines on AI use in education.