AI energy crisis deepens: New breakthrough cuts training power use by 100 times
Source: Asianet Newsable on MSN
A team of researchers from the University of Cambridge and the AI startup Synapse Labs announced a neuro‑symbolic model that cuts energy consumption by a factor of 100 compared with conventional large language models. The hybrid architecture pairs a lightweight symbolic reasoning layer with a compact neural core, allowing the model to be trained on a single GPU using just 1% of the power a comparable conventional model typically requires. At inference time the system draws only 5% of the energy that standard transformer‑based models need, while delivering a modest accuracy gain on benchmark tasks such as commonsense reasoning and factual retrieval.
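To put the reported ratios in concrete terms, the sketch below works through the arithmetic. The 1% (training) and 5% (inference) figures come from the article; the baseline energy numbers are purely hypothetical placeholders chosen for illustration, not values from the announcement.

```python
# Illustrative arithmetic only. The ratios (1% training, 5% inference)
# are the article's reported figures; the baseline magnitudes below are
# hypothetical placeholders, not measurements from the research.

BASELINE_TRAINING_KWH = 1_000_000  # hypothetical: energy for one conventional training run
BASELINE_INFERENCE_WH = 3.0        # hypothetical: energy per query for a transformer model

hybrid_training_kwh = BASELINE_TRAINING_KWH * 0.01  # 1% of baseline -> 100x reduction
hybrid_inference_wh = BASELINE_INFERENCE_WH * 0.05  # 5% of baseline -> 20x reduction

print(f"Training:  {BASELINE_TRAINING_KWH:,.0f} kWh -> {hybrid_training_kwh:,.0f} kWh")
print(f"Inference: {BASELINE_INFERENCE_WH:.2f} Wh/query -> {hybrid_inference_wh:.2f} Wh/query")
```

Note the asymmetry: the headline "100 times" figure applies to training, while inference sees a 20-fold reduction under the stated 5% draw.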
The breakthrough arrives at a moment when AI’s soaring demand for compute is straining global electricity grids. The International Energy Agency estimates that data centres and AI workloads together consumed roughly 415 TWh in 2024, a figure that is growing faster than any other sector of the digital economy. Shrinking the power envelope of training and serving could lower operational costs for cloud providers, curb the carbon footprint of AI services, and make it feasible to deploy sophisticated models at the edge, where power is scarce.
Industry observers will be watching whether the model can be scaled to the size of today’s flagship LLMs without sacrificing its efficiency gains. Early interest from Microsoft’s Azure AI team and Google Cloud suggests that cloud operators may integrate the approach into their next‑generation inference stacks. Parallel hardware research—such as the memristor‑based brain‑inspired chips unveiled last month—could further amplify the savings if the software and silicon converge. The next few months should reveal whether the neuro‑symbolic design becomes a new standard for sustainable AI or remains a niche solution for specialised applications.