RE: https://mastodon.ie/@HazelChu/116421262212777115
Source: Mastodon | Original article
A recent post on Mastodon has reignited the debate over the carbon footprint of large language models (LLMs). The thread, sparked by a link to a new European Commission Joint Research Centre report, cited figures that place the electricity consumption of the world's biggest AI models on par with the annual power use of small nations. In response, user Hazel Chu wrote, "If you need actual numbers from actual data centres to convince people that they're a plague we need to control," tagging #ai, #llm, #datacentres and #energy.
The report, released last week, aggregates publicly disclosed power‑usage data from more than 30 hyperscale facilities and adds estimates for the training runs of models such as GPT‑4, Claude 2 and LLaMA‑2. It concludes that training a single state‑of‑the‑art LLM can emit up to 600 tonnes of CO₂, while inference workloads across cloud providers now account for roughly 5 percent of global data‑centre electricity demand. The authors argue that without transparent accounting, policymakers lack the evidence needed to shape effective climate‑friendly AI regulations.
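The headline figure of this kind is typically derived by multiplying the energy a training run consumes by the carbon intensity of the grid that powers it. The sketch below shows that arithmetic; the 1,500 MWh and 400 kg CO₂/MWh inputs are illustrative assumptions, not values taken from the report.

```python
# Back-of-envelope estimate of training-run emissions.
# NOTE: the input numbers below are illustrative assumptions,
# not figures from the Joint Research Centre report.

def training_emissions_tonnes(energy_mwh: float,
                              grid_intensity_kg_per_mwh: float) -> float:
    """Emissions in tonnes of CO2.

    tonnes CO2 = energy (MWh) * grid carbon intensity (kg CO2 / MWh) / 1000
    """
    return energy_mwh * grid_intensity_kg_per_mwh / 1000.0

# A hypothetical 1,500 MWh training run on a grid averaging
# 400 kg CO2 per MWh works out to 600 tonnes of CO2 -- the same
# order of magnitude as the report's upper estimate.
print(training_emissions_tonnes(1500, 400))  # 600.0
```

The same formula, run in reverse, is how critics sanity-check disclosed figures: given a claimed emissions total and a known grid intensity, the implied energy draw falls out directly.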
The controversy matters because AI developers have long pointed to efficiency gains—hardware optimisation, model pruning and renewable‑energy contracts—as proof that the sector is self‑correcting. Critics, however, contend that the industry’s voluntary disclosures are fragmented and often omit the most power‑hungry training runs. If the European figures hold, the sector could face stricter emissions caps, mandatory reporting standards and possible carbon‑pricing mechanisms.
What to watch next: the European Union is expected to finalize its AI Act later this year, and the draft includes provisions requiring "high‑impact" AI systems to publish lifecycle energy reports. Meanwhile, major cloud providers have pledged to launch dashboards showing real‑time AI‑related power consumption. Industry groups such as the Green‑AI Alliance are also preparing a set of voluntary metrics that could become de facto standards if regulators move slowly. The coming months will reveal whether transparency initiatives can keep pace with the rapid scaling of LLMs, or whether stricter oversight will become inevitable.