Critical Security Flaw Discovered in Ollama Due to Unauthenticated Memory Leak
Source: Mastodon
Cyera researchers uncover critical memory leak in Ollama.
Cyera Research has uncovered a critical unauthenticated memory leak vulnerability in Ollama, a popular platform for running large language models locally. This bug, dubbed "Bleeding Llama," has a CVSS score of 9.3, indicating a severe risk to users' AI data. The vulnerability allows attackers to exploit Ollama's GGUF tensor parsing, potentially exposing sensitive information from over 300,000 deployments.
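Cyera has not published full exploit details here, but memory leaks in file parsers typically stem from trusting attacker-controlled size fields. As a purely illustrative sketch (not Ollama's actual code, which is written in Go, and not the real GGUF layout), the snippet below shows the class of bug: a record carries a claimed tensor byte length, and a parser that slices by that length without checking it against the real buffer size can be induced to over-read adjacent memory. Validating the claimed length closes that hole.

```python
import struct

def read_tensor_bytes(buf: bytes, offset: int, claimed_len: int) -> bytes:
    """Return claimed_len bytes at offset, refusing out-of-bounds reads.

    A parser that trusts an attacker-controlled length can be made to
    read (and leak) memory beyond the file's contents; checking the
    claimed length against the actual buffer size prevents that.
    """
    if claimed_len < 0 or offset < 0 or offset + claimed_len > len(buf):
        raise ValueError("tensor length exceeds buffer bounds")
    return buf[offset : offset + claimed_len]

# A well-formed record: 8-byte little-endian length, then the payload.
payload = b"weights!"
record = struct.pack("<Q", len(payload)) + payload
(claimed,) = struct.unpack_from("<Q", record, 0)
assert read_tensor_bytes(record, 8, claimed) == b"weights!"

# A malicious record claims far more data than the file contains.
bad = struct.pack("<Q", 1 << 32) + b"tiny"
(claimed,) = struct.unpack_from("<Q", bad, 0)
try:
    read_tensor_bytes(bad, 8, claimed)
    raise AssertionError("oversized read was not rejected")
except ValueError:
    print("rejected oversized tensor claim")
```

The defensive check is the whole point: without the bounds test, the slice would simply return whatever bytes happen to sit past the file data, which is the shape of an information-leak bug.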
This discovery matters because Ollama is widely used for local large language model inference, and a successful exploit could have serious consequences for users' data security. As we reported on May 4, Ollama had just released version 0.23.0, but that update does not appear to address the memory leak. Because the vulnerability is unauthenticated, attackers need no login credentials to exploit it, which makes it particularly concerning.
We can expect Ollama to release a patch for the vulnerability, and users should update their deployments as soon as one is available. Cyera Research has already published its findings, and the security community is likely to scrutinize Ollama's response. In the meantime, users should monitor the situation closely and take immediate steps to protect their AI data from potential exploitation.