Ollama addresses remote execution flaw following Wiz discovery
As generative artificial intelligence moves into the mainstream, security issues surrounding large language models and the services that support them are growing along with it.
A new report today from Wiz Inc. details one such vulnerability discovered in Ollama, the open-source infrastructure project designed to simplify the packaging and deployment of AI models. However, in a welcome twist, those behind the project responded promptly to address it.
Inspired by Docker, Ollama gives developers, data scientists and organizations a straightforward way to package, deploy and run AI models locally. The project is popular among open-source communities and enterprises that use AI for everything from research and development to production workloads.
However, as detailed in Wiz’s report, Ollama was found to have a remote code execution vulnerability, designated CVE-2024-37032. Dubbed “Probllama,” the vulnerability allows an attacker to compromise an Ollama application programming interface server by sending specially crafted HTTP requests to it.
The Probllama vulnerability operates through a mechanism known as path traversal, exploiting insufficient input validation in the “/api/pull” API endpoint. By serving a malicious model manifest whose digest field contains a path traversal payload, an attacker can trick the server into overwriting arbitrary files on the system.
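To illustrate the kind of check that was missing, here is a minimal, hypothetical sketch in Python (not Ollama’s actual Go code): a well-formed layer digest should match a strict “sha256:” plus 64-hex-character pattern, so a traversal payload in that field would be rejected before it is ever used to build a file path.

```python
import re

# Hypothetical sketch of the missing validation, not Ollama's actual code.
# A well-formed digest looks like "sha256:" followed by 64 hex characters;
# anything else, including a path traversal payload, should be rejected
# before the value is used to build a path on disk.
DIGEST_RE = re.compile(r"^sha256:[a-f0-9]{64}$")

def is_safe_digest(digest: str) -> bool:
    """Return True only for digests that match the strict expected format."""
    return DIGEST_RE.fullmatch(digest) is not None

# A legitimate digest passes, a traversal payload does not.
assert is_safe_digest("sha256:" + "a" * 64)
assert not is_safe_digest("../../../../etc/ld.so.preload")
```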
In Docker deployments, where the server runs with root privileges, the vulnerability can be exploited to gain full remote code execution. By corrupting crucial system files, such as “/etc/ld.so.preload,” attackers are able to place malicious code that gets executed whenever a new process starts, giving them control over the server and the ability to compromise the AI models and applications hosted on it.
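For defenders, the relevant detail is that “/etc/ld.so.preload” lists shared libraries the dynamic loader injects into every new process, and on a clean Linux system the file is usually absent or empty. The rough detection sketch below, an illustrative example rather than any vendor’s tooling, simply flags unexpected entries for review.

```python
from pathlib import Path

# Rough detection sketch, not vendor tooling: /etc/ld.so.preload lists shared
# libraries that the dynamic loader injects into every new process. On most
# clean Linux systems the file is absent or empty, so any entry deserves review.
PRELOAD = Path("/etc/ld.so.preload")

if not PRELOAD.exists():
    print("/etc/ld.so.preload not present (expected on a clean system)")
else:
    entries = [line.strip() for line in PRELOAD.read_text().splitlines() if line.strip()]
    if not entries:
        print("/etc/ld.so.preload exists but is empty")
    else:
        print("Preloaded libraries found; review each entry:")
        for lib in entries:
            print(f"  {lib}")
```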
Wiz’s researchers found that many Ollama instances with the vulnerability were exposed to the internet, posing a significant security risk. Fortunately, though, the Ollama team’s response was highly impressive.
Ollama responded around four hours after Wiz informed it of the vulnerability on May 5 and immediately committed to creating a fix. The fix was released three days later, on May 8 — at this point, big tech companies should be taking notes.
Though the fix has been out for over a month and a half, Wiz’s researchers are advising security teams to make sure they’re running patched versions of Ollama — those released May 8 or later — to protect against the vulnerability.
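As a rough way to verify a deployment, the sketch below queries the Ollama version endpoint on its default port and compares the result against 0.1.34, the release generally cited as containing the fix. The endpoint path, port and simple version parsing are assumptions for illustration, not official upgrade guidance.

```python
import json
import urllib.request

# Illustrative check, assuming the instance exposes Ollama's /api/version
# endpoint on the default port 11434 and that 0.1.34 is the first patched
# release; adjust the URL and threshold for your own environment.
OLLAMA_URL = "http://localhost:11434/api/version"
PATCHED = (0, 1, 34)

with urllib.request.urlopen(OLLAMA_URL, timeout=5) as resp:
    version = json.load(resp)["version"]

# Naive parse: strip any pre-release suffix and compare the numeric components.
parts = tuple(int(p) for p in version.split("-")[0].split(".")[:3])
status = "patched" if parts >= PATCHED else "vulnerable, upgrade to 0.1.34 or later"
print(f"Ollama {version}: {status}")
```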
Image: Ollama