Cybersecurity researchers have uncovered critical remote code execution vulnerabilities impacting major artificial intelligence (AI) inference engines, including those from Meta, Nvidia, Microsoft, and open-source PyTorch projects such as vLLM and SGLang.
"These vulnerabilities all traced back to the same root cause: the overlooked unsafe use of ZeroMQ (ZMQ) and Python's pickle deserialization," Oligo Security researcher Avi Lumelsky said in a report published Thursday.
At its core, the issue stems from a pattern Oligo has dubbed ShadowMQ, in which insecure deserialization logic has propagated to several projects as a result of code reuse.
The root cause is a vulnerability in Meta's Llama large language model (LLM) framework (CVE-2024-50050, CVSS score: 6.3), which allowed attackers to execute arbitrary code by sending malicious pickled objects to an exposed ZeroMQ socket. Meta patched the flaw by swapping the pickle-based serialization for a type-safe JSON implementation, but the same unpickle-over-ZMQ pattern had already been copied into other inference engines.
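The pattern itself is only a few lines of code. The sketch below (with hypothetical function and address names, not code taken from any of the affected projects) shows how feeding ZeroMQ frames straight into pickle via pyzmq's recv_pyobj() hands code execution to anyone who can reach the socket, and contrasts it with a JSON-based receiver that cannot carry executable objects.

import json
import zmq

def handle(task) -> None:
    # Placeholder for whatever the inference worker would do with the task.
    print("received task:", task)

def vulnerable_worker(bind_addr: str = "tcp://0.0.0.0:5555") -> None:
    # Anti-pattern: recv_pyobj() is pickle.loads() under the hood, so a
    # crafted payload with a malicious __reduce__ runs code on receipt.
    ctx = zmq.Context()
    sock = ctx.socket(zmq.PULL)
    sock.bind(bind_addr)          # exposed to the network
    while True:
        task = sock.recv_pyobj()  # attacker-controlled bytes -> pickle.loads()
        handle(task)

def safer_worker(bind_addr: str = "tcp://127.0.0.1:5555") -> None:
    # Safer variant: parse plain JSON and keep the socket on localhost;
    # JSON can describe data but cannot smuggle in executable objects.
    ctx = zmq.Context()
    sock = ctx.socket(zmq.PULL)
    sock.bind(bind_addr)
    while True:
        task = json.loads(sock.recv())
        handle(task)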
