How many malicious docs does it take to poison an LLM? Far fewer than you might think, Anthropic warns



Posted on Wed Oct 15 2025 | 1:43 am


Anthropic’s study shows just 250 malicious documents is enough to poison massive AI models.




Search
Side Widget
You can put anything you want inside of these side widgets. They are easy to use, and feature the new Bootstrap 4 card containers!