
Adversarial Attacks on RAG Systems: Poisoning the Knowledge Base
As we previously went through, a common pattern when implementing models into systems is to use RAG (retrieval augmented generation) by using domain specific data with the GenAI models. But what happens if the data source is compromised or poisoned? In this post we'll explore RAG poisoning attacks, their real-world implications and mitigation strategies to secure your AI implementations.