Detecting hallucination in RAG


I recently started to favor Graph RAGs over vector store-backed ones.

No offense to vector databases; they work fantastically in most cases. The caveat is that you need explicit mentions in the text to retrieve the correct context.

We have workarounds for that, and I’ve covered a few in my previous posts.

For instance, ColBERT and multi-representation indexing are helpful retrieval techniques to consider when building RAG apps.

GraphRAGs suffer less from retrieval issues (I didn’t say they don’t suffer at all). Whenever the retrieval requires some reasoning, GraphRAG performs extraordinarily well.

Providing relevant context addresses a key problem in LLM-based applications: hallucination. However, it does not eliminate hallucinations altogether.

When you can’t fix something, you measure it. And that’s the focus of this post. In other words, how do we evaluate RAG apps?
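To make that concrete, here is a minimal, self-contained sketch of one way to flag potentially unsupported statements: split the generated answer into sentences and score each one by how much of its vocabulary appears in the retrieved context. The function name, threshold, and token-overlap heuristic are my own illustrative assumptions, not a production method; real evaluators typically use LLM-as-judge or NLI-style checks instead.

```python
import re

def groundedness_report(answer: str, retrieved_context: str, threshold: float = 0.3):
    """Flag answer sentences that share few tokens with the retrieved context.

    A very crude proxy for hallucination detection: sentences whose word
    overlap with the context falls below `threshold` are marked unsupported.
    """
    context_tokens = set(re.findall(r"[a-z0-9]+", retrieved_context.lower()))
    report = []
    # Split the answer into rough sentences and score each one.
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        tokens = set(re.findall(r"[a-z0-9]+", sentence.lower()))
        if not tokens:
            continue
        overlap = len(tokens & context_tokens) / len(tokens)
        report.append((sentence, round(overlap, 2), overlap >= threshold))
    return report


if __name__ == "__main__":
    context = "GraphRAG builds a knowledge graph from documents and retrieves connected entities."
    answer = (
        "GraphRAG retrieves connected entities from a knowledge graph. "
        "It was invented in 1987 by a team of astronomers."
    )
    for sentence, score, supported in groundedness_report(answer, context):
        flag = "OK " if supported else "??? "
        print(f"{flag}({score}) {sentence}")
```

Running this toy example flags the fabricated second sentence while passing the grounded one, which is the basic shape of any hallucination metric: compare what the model said against what it was given.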
