Generate and read: Oh no they didn't

2025/05/21


This is a free sample of the content available on my new subscription service, Matthew Explains.

It's a deep dive into the paper "Generate rather than Retrieve: Large Language Models are Strong Context Generators" by Yu et al. (the arXiv version of a paper from ICLR 2023). Whereas RAG systems attempt to solve the problem of factually incorrect answers from LLMs by having the LLM look up truthful passages in Wikipedia before answering, this paper attempts to solve the problems of Wikipedia by removing it from the system entirely: the LLM generates its own background document from what it already knows, then reads that document to answer. I talk about how that works.
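To make the contrast concrete, here's a minimal Python sketch of the two pipelines. This is not the paper's code: the `llm` and `search_wikipedia` callables, the prompt wording, and the stub model are all placeholders I've invented for illustration.

```python
from typing import Callable

def retrieve_then_read(question: str, llm: Callable[[str], str],
                       search_wikipedia: Callable[[str], str]) -> str:
    """Classic RAG: fetch a passage from an external corpus, then answer."""
    passage = search_wikipedia(question)
    return llm(f"Passage: {passage}\n\nQuestion: {question}\nAnswer:")

def generate_then_read(question: str, llm: Callable[[str], str]) -> str:
    """GenRead-style: the LLM writes its own background document first,
    then answers the question conditioned on that generated document."""
    document = llm(f"Generate a background document to answer the question: {question}")
    return llm(f"Passage: {document}\n\nQuestion: {question}\nAnswer:")

if __name__ == "__main__":
    # Stub model so the sketch runs without any API; swap in a real call.
    def stub_llm(prompt: str) -> str:
        return f"[model output for: {prompt[:40]}...]"

    print(generate_then_read("Who wrote The Name of the Rose?", stub_llm))
```

The point of the sketch: the only structural change is that the retrieval call is replaced by a second LLM call, so the "corpus" being consulted is the model's own parametric memory rather than Wikipedia.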
