Vectara analyzed major language models, testing their summarization accuracy on 1,000 texts, and released the results. The benchmark measures how often an LLM introduces hallucinations when summarizing a document. It's another reason to hire a writer and fact-checker for your AI output and avoid embarrassment like this, this, this, or this. Updated 11/1/23