The New York Times issued an editor's note revealing that a reporter had used an AI-generated summary that was incorrectly presented as a direct quote from Canadian Conservative leader Pierre Poilievre. The AI had fabricated a remark about 'turncoats' that Poilievre never said, prompting a correction. The incident highlights ongoing concerns about AI hallucination and the importance of fact-checking AI-generated content in journalism.
Background
AI language models are increasingly used in newsrooms, but they are known to generate false or misleading information, a phenomenon known as 'hallucination'.
- Source: Simon Willison
- Published: May 11, 2026 at 07:58 AM
- Score: 7.0 / 10