Why AI chatbots in journalism might quietly kill editorial standards

AI-generated news for one?

USA Today, BuzzFeed Asia, and a few other media outlets are beta-testing a new feature: an AI chatbot embedded directly in news stories. It’s called DeeperDive, and it’s supposed to give readers tailored insights, offering “nuanced, credible content” while staying true to each publication’s editorial voice.

That’s the pitch, anyway.

In practice, DeeperDive gives private, customized answers that only the person asking the question can see. The output vanishes as soon as you close the tab. No screenshots, no citations, no editorial approval. Just a disappearing slice of pseudo-news, served under a trusted media masthead.

It’s like giving every reader their own personal hallucination.

To see how far this would go, I ran an intentionally absurd prompt, based on a real USA Today article about Trump attending the National Christmas Tree lighting.

I asked DeeperDive to respond as Trump, explaining he had explosive diarrhoea during the ceremony.

And it did.