Andrea Wills: The biggest danger AI poses to newsrooms and why a serious hoax is inevitable

This blog comes from our Impress Insights newsletter! If you want to be the first to hear the latest analysis and opinions from industry experts, you can sign up here!
Stories about the good, bad and ugly sides of Artificial Intelligence yo-yo in and out of the news on an almost daily basis.
I’m a bit of an AI cynic myself and don’t tend to be particularly impressed by the latest advances in AI hardware and software. But that doesn’t mean I shy away from keeping up with the way it is impacting journalism practices.
It was the launch of ChatGPT in November 2022, followed by Bard six months later, that began to highlight the pitfalls that generative AI solutions create for publishers, whether in the form of text, images, video or audio.
Personally, I’ve had some fun with different generative AI models, writing poems, drafting a resume for my sister and producing various realistic-looking images based on my prompts.
However, the unethical use of such models has left me cold. I couldn’t believe that a German news outlet approved the publication of a completely made-up “exclusive interview” with F1 champion Michael Schumacher, who’s not been seen in public since a near-fatal skiing accident in 2013. Nor the TikTok clips, now removed, showing AI-generated likenesses of missing or murdered children, including Madeleine McCann and James Bulger. Or the deepfake scam video advert, which shows a very convincing synthetic Martin Lewis, apparently endorsing a new project from Elon Musk called ‘Quantum AI’.
Generative AI learns from available data and generates new data from its knowledge – and therein lies the problem for journalists. Alongside the intentional use of AI by unscrupulous people to deliberately mislead, it’s important to know that generative AI frequently makes mistakes, producing inaccurate and often downright fake content.
Therefore, it can’t be used by journalists as a credible source of information.
This specific downside – generating new data from its knowledge – has already led to numerous problems, including the invention of non-existent sources, like those described by the Guardian in April 2023 after it discovered that ChatGPT had invented Guardian articles and presented them as genuine sources. This has been described as the “hallucinatory element” of the language model – it literally makes up information that isn’t there.
In America, a judge fined lawyers $5,000 in an aviation injury claim after they had submitted bogus case law created by ChatGPT. The judge said: “Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”
The same gatekeeping role applies to publishers, particularly in relation to the use of digital third-party content. Can you really believe all you see, hear and read? I believe it’s only a matter of time before there’s a digital hoax that fools even the most careful journalists and has real-life consequences. In May 2023, for example, many people were caught out when a fake image went viral that appeared to show a large explosion near the Pentagon in the USA.
Unfortunately, there’s no completely reliable way of technically detecting a deepfake, although various companies are trying to develop such a tool. Journalists need to employ their verification skills and editorial judgement, and trust their gut instincts – if in doubt, don’t publish.
What seems to be working well in some newsrooms is the use of generative AI for those repetitive and less interesting tasks like writing summaries, producing bullet points and suggesting headlines for a specific story. But beware – even if you use a generative AI language model to work with your own material, it can still invent things that aren’t there or put a quote in a headline that was never said. That’s why the Impress Standards Code says, “Publishers must ensure human editorial oversight and clear labelling of AI-generated content.”
But perhaps the biggest danger for publishers now is the lack of knowledge amongst their news teams about AI. At the very least, alert them to the “hallucinatory element” and the dangers it brings and make sure they are accountable for their reporting, whatever technology they use.
By Andrea Wills
Journalist, Broadcaster and Media Consultant
About Andrea Wills
Andrea Wills has exceptional experience in broadcasting regulation, standard setting, and investigating serious editorial failings in the UK and Australia. She was Independent Editorial Adviser to the BBC Trust and investigated over 60 complaints about BBC content during the decade of the Trust’s existence. She began her career as a journalist and news editor in local radio and moved to television as an executive producer before joining the BBC’s Editorial Policy team as its Chief Adviser. In Australia she worked for the ABC in Sydney, conducting independent reviews of broadcast content, developing editorial and media ethics standards, and training senior journalists.
About Impress
Impress is a champion for news that can be trusted. We are here to make sure news providers can publish with integrity; and the public can engage in an ever-changing media landscape with confidence. We set the highest regulatory standards for news, offer education to help people make informed choices and provide resolution when disputes arise.
Media enquiries
Louie Chandler: louie@impressorg.com / 02033076778