As the great and good of Silicon Valley descended on Bletchley Park for the UK’s inaugural AI summit last week, news publishers were wary.
While Elon Musk was breezily predicting a world without work, journalists worried this vision of the future would be built on the backs of their hard graft – and with no credit.
“Our very presence [online] is giving credibility to these platforms that otherwise would be filled with clickbait nonsense and unregulated information,” said Katie French, regional group editor at Newsquest, which owns titles including the Brighton Argus and the Oxford Mail.
For more than two decades, news outlets have been grappling with how to adapt to the digital world. Now, though, they face an existential threat.
The rapid rise of artificial intelligence (AI) has caught nearly everyone by surprise and sparked panic across the creative industries. Professionals ranging from scriptwriters and musicians to journalists and authors worry that AI will ingest their work and regurgitate it, giving people a way to skirt paywalls, royalty payments and other fair rewards for their hard work.
For news outlets, the emergence of the new technology brings back uncomfortable memories of the growth of tech giants like Facebook and Google, which upended the way people consume news and cannibalised revenues in the process.
Publishers are keen not to go down that path again and are taking a stand.
“As with past ‘disruptive’ Silicon Valley models, generative AI investors are banking on forgiveness instead of asking permission,” warns US lobby group the News Media Alliance.
The news industry is not alone in its struggle against AI, yet the issue is particularly acute for publishers. In July, the Telegraph revealed allegations that Google harvested around 1m online news articles from the Daily Mail and CNN to help train its chatbot, Bard.
New research by the News Media Alliance, which represents more than 2,000 publishers, found that tech companies “disproportionately” use online news, magazine and digital media content to train their AI software.
News and digital media ranks third among all categories of sources in Google’s C4 training dataset, while half of the top 10 sites in the set are news outlets.
For publishers, untrammelled use of the technology also represents a significant reputational risk.
The Guardian last week discovered that Microsoft had inserted a “crass” AI-generated poll into one of its stories that asked readers to speculate on the cause of someone’s death.
Anna Bateson, Guardian chief executive, branded the move “deeply damaging” and demanded a meeting with a senior Microsoft executive.
More fundamentally, though, AI poses a serious threat to the business model of journalism.
Publishers argue that the use of articles without permission represents a major breach of copyright, both in the training and output of AI software.
Tech giants such as Google are hoping to use AI to summarise news articles – a move that would reduce the need for readers to click through to publishers’ websites, further depriving them of advertising revenue.
In a white paper published last week, the News Media Alliance wrote: “This irreparably damages publishers’ businesses, which depend on relationships with their readers, web traffic, and the trustworthiness of brands built over decades.”
Industry bosses have warned that any threat to journalism would in turn damage democracy. The News Media Association (NMA), which represents publishers in the UK, has warned that AI risks creating a flood of fake news that will “pollute human knowledge” and leave society “swamped with mis- and disinformation”.
Many outlets have begun to take rearguard action. Organisations including the BBC, Guardian, New York Times and CNN have all blocked chatbots such as ChatGPT from trawling their websites. Yet there are fears that these measures will do little to prevent a tide of copyright infringement.
Tech giants have defended their actions, largely relying on the “fair use” exemption – known in the UK as “fair dealing” – to justify their use of publishers’ content.
Microsoft has argued that strict copyright laws would doom the development of AI in Britain.
In evidence to the House of Lords Communications and Digital Committee, the US tech giant wrote: “If licensing was required to train AI models on legally accessed data, this would be prohibitive and could shut down development of large scale AI models in the UK.”
However, the “fair dealing” defence has failed to pass muster with news outlets, which argue that their articles are being used for commercial purposes, meaning tech firms are acting unlawfully.
Iona Silverman, partner at law firm Freeths, says: “As soon as something’s done for a commercial purpose, it’s unlikely to be fair dealing, so I think you’d struggle to run that argument when most of the tech giants are doing this massively for commercial purposes and can afford to pay for the content.”
Publishers including the Daily Mail are considering taking legal action as they seek compensation for the use of their work. Others argue that the responsibility should lie with governments and regulators.
Dr Moiya McTier, spokesperson for the Human Artistry Campaign, says: “I feel this needs to be legislated more than litigated.”
The Government has backtracked on controversial plans to relax copyright laws so that AI companies could “mine” text and data.
But unlike others, including the EU and Japan, it has not laid out laws to clarify how AI companies can train their systems. Instead, ministers have encouraged the creative and tech industries to agree on a voluntary code of conduct, brokered by the Intellectual Property Office (IPO).
A code had been expected by this summer but talks are believed to have stalled amid bad-tempered discussions between the companies. The IPO said it now hopes to reach an agreement by the end of the year.
In the meantime, publishers are also exploring possible licensing deals that would see tech companies pay for the use of news articles. This would echo the content licensing deals struck with Google and Facebook in Australia, and similar arrangements under consideration in the UK through the new Digital Markets Unit.
Ms Silverman describes licensing as the “realistic commercial solution” to the impasse.
“That way, the publishers would benefit because they would get royalties and the people making AI would be able to make content knowing that they’re allowed to do it,” she says. “They’re not going to like paying for it, but at least that way they know what they’re allowed to do.”
Even if tech firms can be convinced to pay for news content, publishers may not be satisfied.
The News Media Alliance warns that copyright laws alone will not be enough if tech giants can use their market power to extract “exploitative and anti-competitive terms”, adding that a rebalancing of market power through methods such as collective bargaining is necessary.
In the UK, too, these options are being explored. Owen Meredith, chief executive of the NMA, says news outlets are examining such proposals to see whether they would translate to Britain.
As the battle lines are drawn for the new era of the internet, publishers are determined not to repeat the mistakes of the past. This time, they are ready for the fight.