The BBC has escalated the global battle over AI’s use of copyrighted material, threatening legal action against the US-based artificial intelligence firm Perplexity AI. The British broadcaster alleges that Perplexity’s AI service is reproducing BBC content, sometimes verbatim, without authorization.
It marks the first time the BBC, one of the world’s largest news organizations, has moved toward formal legal action against an AI company over the use of its content.
In a stern letter sent to Perplexity CEO Aravind Srinivas, the BBC demanded that the company immediately cease using BBC content, delete any material it holds, and propose financial compensation for the content already utilized. The corporation asserts that this unauthorized use constitutes copyright infringement under UK law and breaches the BBC’s terms of use.
Why the BBC is Taking Action
Beyond the fundamental issue of unauthorized use, the BBC cited its own research, published earlier this year, which found that several popular AI chatbots, including Perplexity AI, were inaccurately summarizing news stories, some of them from the BBC.
The broadcaster found significant issues with how its content was represented in some Perplexity responses, stating that such output failed to meet its rigorous Editorial Guidelines for impartial and accurate news. The BBC argues that this inaccurate output is highly damaging to its reputation and undermines trust among audiences, including the UK licence fee payers who fund the organization.
Furthermore, the BBC’s letter stated that it had specifically instructed two of Perplexity’s web crawlers not to access certain content via its “robots.txt” file, a standard file in which a website tells automated crawlers which parts of the site they may visit, yet Perplexity “is clearly not respecting robots.txt.” Compliance with the file is voluntary, but the BBC maintains its instructions were ignored. Perplexity CEO Aravind Srinivas has previously denied accusations of ignoring robots.txt instructions.
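To illustrate how the protocol works in practice, the minimal Python sketch below checks a site’s robots.txt directives before fetching a page, using only the standard library. The user agent name “ExampleBot” is a hypothetical placeholder, not Perplexity’s or any real crawler’s identifier, and the sketch is not a description of how Perplexity’s systems actually behave.

```python
# Minimal sketch of voluntary robots.txt compliance using Python's standard library.
# "ExampleBot" and the article URL are illustrative placeholders only.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://www.bbc.co.uk/robots.txt")
parser.read()  # download and parse the site's crawler directives

article_url = "https://www.bbc.co.uk/news/some-article"
if parser.can_fetch("ExampleBot", article_url):
    print("robots.txt permits ExampleBot to fetch this page")
else:
    print("robots.txt disallows ExampleBot; a compliant crawler would skip it")
```

Nothing in the protocol enforces the answer: whether a crawler honours the file is entirely up to its operator, which is why the BBC’s complaint turns on Perplexity’s conduct rather than on any technical safeguard.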
The BBC also claims that, by reproducing its content directly, Perplexity’s service competes with the BBC’s own platforms and removes the need for users to visit the original source. As a proactive measure to protect its intellectual property, the BBC began registering copyright for its news website in the US in October.
Perplexity’s Counter-Arguments
Perplexity has responded sharply to the BBC’s claims. In a statement, the company dismissed the allegations as “manipulative and opportunistic,” suggesting they are part of a broader effort by the BBC to “preserve Google’s illegal monopoly.” Perplexity did not elaborate on the relevance of Google to the BBC’s copyright concerns.
The AI firm also stated that the BBC holds “a fundamental misunderstanding of technology, the internet and intellectual property law.” Perplexity argues that it operates differently from companies like OpenAI or Google, claiming it does not build or train its own foundational AI models. Instead, it describes itself as an “answer engine” that acts as an interface, searching the web, identifying sources, and synthesizing information into responses for users.
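As a rough illustration of that claimed “answer engine” pattern, retrieving sources at query time and synthesizing a response from them rather than training a model on the content, the sketch below uses hypothetical placeholder functions; it is an assumption about the general approach, not Perplexity’s actual architecture or code.

```python
# Illustrative sketch of a retrieve-then-synthesize "answer engine" pipeline.
# search_web() and summarize() are hypothetical placeholders, not real APIs.
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    snippet: str

def search_web(query: str) -> list[Source]:
    # A real system would query a live search index here.
    return [Source("https://example.org/article", "Relevant excerpt...")]

def summarize(query: str, sources: list[Source]) -> str:
    # A real system would prompt a language model with the retrieved text here.
    cited = "; ".join(s.url for s in sources)
    return f"Answer to '{query}', synthesized from: {cited}"

def answer(query: str) -> str:
    sources = search_web(query)       # 1. identify sources at query time
    return summarize(query, sources)  # 2. synthesize a cited response

print(answer("example question"))
```

The BBC’s complaint, as set out above, is that whatever the mechanism, the resulting output can reproduce its journalism closely enough to substitute for visiting the original source.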
Perplexity advises users to double-check the accuracy of its output, a common disclaimer for AI chatbots, which are known to occasionally generate false information convincingly.
Wider Industry Concerns and Policy Debate
The BBC’s legal threat is not an isolated incident but reflects escalating tensions between content creators and AI companies globally. The rapid growth of generative AI models, largely trained on vast amounts of data scraped from the internet without explicit permission or licensing, has sparked widespread concern over intellectual property rights and compensation.
The Professional Publishers Association (PPA), representing over 300 UK media brands, has voiced deep concern that AI platforms are failing to uphold UK copyright law. The PPA echoed the BBC’s sentiment, stating that bots are used to “illegally scrape publishers’ content to train their models without permission or payment,” a practice threatening the UK’s £4.4 billion publishing industry and the 55,000 people it employs.
Other publishers have also taken action. Dow Jones, owner of The Wall Street Journal, sued Perplexity in October for alleged “illegal copying.” The New York Times also sent Perplexity a “cease and desist” notice. Meanwhile, several other major publishers, including the Financial Times, Axel Springer, and News Corporation, have pursued a different strategy by entering into content licensing agreements with AI firms like OpenAI. Reuters has a deal with Meta, and the Daily Mail Group has partnered with ProRata.ai.
The dispute has also influenced the policy debate in the UK. Initial government proposals suggesting an “opt-out” system for AI content usage were met with strong opposition from creative industries. Media leaders like BBC Director General Tim Davie have advocated for an “opt-in” framework, emphasizing the critical need for robust IP protection, stating that “The value is in the IP.” Culture Secretary Lisa Nandy has since offered reassurances, indicating that new rules will ensure creators are paid for their work and legislation will not harm the creative sector.
The BBC’s action against Perplexity underscores the urgent need for a clear framework governing how AI technologies interact with and utilize copyrighted journalistic content in the digital age. This conflict highlights the challenge of redefining content rights and value in the era of artificial intelligence.