Why AI must pay for the news it devours

The collision between the newspaper industry and Artificial Intelligence companies is no longer news; it is a global legal and economic reckoning.

Legal cases, which primarily focus on copyright infringement and fair use, are currently ongoing in many countries and are expected to shape the future of both industries.

Just recently, a Spanish court ordered Meta, the Facebook owner, to pay 479 million euros ($552 million) to Spanish digital media outlets for unfair competition practices and infringing European Union data protection regulations. It was a profoundly cheering moment for the print media.

But it was not the first of its kind, and it will not be the last. The ruling was the latest in a series of fines Meta has faced in Europe. Meta also reached an out-of-court settlement with the Nigerian Data Protection Commission (NDPC) in October 2025, agreeing to pay a $32.8 million fine over data privacy violations.

The Munich court’s recent ruling against OpenAI is another significant step in the right direction. It recognises that AI developers have a responsibility to respect intellectual property rights and to compensate creators for their work. The decision should serve as a wake-up call for AI developers and regulators worldwide.

We all must be worried. The rise of AI is accelerating the decline of trusted newspapers, with many news outlets struggling to adapt to the changing media landscape. In Nigeria, where the economy is already on its knees, the disruption that AI firms and global tech giants have inflicted on the media business is unimaginable.

The decline of the mainstream press means democratic backsliding. As the traditional media’s business model fails and trust erodes, its ability to perform its crucial role as the “fourth pillar” of democracy—holding power to account and informing the public—is severely compromised. And society loses.

Journalism history is the story of man’s long struggle to communicate freely with his fellow men, to dig out and interpret news, and to offer intelligent opinions in the marketplace of ideas. Big tech firms and AI threaten all that.

Those in the newspaper business and other content creators are correct: this moment presents a unique opportunity for AI developers and regulators to establish an ethical and equitable framework for the AI business. Let’s be clear—this framework must be built on the unshakeable foundation of intellectual property rights and fair compensation.

It’s not a fair deal to rob Peter to pay Paul. While AI is projected to contribute up to $15.7 trillion to the global economy by 2030, global newspaper market revenue, currently about $80.5 billion, has been declining at a compound annual growth rate (CAGR) of 3.1 per cent in recent years. The argument is simple and devastatingly urgent: AI cannot be allowed to feast indiscriminately on the world’s creative output without consequence or consent.

For decades, the creative industries—journalism, book publishing, academic research—have invested enormous time, resources, and human expertise to produce high-quality, trustworthy content that has, ironically, become the essential fuel for these massive AI models.

Let us call it what it is: a subsidised race to market, where the subsidy is provided, involuntarily, by the very creators whose livelihoods are now being mindlessly undermined. This indiscriminate seizure of protected content is simply an infringement of established rights.

The bedrock of any thriving creative economy is the assurance that the creator—the journalist, the editor, the author, the researcher—will be adequately remunerated for their work. When AI developers treat copyright-protected material as a free, limitless resource, they erode the economic incentive that drives the creation of high-quality content.

The publishers are absolutely right, therefore, to demand that developers, operators, and deployers of AI systems respect intellectual property rights and seek express authorisation for use. The fact that frameworks already exist for content licensing—which have long facilitated media indexing, search engine operations, and content syndication—makes the current unlicensed activity all the more inexcusable.

To argue that licensing suddenly “impedes innovation” when applied to training AI models suggests a desire not for innovation, but for exploitation. We must recognise and enforce existing rules for content licensing to ensure that value flows back to the originators.

Where do we go from here? The answer is a robust system of accountability. Legally, the abuse of content should be severely penalised, as the Spanish court has done. Furthermore, AI deployers providing informational content cannot be shielded from liability for their outputs. The EU has enacted comprehensive, binding legislation in the Digital Services Act (DSA), which imposes significant obligations and liability on platforms. Australia’s recent law banning children under 16 from major social media platforms, backed by heavy fines, offers Nigerian regulators a valuable, if distinct, example of a firm stance against Big Tech impunity.

The idea that a system can misattribute sources, misrepresent facts, or launder misinformation behind a veil of limited liability or safe harbours is a direct threat to the health of our democracies and to the functioning of scientific discourse. AI systems must be designed to promote trusted and reliable sources, and their developers must use best efforts to ensure that AI-generated content is accurate and complete. They must pay for their inputs.

The future of journalism depends on finding a balance between embracing AI and protecting intellectual property rights. Essentially, AI developers, regulators, and policymakers should promote responsible AI development and ensure that intellectual property rights are respected.

The transformation of the press in the age of AI is a complex issue, requiring careful consideration and cooperation. By prioritising intellectual property rights and fair compensation, we can ensure that AI development benefits creators and promotes a vibrant media industry. It then becomes a win-win situation.

The stakes are high, and the future of journalism hangs in the balance. Relevant government agencies should act fast and responsibly, too. A flourishing, free and independent press must be preserved. By working together, we can create an AI future that is fair, transparent, and beneficial to all. Nigeria’s regulators should make AI companies adopt a licensed content model and establish new precedents for copyright law. It’s happening in Europe already. The future of trusted news, and by extension, our democracy, depends on it.

Adediran is the GM/CEO of the Newspaper Proprietors’ Association of Nigeria (NPAN).
