Google has called for collaboration in the fight against online threats, revealing that it removed 5.1 billion harmful ads and restricted another 9.1 billion globally in 2024.
In its 2024 Ads Safety Report, released yesterday, Google said it leveraged advanced artificial intelligence (AI) in the fight against bad ads, scams, and misinformation online.
The American firm also suspended over 39 million advertiser accounts, many before a single ad was shown.
In the report, Google emphasised that AI is becoming a critical line of defence against increasingly sophisticated scams, fake business identities, and coordinated misinformation campaigns online.
General Manager for Ads Safety at Google, Alex Rodriguez, said: “These improvements helped us move faster, identify threats earlier, and take action before bad actors could reach users.
“That’s the real power of AI—making the internet safer not just reactively, but proactively.”
According to Google, across Africa and beyond, users are navigating a rapidly evolving digital environment where trust, safety, and transparency matter more than ever.
Google specifically cited Nigeria as one of the countries where fake ads were blocked before they were posted.
“In Nigeria, public figure impersonation scams and misleading election ads have become familiar threats,” it stated.
According to the company, this was why, in 2024, it updated its Misrepresentation policy, assembled a global team of over 100 experts, and took down over 700,000 scam-related advertiser accounts, contributing to a 90 per cent drop in reported impersonation scams.
With nearly half the world’s population heading to the polls in 2024, Google also expanded election ad transparency, requiring all political advertisers to verify their identities and disclose who’s paying for the message.
More than 10 million election-related ads were removed globally for failing to meet these standards.
Though the figures are global, the effects are tangible for users and businesses across Africa.
Safe online advertising supports economic inclusion—protecting small businesses, digital creators, and publishers who rely on platforms like Google to reach audiences and generate income.
For Nigeria’s growing digital economy, which is heavily reliant on trust in online transactions, Google’s enforcement efforts offer a critical layer of protection.
From preventing payment fraud to curbing the spread of AI-generated misinformation, robust ad safety measures are becoming essential infrastructure.
“We launched over 50 enhancements to our AI models in 2024. This allowed our teams to focus more on complex, high-impact investigations, while automation handled scale,” Rodriguez added.
Google emphasised that safeguarding the online ecosystem goes beyond technology. The company said it would continue to work with regulators, consumer protection agencies, and industry peers, such as through the Global Anti-Scam Alliance, to anticipate and address emerging threats.
Ultimately, the 2024 Ads Safety Report reveals more than just numbers; it highlights a growing shift in how online trust is maintained.
Meanwhile, Google has announced the rollout of Veo 2, its advanced AI-powered video generation model, for users subscribed to Gemini Advanced, the tech giant’s premium artificial intelligence plan.
The move signals Google's push into the increasingly competitive generative video AI space, positioning Veo 2 as a direct challenger to OpenAI's Sora.
The launch comes just weeks after leading AI media firm Runway unveiled the fourth generation of its video model, while also securing over $300 million in new funding, underscoring the rapidly growing investor appetite for synthetic media.