Google says hackers used AI to uncover major software flaw
The Chairman, Senate Committee on ICT and Cyber Security, Afolabi Salisu (APC, Ogun), has said efforts were ongoing to review the National Data Protection Act (2023) to meet emerging threats associated with technological advancement.
Meanwhile, a criminal hacking group recently attempted to launch a widespread cyber-attack that appeared to rely on Artificial Intelligence (AI) to detect a previously unknown bug, Google said in research published on May 12, 2026, highlighting the potential threat AI poses to digital security.
At the opening of a three-day workshop on Data Protection Awareness Promotion, organised yesterday for the Joint National Assembly Committee on ICT by the Nigeria Data Protection Commission (NDPC) and Ampersand Development Partners, Salisu disclosed that since the enactment of the Act in 2023, there had been new developments such as AI and the United Nations Convention on Cyber Crimes.
The lawmaker said there is a nexus between data governance and cybercrime; hence the need to review the Act and strengthen those linkages where necessary.
According to him, the country must secure its cyberspace and strengthen its data governance in step with technological advances such as AI.
“As legislators, we need to know about data privacy and protection for us to be able to effectively legislate in that area. You cannot legislate in an area you are not sufficiently knowledgeable in; this workshop affords us the opportunity to build our capacity to understand modern principles of data protection and to be in a position to review the National Data Protection Act.”
The Chairman, House Committee on ICT and Cyber Security, Stanley Olajide (APC, Oyo), likened data to gold, noting that Nigeria’s next prosperity would come not from oil but from data.
Security experts have feared for years that malicious hackers could eventually rely on AI models to identify undisclosed flaws in computer code and launch crippling attacks that are difficult to guard against. Until now, that fear was largely theoretical.
“We have high confidence that the actor likely leveraged an AI model to support the discovery and weaponisation of this vulnerability,” the report said.
The tech giant did not say precisely when the thwarted attack happened, whom it was targeting or which AI platform the hackers used, but the company added that it did not believe it was its own Gemini chatbot.
Google’s research arrives as the technology industry and governments, including the Donald Trump administration, re-evaluate how, and whether, to police advanced versions of AI, in large part because of growing concerns over what they mean for cybersecurity.
Flaws like the one identified by Google and the hacking group are known as “zero-day vulnerabilities” — security holes that are unknown to the software makers. They were once considered so rare and powerful that they could fetch millions of dollars on black markets used to sell hacking tools.
But new AI models like Anthropic’s Mythos, which was announced last month, appear to be so good at finding such holes that Anthropic shared it only with a limited number of firms and government agencies in the United States and Britain.
When Mythos was announced, Anthropic said it had identified thousands of zero-day vulnerabilities “in every major operating system and every major web browser,” including many that were decades old.
The zero-day flaw was detected by the Google Threat Intelligence Group within the past few months and was exploited by “prominent cybercrime threat actors” in a script written in the Python programming language. It would have allowed the hackers to bypass two-factor authentication on “a popular open-source, web-based system administration tool,” though to succeed the hackers would also have needed valid credentials such as usernames and passwords, the company said.