Artificial Intelligence, legal practice, and geostrategic interests

An emanation of science and technology, Artificial Intelligence (AI) is not only fascinating; it offers transformative outcomes that positively and enduringly impact human lives and development. AI’s exponential advancement across virtually every aspect of humanity’s interaction with machines is transforming industry sectors including avionics, communications, counter-terrorism, cybersecurity, defence and healthcare.

Its penetration extends to logistics, national security, operations, programme, project and supply chain management, pedagogics, research and development and, of course, the legal profession. PwC projects that AI will contribute up to $15.7 trillion to the global economy by 2030.

However, AI’s striking appeal scarcely erases the unintended consequences of its universal application, such as massive job losses across industries; nor does it obscure the reality that AI can be, and often is, put to lethal uses, like the deployment of unregulated unmanned aerial vehicles, or “drones”, in illegal wars of aggression by bad actors in the world’s hellish trouble-spots. Yet, whilst the inference in favour of measured AI adoption and deployment is only too stark, that thesis immediately runs into the geopolitical minefield of sovereign autonomy.

Upon that fundamental logic, no country is supinely bound to take orders from another, applying the international law principle of the sovereign equality of states: Article 2(1) of the UN Charter 1945. Nevertheless, the complexities of realpolitik impose a reality check: when a pre-eminent superpower like the United States advocates a particular policy and demands compliance, whether or not it concerns AI, more often than not it will have its way by sheer force of its global economic and military power.

The jury is out on whether such deployment of power enhances or diminishes moral leadership. Or, put differently, does moral leadership count for anything in these heady days of muscularity?

Given that context, and focusing sharply on law practice, evolutionary agentic AI models are configured to act autonomously, make decisions and execute complex functions with minimal human interposition. These AI agents can process vast amounts of data and reason, whilst adapting to real-time changes in their unique environments.

Accordingly, the metamorphosis from orthodox AI to agentic AI signifies a far-reaching progression in process mechanics and algorithmic complexity at scale, encompassing reasoning and experiential learning – much like humans.

Evidential purchase for this proposition is well established, and it is already reshaping the landscape in five important areas: (i) Document Review and Analysis: AI-powered tools can quickly review and analyse large volumes of documents, reducing the time and transaction costs associated with manual review. Agentic AI technology has proven extremely useful in facets of litigation and due diligence; (ii) Predictive Analytics: AI can predict case outcomes, helping lawyers strategise and make informed decisions. Indeed, by analysing past cases and outcomes, AI can identify patterns and trends that may influence the outcome of future cases;

(iii) Research Assistance: AI-powered research tools can assist lawyers in excavating relevant case law, statutes, and regulations. These tools can save time and improve the accuracy of research;

(iv) Contract Management: AI can help with contract review, drafting, and management. AI-powered contract analysis tools can identify potential issues and suggest revisions; and

(v) Client Service: AI-powered chatbots can provide basic legal information and support, enhancing client service, improving response times, streamlining costs, and boosting operational efficiency.

Nonetheless, strong countervailing arguments expose AI’s limitations. First, its transformative impact and potential notwithstanding, AI is unlikely to substitute for originality of thought, delivery and execution in the crossfire of actual, intellectually driven courtroom advocacy. This proposition rests on the foundational logic of human creativity. After all, neither traditional AI nor agentic AI is inherently creative: the “intelligence” of AI is only activated when it is developed and sustained by algorithmic data, which was itself established, ab initio, via human intermediation, not sorcery!

Second, the thrust of AI’s predictive capabilities in litigation is well documented. However, the uniqueness of each case before a judge raises the probability of novel issues, new arguments and case distinctions which disapply the canons of stare decisis and the binding force of appellate decisions on lower courts, and which in turn complicate agentic AI’s predictability.

Third, job displacement and tectonic shifts in traditional employment models. Because AI will continually automate certain tasks in the legal industry, unemployment will be rife. That consequence presents a thorny bifurcation of opportunity and challenge.

The opportunity lies in re-skilling within the technology sector, where demand for roles in AI, cybersecurity, machine learning, programming and robotics, even within the legal sphere, remains high. Conversely, the galloping scale of AI-driven change will prove challenging for those who are unable or unwilling to adapt.

To put this in perspective: with Claimants’ and Defendants’ pleadings, sworn affidavits, written arguments and oral addresses electronically frontloaded, contested multi-jurisdictional litigation is now being conducted entirely by AI-enabled video links, with the presiding judge, counsel, solicitors and witnesses all in different countries – and, crucially, binding decisions at the end!

As to job displacements: there is no verbatim reporter, because the electronic proceedings are automatically recorded subject to the agreement of the judge; there is no secretary; there are no physical bundles of paper, which could range from 50 pages to over 100,000 pages (or more!) depending on the complexity of the case and the supporting documentation; and there is no one photocopying documents, nor any photocopiers. Unsurprisingly, Deloitte projects the automation of 114,000 jobs in the sector by 2036.

Fourth, cultural and ethical concerns. Because humans are imperfect, it inexorably follows that human creations, like AI, must be imperfect. After all, the data underpinning AI’s capabilities is only as good as its quality: if the data input is corrupted, the output must be corrupted, applying the logic of “rubbish in, rubbish out”, or “RIRO”. The inference? Judicial and legal authorities must ensure, and qualitatively validate, the data integrity which underpins AI development.

However, whilst this theory is incontestable, the practical challenge is the speed of AI’s development. Whilst bureaucrats engage in endless discussions on data integrity, convene local and international panels of enquiry, and inevitably delay definitive policy, innovative AI is not static. The best developers and technopreneurs move swiftly to outperform rivals and maximise competitive advantage with new products in established and emerging markets.

Fifth, pervasive challenges around AI, data security and hacking. For example, within the last couple of months, the AI-enabled systems of leading UK retailers like Marks and Spencer, Tesco, and The Co-op have been compromised. On May 19, 2025, the Ministry of Justice (UK) confirmed that a “significant amount of personal data” of people who applied to the Legal Aid Agency since 2010 had been accessed and downloaded in a cyber-attack. The cyber attackers claim to have accessed 2.1 million pieces of data, including applicants’ criminal records.

The U.S. Treasury Department in December 2024 confirmed: “a threat actor had gained access to a key used by the vendor to secure a cloud-based service used to remotely provide technical support for Treasury Departmental Offices end users…with access to the stolen key, the threat actor was able to override the service’s security, remotely access certain Treasury DO user workstations, and access certain unclassified documents…”

Likewise, a cyberattack on Change Healthcare, a subsidiary of UnitedHealth Group, disrupted healthcare services across the U.S. culminating in a $22 million ransom payment.

Paradoxically, these highly sophisticated cyberattacks exploit AI to perpetrate serious crimes including identity theft, fraud, money-laundering and terrorist-financing. Given these serious AI challenges in G7 countries, what hope is there for developing countries?

Ultimately, AI is no substitute for natural creativity, intuition and problem-solving, nor does it supplant emotional and psychological intelligence. It will not replace original thinkers in the legal, literary and scientific spheres, nor will it usurp biologists and top-end programmers.

Equally, it is highly improbable that AI will oust commercial pilots, for the simple reason that sentient beings will typically make informed and emotionally rational decisions on whether to entrust the safety of their lives on, say, commercial trans-Atlantic flights entirely to the agency of AI, rather than to certified, competent, trained and experienced human pilots.

Notwithstanding, adept and forward-thinking lawyers will need re-education, re-orientation and re-skilling in AI, creative problem-solving, programming, cybersecurity, lifelong learning et al, to meet the dynamic demands of clients, labour markets and a technology-powered 21st century. The seminal inferences above are thereby distilled into three recommendations.

One, regulators in the legal industry and lawyers should ensure that AI systems, and the data underpinning them, are fair, transparent and unbiased. Further, AI’s fallibility necessarily invokes questions as to the apportionment of risk and liability for lawyers when, not if, things go wrong. Thus, lawyers (and AI-reliant entities) should take out indemnity insurance.

Two, AI is not the panacea for all of humanity’s problems, nor does it obviate the absolute necessity for strategic thinking. Within the geostrategic context, for example, lies the race to develop ever smarter and more powerful AI systems, given their transformative impact across virtually all industries.

Inescapably, the contending dynamics of economic nationalism, national interests, and stated and unstated technology espionage will collide in the race for first-mover advantage and tech-power. Accordingly, the enlightened strategy is for each sovereign nation to develop its own AI plans to safeguard its own national interests.

Because AI and technological superiority are sources of power and competitive advantage, no country will consciously share them except for a correspondingly beneficial quid pro quo, from a geostrategic standpoint.

Three, the disconnect between dynamic AI development and the relatively languid pace of targeted legislation and policy frameworks, which must balance the necessity for innovation with ethical safeguards, demands effective coordination and collaboration between governments, lawyers, technopreneurs and proven thought leaders.

The outcome of such cross-party collaboration should be well-crafted legislation and policies which fairly meet the needs and interests of the relevant stakeholders (which will not always be aligned!) by embedding robust data security; identity verification (and periodic re-verification) to enhance security; safeguards for confidentiality; crime detection; incentives for innovation and responsible risk-taking; and, crucially, the overriding philosophical ambitions of human development and societal order.

Ojumu is the Principal Partner at Balliol Myers LP, a firm of legal practitioners and strategy consultants in Lagos, Nigeria, and author of The Dynamic Intersections of Economics, Foreign Relations, Jurisprudence and National Development (2023).