As the world races to embrace artificial intelligence in almost every sphere of life, from transportation and healthcare to commerce and logistics, a Nigerian researcher is emerging as one of the most important voices guiding this transition with wisdom, caution, and vision. Angela Omozele Abhulimen, a technology researcher based in the United Kingdom, is bringing international attention to the need for ethical frameworks that protect human dignity and promote inclusive innovation in the age of intelligent automation.
Her latest work, published in the International Journal of Applied Research in Social Sciences, is titled “Ethical Considerations in AI Use for SMEs and Supply Chains: Current Challenges and Future Directions.” The paper is a compelling, research-based examination of the ethical dilemmas arising from the rapid adoption of AI technologies by small and medium-sized enterprises (SMEs) and within intricate, multinational supply chains. But more than a critique, the study offers direction—laying out practical strategies, governance insights, and context-aware recommendations for businesses, governments, and innovators seeking to deploy AI responsibly.
Co-authored with Onyinye Gift Ejike of Lagos, the paper investigates what happens when AI systems are adopted in environments without proper ethical safeguards. It questions who benefits and who is harmed when automated decisions are made by algorithms that are opaque, unexplainable, or biased. It challenges decision-makers to look beyond profit margins and efficiency metrics and consider the broader implications of AI—on human jobs, on privacy, on justice, and on societal cohesion.
Angela Abhulimen’s scholarship is grounded in urgent reality. In recent years, SMEs across Africa and around the world have begun to rely on AI-based tools for everything from sales forecasting and customer engagement to procurement and logistics. Yet many of these enterprises lack the in-house expertise to evaluate how these tools work or whether they are safe, fair, or lawful. As a result, they often become dependent on third-party solutions—many of which operate as “black boxes,” offering no transparency into how decisions are reached or how data is processed.
This dynamic, the paper warns, is fertile ground for unintended harm. A hiring algorithm could unintentionally exclude women or minority candidates. An inventory system might deprioritise certain rural vendors. A chatbot trained on biased datasets may fail to recognise the diverse cultural expressions of a global customer base. Without ethical design, even well-meaning AI systems can reinforce structural inequalities or marginalise vulnerable groups.
In supply chains, the stakes are even more complex. From factory floors to last-mile delivery, AI is automating tasks, rating suppliers, detecting fraud, and forecasting demand. These functions, while critical for operational success, are often deployed with little understanding of their ethical implications. Surveillance technologies can intrude on workers’ privacy. Risk models may penalise small suppliers who lack digital footprints. Procurement automation can unintentionally eliminate diversity in vendor selection. Angela’s work calls attention to these cascading effects and insists on building systems that are not just smart, but also just.
The paper presents a strong argument that ethical considerations must be built into AI systems from the very beginning, not bolted on after harm has occurred. To achieve this, Angela Abhulimen advocates for deliberate integration of fairness, accountability, transparency, and privacy in the design, implementation, and oversight of AI. She emphasises that these principles are not abstract ideals, but concrete pillars that determine whether AI strengthens or undermines human progress.
She proposes that businesses should adopt systems that are explainable—that is, systems where decisions can be understood by both technical and non-technical stakeholders. Transparency, in her analysis, builds trust and allows for correction when things go wrong. She also points out the necessity of accountability structures, where organisations can be held responsible for the consequences of their AI-powered decisions. Privacy, a recurring theme in her work, is presented not just as a regulatory obligation but as a fundamental human right that should be respected regardless of economic convenience.
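To make the idea of explainability concrete, here is a minimal sketch of a decision system whose reasoning can be shown to any stakeholder: a linear supplier-scoring model that reports each feature's contribution to the final decision. The feature names, weights, and threshold are all hypothetical illustrations, not drawn from the paper.

```python
# Minimal sketch of an "explainable" automated decision: a linear scoring
# model whose per-feature contributions can be inspected by both technical
# and non-technical stakeholders. All names, weights, and the threshold
# below are hypothetical.

WEIGHTS = {"years_trading": 0.4, "on_time_deliveries": 0.5, "order_volume": 0.1}
THRESHOLD = 2.5


def score_supplier(features: dict) -> tuple:
    """Return (approved, per-feature contributions) for one supplier."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    approved = sum(contributions.values()) >= THRESHOLD
    return approved, contributions


approved, why = score_supplier(
    {"years_trading": 3, "on_time_deliveries": 2, "order_volume": 5}
)
print("approved:", approved)
# Show the largest contributions first, so the decision can be challenged.
for feature, contribution in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: {contribution:+.2f}")
```

A "black box" system would return only the approval flag; the point of the sketch is that the `why` dictionary travels with every decision, so a rejected vendor (or an auditor) can see exactly which factors drove the outcome.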
Beyond technical design, the paper calls for institutional changes. Angela suggests that companies begin forming AI ethics review boards or committees that include not only data scientists but also ethicists, legal experts, human rights advocates, and customer representatives. She urges businesses to conduct regular impact assessments of their AI systems and to create feedback channels that empower employees and clients to flag ethical concerns.
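In the spirit of the recurring impact assessments the paper recommends, the following sketch shows how such a review might be tracked programmatically. The checklist items and field names are illustrative inventions loosely organised around the fairness, accountability, transparency, and privacy themes above; they are not an official framework from the paper.

```python
# Illustrative sketch of a recurring AI ethics impact assessment.
# Checklist items are hypothetical, not an official or published framework.
from dataclasses import dataclass, field


@dataclass
class ImpactAssessment:
    """One recurring ethics review of a deployed AI system."""
    system_name: str
    findings: dict = field(default_factory=dict)

    # Class-level checklist shared by every review (illustrative).
    CHECKLIST = (
        "decisions_explainable_to_non_experts",
        "bias_tested_on_representative_data",
        "named_owner_accountable_for_outcomes",
        "personal_data_minimised_and_consented",
        "feedback_channel_open_to_staff_and_clients",
    )

    def record(self, item: str, passed: bool) -> None:
        """Record the outcome of reviewing one checklist item."""
        if item not in self.CHECKLIST:
            raise ValueError(f"unknown checklist item: {item}")
        self.findings[item] = passed

    def outstanding(self) -> list:
        """Items not yet reviewed, or that failed review."""
        return [i for i in self.CHECKLIST if not self.findings.get(i, False)]


review = ImpactAssessment("supplier-rating-model")
review.record("decisions_explainable_to_non_experts", True)
review.record("bias_tested_on_representative_data", False)
print("outstanding:", review.outstanding())
```

The design choice worth noting is that `outstanding()` treats "not yet reviewed" and "failed" identically: a system is never presumed compliant by default, which mirrors the paper's insistence that ethics be built in from the start rather than assumed.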
One of the standout strengths of Angela’s work is her insistence on local relevance. Much of the global literature on AI ethics originates from North America and Europe, yet Angela highlights the importance of tailoring ethical frameworks to the social, cultural, and economic realities of African countries. She explains that ethics is not a one-size-fits-all concept; it must respond to regional values, legal traditions, and market conditions. In Nigeria, where digital infrastructure is still developing and trust in institutions is often fragile, Angela argues that proactive ethical commitments by businesses could be a powerful tool to build confidence and establish legitimacy.
To strengthen her position, Angela and her co-author analyse global case studies where AI ethics have been either neglected or well-managed. They highlight how companies like Google and Amazon have faced reputational damage and legal scrutiny due to algorithmic bias and opaque systems. They contrast this with newer companies adopting principles of responsible innovation from the ground up—openly disclosing model limitations, allowing users to challenge automated decisions, and ensuring that algorithmic systems undergo regular audits.
What distinguishes Angela Abhulimen’s approach is her multidimensional grasp of the issue. Her research spans technology, policy, economics, and social justice. She connects the dots between AI and job displacement, noting that automation can rapidly make certain roles obsolete in warehousing, customer service, and procurement. Yet, she is not alarmist. Rather, she sees this moment as one of redefinition, where industries must complement automation with workforce reskilling and inclusive innovation. She calls on governments and the private sector alike to invest in training programs that prepare workers for the AI age, especially those most likely to be displaced.
Angela also emphasises the role of civil society and academia in monitoring the ethical dimensions of AI deployments. She sees universities as incubators of critical thinking, where future innovators can be taught not just to build powerful systems, but to build them responsibly. She argues for stronger collaboration between academia, industry, and regulatory bodies in co-creating ethical standards that are dynamic and enforceable.
Across all her arguments, one message is clear: ethics is not a brake on innovation; it is its compass. Businesses that prioritise ethical AI are not limiting themselves—they are positioning themselves for sustainable success in a future that will reward transparency, trustworthiness, and fairness.
Angela Abhulimen’s growing body of work now spans themes as diverse as AI ethics, supply chain modernisation, digital transformation, and social impact technology. Her earlier papers on AI integration in dealership management systems and operational optimisation through big data analytics have already been cited in policy briefs and enterprise white papers. This latest publication only solidifies her place as a thought leader whose insights are not only academic but also actionable.
As Nigeria deepens its investment in smart infrastructure, digital entrepreneurship, and national AI strategies, thought leaders like Angela Abhulimen are essential. Her voice brings balance, ensuring that the country’s technological advancement does not outpace its commitment to justice, equity, and human dignity.
Her work is a timely reminder that the true measure of innovation is not in how quickly it changes the world, but in how responsibly it does so—and in whether it leaves the world fairer, safer, and more inclusive than it found it.