Nigeria’s AI Future Needs Ethics, Transparency – Oladipupo

Nigeria is steadily adopting artificial intelligence across sectors like healthcare, finance, and governance. Samuel Oladipupo, a pioneer in AI and intelligent automation, speaks about the importance of transparency, intellectual integrity, and community engagement in developing AI solutions for the country.

As someone pioneering AI and intelligent automation in healthcare, how do you ensure that the systems you develop uphold intellectual integrity, particularly when dealing with sensitive data in regulatory-heavy environments like 340B compliance?

When deploying AI in healthcare, whether it’s the intelligent automation work I’m leading at Craneware for 340B compliance or the exam proctoring system I built at Olabisi Onabanjo University for 18,000 candidates, I’ve learned one key truth: trust isn’t built in the algorithm; it’s built in transparency.

During the rollout of the proctoring system in 2020, students and faculty were understandably worried. We used facial recognition to identify candidates, detect movements, and flag anomalies. It was powerful for its time, but it also raised valid concerns. To address this, we invited everyone into the process: we explained what data was collected and what wasn't, and ensured every flag could be audited and reviewed by human proctors.
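That human-in-the-loop principle can be made concrete in code. The sketch below is a hypothetical illustration, not the OOU system's actual implementation: no automated flag takes effect until a named human proctor has ruled on it, and every decision records who made it.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Flag:
    """One anomaly raised by an automated proctoring model."""
    candidate_id: str
    reason: str                       # e.g. "face not detected for 30 seconds"
    evidence_ref: str                 # pointer to the recorded video segment
    reviewer: Optional[str] = None    # filled in only by a human proctor
    upheld: Optional[bool] = None

def review(flag: Flag, reviewer: str, upheld: bool) -> Flag:
    # Record which human made the call, so every decision is auditable.
    flag.reviewer = reviewer
    flag.upheld = upheld
    return flag

def actionable(flags: List[Flag]) -> List[Flag]:
    # Only flags a human proctor has upheld may affect a candidate's result.
    return [f for f in flags if f.upheld is True]
```

The design choice is that the model can only propose; a human disposes, and the audit trail (reason, evidence reference, reviewer) survives for later challenge.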

We apply the same principles today at Craneware. In healthcare, we’re not just dealing with data but with people’s access to medicine and livelihoods. Every AI decision must therefore be explainable, auditable, and reversible. If you can’t explain how your AI made a decision, you shouldn’t use it in critical systems.

In your experience deploying AI-driven systems in high-stakes domains like healthcare and finance, how do you build trust not just in the technology but also in the teams and institutions adopting it?

Trust must be built from the start; it cannot be added later. When we built the PAPSS currency exchange platform, which processed over $50 million across African markets, the greatest challenge was not the technology; it was trust.

We achieved that through radical transparency. Every transaction on the blockchain could be independently verified. There were no black boxes, no “trust us” scenarios; the system itself was the evidence. But I’ve also learned that technology alone doesn’t create trust; people do. We invested heavily in education, bringing financial institutions into the process, listening to their feedback, and adapting. When users feel ownership, trust follows.
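The “system itself is the evidence” property comes from hash-linked records. This toy sketch is an assumption-laden illustration of the general technique, not the PAPSS implementation: each entry embeds the hash of its predecessor, so anyone holding a copy of the ledger can detect tampering without trusting the operator.

```python
import hashlib
import json

GENESIS = "0" * 64  # hash placeholder before the first record

def chain(transactions):
    """Build a ledger where each record is hash-linked to the previous one."""
    prev, ledger = GENESIS, []
    for tx in transactions:
        record = {"tx": tx, "prev": prev}
        prev = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        ledger.append({**record, "hash": prev})
    return ledger

def verify(ledger):
    """Independently re-derive every hash; any edit breaks the chain."""
    prev = GENESIS
    for entry in ledger:
        if entry["prev"] != prev:   # link must point at the prior record
            return False
        digest = hashlib.sha256(
            json.dumps({"tx": entry["tx"], "prev": entry["prev"]},
                       sort_keys=True).encode()).hexdigest()
        if entry["hash"] != digest:  # contents must match the stored hash
            return False
        prev = digest
    return True
```

Altering any transaction, even deep in the history, changes its hash and invalidates every later link, which is why verification needs no “trust us.”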

Given your cross-border work with blockchain and CBDCs, what lessons have you learned about transparency and trust that can strengthen Nigeria’s AI ecosystem?

The most important lesson from blockchain is to build systems that are transparent by design, not just when problems arise. On the PAPSS platform, users didn’t have to rely on trust; they could verify every transaction.

Nigeria’s AI ecosystem needs the same foundation. People are eager yet sceptical because they have seen too many overpromises. The solution is openness: make AI systems auditable, publish methodologies, and involve independent review boards. Civil society, academia, and communities should all have a voice before deployment.

Another key point is to start with visible impact. Blockchain adoption grew when people saw immediate benefits like lower fees and faster transfers. AI in Nigeria should do the same: show measurable outcomes such as reduced hospital wait times or improved farm yields. And above all, engage communities early. During my time at the Bantu Blockchain Foundation, we held workshops across the continent. Those engagements didn’t just build acceptance; they improved the technology.

In Nigeria, there’s growing enthusiasm for AI but also scepticism due to weak regulation and infrastructure. How can developers balance innovation with caution?

I often tell young developers: innovation without integrity is chaos with better marketing. Our responsibility is not only to build what’s possible but to build what’s responsible.

During my work with the eNaira, Nigeria’s central bank digital currency, we had to balance speed and caution. We layered development carefully, beginning with core security and compliance before adding features. That approach ensured trust and reliability.

Nigeria can do the same with AI. Start with low-risk, high-impact applications such as hospital scheduling or inventory management before moving to areas like AI-powered credit scoring, where errors can harm people. Honesty is key: acknowledge data gaps and limitations. Responsible innovation isn’t slow innovation; well-planned systems often move faster because they avoid costly rework later.

Many Nigerian institutions lack reliable data infrastructure. How does this affect AI implementation and trustworthiness?

It’s a serious challenge. Poor data quality doesn’t just make AI unreliable; it makes it dangerous. Training on incomplete or biased datasets risks automating inequality.

At OOU, we often had to build the data collection infrastructure before even considering AI. If an AI trained on private hospital data in Lagos is deployed in a rural Taraba clinic, it will fail because the context differs entirely.

Nigeria needs national standards for data verification and management. We must be honest about dataset readiness before deploying AI. Building slowly on a solid foundation is far better than racing ahead and amplifying existing flaws at machine speed. Reliable data is the bedrock of trustworthy AI.

How can Nigeria foster a culture of intellectual integrity in AI development, especially with limited ethical and legal frameworks?

Nigeria stands at a crossroads. We can either remain consumers of foreign AI tools or become innovators creating solutions for our realities. To achieve the latter, ethics must come first.

At Bantu Blockchain Foundation, we prioritised community engagement before deployment. Nigeria needs universities to do the same, teaching AI ethics alongside engineering. We should establish a Nigerian AI Safety Institute to set standards and hold developers accountable, much like medical professionals are held to oaths.

Transparency must become a competitive advantage. Companies should publish ethics reports showing how they test for bias and protect privacy. And there must be consequences when harm occurs due to negligence.

The good news is that Nigeria has always innovated out of necessity. We don’t need to wait for perfect laws. We can begin building a culture of integrity now, project by project, developer by developer.

What role should professional technologists play in shaping public policy and ethical frameworks for AI in healthcare and finance?

Technologists cannot hide behind the phrase “we just build the tools.” We’re shaping systems that impact millions of lives. Therefore, we must engage policymakers early, not to dominate but to inform.

Having worked in healthcare at Craneware, blockchain finance with Interstellar, and digital currency with eNaira, I’ve seen the importance of bridging technical knowledge and real-world impact. Policymakers need clarity about what AI can and cannot do.

We must translate technical concepts into human terms, be transparent about limitations, and include diverse voices in these discussions. The blockchain industry evolved from chaos to structure because stakeholders collaborated. AI must follow that path, but faster, because the stakes are higher.

With the increasing use of AI in healthcare and governance, how can Nigeria address algorithmic bias in such a diverse country?

Nigeria’s diversity demands deliberate inclusion in AI design. I learned this during the OOU proctoring project. Our facial recognition system flagged some students unfairly due to lighting conditions or cultural differences in behaviour. We retrained the models and kept human proctors in the loop to ensure fairness.

Bias in AI can’t be eliminated by chance. It requires diverse teams, mandatory bias testing, and continuous monitoring. Systems should be evaluated across all demographics before deployment. If fairness isn’t proven, the AI should not be released.
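One hedged sketch of what “mandatory bias testing” could look like in practice: compute a model’s false-positive rate per demographic group from a labelled review log, and refuse release if the gap between groups is too wide. The function names and the 5-percentage-point threshold below are illustrative assumptions, not an established standard.

```python
from collections import defaultdict

def false_positive_rates(records):
    """Per-group false-positive rate.

    `records` is a list of (group, flagged_by_model, actually_violated)
    tuples, e.g. drawn from a human-reviewed proctoring log.
    """
    fp = defaultdict(int)   # innocent candidates the model flagged
    neg = defaultdict(int)  # all innocent candidates, per group
    for group, flagged, violated in records:
        if not violated:
            neg[group] += 1
            if flagged:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

def passes_fairness_gate(records, max_gap=0.05):
    """Release gate: block deployment if any two groups' false-positive
    rates differ by more than `max_gap` (assumed threshold)."""
    rates = false_positive_rates(records)
    return max(rates.values()) - min(rates.values()) <= max_gap
```

In a real pipeline the same gate would run on every retraining, and the metric (false positives, false negatives, calibration) would be chosen per application.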

AI outputs should also reflect uncertainty. A system could say, “I’m 70 per cent confident in this diagnosis but have limited data for this demographic.” That level of honesty builds trust.
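Surfacing that honesty can be as simple as attaching a data-coverage caveat to every prediction. A minimal hypothetical sketch follows; the field names and the 500-sample threshold are assumptions for illustration.

```python
def report_prediction(label, confidence, group_sample_count, min_samples=500):
    """Format a model output so it states both its confidence and how much
    training data existed for the patient's demographic group."""
    msg = f"{confidence:.0%} confident in this diagnosis: {label}."
    if group_sample_count < min_samples:
        # Flag thin data coverage instead of projecting false certainty.
        msg += (f" Caution: only {group_sample_count} training examples"
                f" for this demographic; a clinician should review.")
    return msg
```

The point is that uncertainty becomes part of the interface, not something buried in a model card.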

Finally, there must be accessible appeal channels. If an AI denies a loan or flags an exam, people should have the right to challenge it before a qualified human reviewer. AI should support, not replace, human judgment.

Looking ahead, what gives you hope about Nigeria’s path in AI development?

I’m optimistic because the talent is here. I’ve worked with brilliant Nigerian engineers who can compete globally. What we need now is the leadership and discipline to do AI right, rather than fast and reckless.

That means investing in data infrastructure, building ethical frameworks, teaching responsibility alongside innovation, and asking not only “Can we?” but “Should we?”

From my experience across Africa, Europe, and North America, the countries that lead in AI aren’t those that move fastest, but those that move wisely. Nigeria has the ingenuity and resilience to be one of them, but we must start now, with integrity at the core of everything we build.
