AI Product Manager – Friend or Foe?

In many offices, technical leads work far from customers. Not me. You might find me leaning over a desk, going through a service log with a support rep, or scrolling through user complaints that never show up in dashboards but always show up in sentiment. I pay attention to what people struggle with, not just the systems that caused it, and that is what makes me unusual in the world of artificial intelligence.

I work at Access Bank, one of the major players in West Africa’s financial sector. My job, broadly speaking, is to help the AI-powered products the bank uses in its operations behave better. Not just faster, but better, in the way that matters when someone is trying to transfer money, report fraud, or understand why a payment did not go through. I did not come into this line of work from a technical background. I came from the part of the business where people answer phones and solve everyday problems, and that beginning has shaped my career path. “When you’ve had to explain a broken system to a real customer, you stop thinking about features,” I often say. “You start thinking about clarity.”

In my role, I lead the development of tools that quietly handle thousands of actions every day: fraud detectors that spot patterns and flag strange activity, bots that respond to common questions before a human needs to step in, and workflows that reduce how often someone needs to print, sign, scan, and repeat. My contributions are easy to miss if you are only looking at the surface, but inside the bank they have changed the way things run and how people feel when they use the system.

Not long ago, the fraud alert system at Access Bank was overreacting. It flagged legitimate transactions too often, and customers were getting nervous calls about routine payments. In addressing this, I did not chase a technical solution right away. Instead, I asked to see what kinds of cases were being reviewed most often. From there, patterns emerged around certain vendors, transaction sizes, and times of day. So my team introduced filters based on that data, not just on anomaly scores. Within a few months, the false alerts had dropped sharply, and the review team was breathing easier.
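The idea of layering review-driven rules on top of model scores can be sketched roughly as follows. This is a minimal illustration, not the bank's actual system: the vendor list, thresholds, and field names are all hypothetical stand-ins for the patterns the case reviews surfaced.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    vendor: str
    amount: float
    hour: int             # hour of day, 0-23
    anomaly_score: float  # score from the upstream model, 0.0-1.0

# Illustrative placeholders for patterns learned from case reviews
TRUSTED_VENDORS = {"acme_utilities", "city_power"}
ROUTINE_MAX_AMOUNT = 500.0
BUSINESS_HOURS = range(8, 18)

def should_flag(tx: Transaction, score_threshold: float = 0.8) -> bool:
    """Flag only when the anomaly score is high AND no routine pattern applies."""
    looks_routine = (
        tx.vendor in TRUSTED_VENDORS
        and tx.amount <= ROUTINE_MAX_AMOUNT
        and tx.hour in BUSINESS_HOURS
    )
    return tx.anomaly_score >= score_threshold and not looks_routine
```

The point of the design is that the model's score alone never triggers an alert; a cheap, explainable rule layer gets the chance to veto it first.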

One of my favourite projects to date was creating an AI-powered customer service assistant for the bank. After testing, I found that response times had shortened by more than 40%, and the system was capable of handling more than 80% of customer inquiries. Still, I observed a decline in satisfaction tied to the AI’s handling of delicate or emotionally sensitive complaints. So I responded by setting up a sentiment-aware escalation system that used natural language processing tools to flag complicated requests and redirect them to human agents. This tweak eventually increased overall user satisfaction by restoring consumer trust.
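The routing logic behind that kind of escalation can be sketched in a few lines. This is an assumption-laden toy version: real systems use trained sentiment models rather than a keyword list, and the cue words, confidence threshold, and return labels here are invented for illustration.

```python
# Placeholder for a real sentiment model: a crude negative-cue lexicon
NEGATIVE_CUES = {"angry", "frustrated", "unacceptable", "scam", "lost money"}

def route(message: str, bot_confidence: float) -> str:
    """Send a request to a human when sentiment looks negative
    or when the bot is not confident it understood the request."""
    text = message.lower()
    looks_negative = any(cue in text for cue in NEGATIVE_CUES)
    if looks_negative or bot_confidence < 0.6:
        return "human_agent"
    return "bot"
```

Whatever replaces the toy lexicon, the shape stays the same: the bot handles the routine majority, and anything emotionally loaded or ambiguous is routed to a person before frustration compounds.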

This kind of impact reflects what is currently happening across the industry. A 2023 McKinsey report identified banking as one of the top sectors seeing real value from AI, particularly in customer experience, fraud detection, and process automation, all areas my work touches daily.
What also stands out in my work is not just the use of AI but the restraint. I do not deploy automation just because it is available. I ask whether it belongs. When building a product recommendation engine for a retail client, I noticed early on that the suggestions felt repetitive. Instead of tuning the algorithm, I stepped back. “Let us look at how people actually browse,” I suggested. So we did, and we found that users moved in nonlinear ways: browsing without buying, comparing across categories, and saving items for later. By mapping those patterns, we reworked how the system made suggestions. Sure enough, customers responded and sales rose, but what pleased me most was that complaints dropped.
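One way to turn those nonlinear browsing signals into rankings is to weight a session's events by intent rather than recency. The sketch below is purely illustrative: the event types, weights, and data shapes are assumptions, not the client's actual engine.

```python
from collections import Counter

# Hypothetical weights: saving an item signals more intent than a mere view
EVENT_WEIGHTS = {"viewed": 1.0, "compared": 2.0, "saved": 3.0}

def rank_candidates(session_events, candidates):
    """Rank candidate items by how much interest the session showed
    in each item's category, across all event types.

    session_events: list of (event_type, category) tuples
    candidates: list of dicts with "id" and "category" keys
    """
    category_interest = Counter()
    for event_type, category in session_events:
        category_interest[category] += EVENT_WEIGHTS.get(event_type, 0.0)
    return sorted(
        candidates,
        key=lambda item: category_interest[item["category"]],
        reverse=True,
    )
```

The design choice this illustrates is the one described above: instead of recommending more of the last thing viewed, the system aggregates weaker signals (comparisons, saves) that reveal what the shopper is actually circling.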

I also think about ethics a lot, not just in theory but in my day-to-day work, so I make sure our products actually follow the ethical guidelines we have created.

I come back to them during project reviews, especially when something feels off. I will ask, “Where’s this data coming from?” or “Would a regular person understand this result?” or even, “Who does this tool still fail?” These are not just checkboxes to me; they are ways to keep the work grounded. I was glad to see the Stanford AI Index Report say something similar: that transparency, human oversight, and constant bias checks are especially important in fields like finance, where decisions affect people’s lives.

There are a lot of professionals in tech who can tell you what their product does. I can do that too, but I will also tell you why it matters, and to whom. My work is not powered by grand claims. It is powered by clear thinking, careful choices, and a refusal to build products people cannot understand. It is clear that I am not the only one thinking this way. As Scientific American put it in 2023, “If you fundamentally don’t understand something as unpredictable as AI, how can you trust it?” That aligns with what I have seen up close: no matter how advanced a system is, if users cannot make sense of it, they will not trust it, and tech that is not trusted simply will not be used.

So when people ask me about the future of AI in banking, I shrug. “The tech will keep improving. That’s not the hard part,” I say. “The hard part is making sure we’re still building things people can live with.” In a field often driven by acceleration, I try to bring a kind of intentionality that feels rare. I do not just want the system to work. I want it to make sense, and that, more than any buzzword, is what real leadership looks like.
