Evolution of machine learning: Akinyemi Arabambi on innovation, challenges, and career growth
Machine learning has become a driving force behind technological advancements, transforming industries and reshaping the way businesses operate. However, the journey from understanding the fundamentals to deploying real-world solutions comes with its own set of challenges.
Akinyemi Arabambi, a machine learning engineer with two years of experience, shares his insights on the evolving landscape of AI, the key challenges in the field, and how aspiring professionals can navigate their careers successfully.
What inspired you to pursue a career in machine learning?
I’ve always been passionate about problem-solving and technological advancement. My journey in machine learning started during my master’s degree project in Norway, when I discovered how data-driven insights can lead to powerful decision-making. My background in mechanical engineering and energy systems gave me a strong foundation in mathematical modelling and optimisation, which has been beneficial in my AI journey. My exposure to artificial intelligence, particularly its ability to extract insights from data and make autonomous decisions, fuelled my desire to dive deeper into the field. To enhance my skills, I took online courses, worked on projects with different groups, and joined AI- and data-science-focused communities. I also experimented with Python and machine learning frameworks, building small projects and contributing to open-source communities, which has been very helpful.
What are some of the biggest challenges in deploying machine learning models?
One of the biggest challenges is bridging the gap between research and real-world application. Many models perform well in controlled environments but struggle in production due to issues like data drift, model interpretability, and scalability.
Another challenge is ensuring ethical AI practices. Bias in training data can lead to unfair predictions, so it’s important to implement responsible AI techniques. Additionally, deploying ML models at scale requires strong software engineering skills, efficient infrastructure, and continuous monitoring to maintain model performance over time.
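The data drift mentioned above can be monitored with a simple distribution check. The sketch below is an illustrative, stdlib-only implementation of the Population Stability Index (PSI), one common drift metric; the feature values, bin count, and thresholds are assumptions for the example, not part of any system described in the interview.

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between a training sample and a
    production sample of one feature. A common rule of thumb: PSI < 0.1
    means no meaningful drift, 0.1-0.25 moderate, > 0.25 significant."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(max(int((x - lo) / width), 0), bins - 1)  # clamp outliers
            counts[i] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    p, q = bin_fractions(expected), bin_fractions(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(5000)]  # training-time feature
same  = [random.gauss(0.0, 1.0) for _ in range(5000)]  # production, no drift
drift = [random.gauss(0.8, 1.0) for _ in range(5000)]  # production, shifted mean

print(psi(train, same))   # small value: distributions still match
print(psi(train, drift))  # large value: drift should be investigated
```

In a real monitoring pipeline this check would run on a schedule per feature and raise an alert when the index crosses a threshold.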
Can you share a project you’ve worked on that had a significant impact?
One of my most impactful projects was developing a Retrieval-Augmented Generation (RAG) system on IBM Power10, leveraging ‘DB2 for i’ as the backend database. The system was designed to improve enterprise knowledge retrieval by integrating natural language processing with a vector database. This project was significant because it enabled businesses to query large structured datasets conversationally, enhancing decision-making and efficiency. It also tackled scalability challenges in the system architecture, optimising inference speed and reducing computational overhead. Another meaningful project was building an AI-powered education platform that creates a personalised learning experience for learners.
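The core retrieve-then-generate pattern behind a RAG system can be sketched in a few lines. This is a toy illustration only: the bag-of-words `embed` function and the sample documents are stand-ins, where a production system like the one described would use a real sentence-embedding model and a vector database.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words count vector. A real RAG system
    would call a sentence-embedding model here instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    """Rank stored documents by similarity to the query and return the
    top k -- the 'retrieval' half of retrieval-augmented generation."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Quarterly revenue grew eight percent in the retail division",
    "The data centre migration finished ahead of schedule",
    "Retail revenue growth was driven by online sales",
]
context = retrieve("How did retail revenue change?", docs)
prompt = "Answer using only this context:\n" + "\n".join(context)
# The assembled prompt would then be sent to a language model for generation.
```

Grounding the model's answer in retrieved documents, rather than its training data alone, is what makes conversational querying of enterprise datasets feasible.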
Many people think machine learning is all about building models. What are some overlooked aspects of the field?
That’s a common misconception. While building models is an important aspect, there’s much more to machine learning. A significant amount of time is spent on data preprocessing, feature engineering, and model evaluation. Without high-quality data, even the best algorithms won’t perform well.
Additionally, deploying and maintaining models in production is another crucial but often overlooked area. It involves working with cloud platforms, setting up monitoring pipelines, and ensuring models remain effective over time. Machine learning is a blend of data science, software engineering, and business problem-solving. Understanding the business problem is just as important as technical skills.
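Two of the preprocessing steps mentioned above, scaling numeric features and encoding categorical ones, can be shown in a minimal stdlib-only sketch. The column names and sample values are invented for illustration; libraries such as scikit-learn provide production-grade versions of both transforms.

```python
def standardize(values):
    """Scale a numeric column to zero mean and unit variance --
    a common preprocessing step before model training."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = var ** 0.5 or 1.0  # guard against a constant column
    return [(v - mean) / std for v in values]

def one_hot(values):
    """Turn a categorical column into indicator (one-hot) features."""
    categories = sorted(set(values))
    return [[1 if v == c else 0 for c in categories] for v in values]

# Hypothetical raw data: one numeric and one categorical feature.
ages = [25, 35, 45]
cities = ["Lagos", "Oslo", "Lagos"]
rows = [[a] + oh for a, oh in zip(standardize(ages), one_hot(cities))]
# Each row is now a purely numeric feature vector ready for a model.
```

Without steps like these, many algorithms either reject non-numeric input outright or let large-magnitude features dominate training.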
What tools and technologies do you primarily work with?
I work across a variety of tech stacks depending on the needs of each project. Some of the main tools and frameworks I use include programming languages like Python, C++, and Rust. For machine learning, I work with frameworks such as PyTorch, TensorFlow, and scikit-learn. For natural language processing, I also use large-language-model tooling like Llama.cpp, vLLM, Ollama, and Hugging Face Transformers.
When it comes to data storage and vector databases, I often leverage services like Pinecone, Milvus, and DB2 for i. For MLOps and deployment, I rely on tools such as Docker, Podman, Kubernetes, and Node-RED. As for compute and hardware, I utilise IBM Power10, which takes advantage of MMA for acceleration, as well as GPUs and cloud platforms like AWS and Azure. Additionally, I build custom AI pipelines, embeddings, and real-time inference systems tailored for specific enterprise needs.
How do you stay updated with the latest trends in AI and machine learning?
AI is evolving rapidly, so I read papers on arXiv and from conferences like NeurIPS, ICML, and CVPR, as well as blogs such as Towards Data Science and the Google AI blog, which provide valuable insights into recent advancements. I also attend conferences like IBM iUG and AI industry summits. In addition, I engage with open-source projects, which helps me stay hands-on with new innovations. I also follow discussions on Hugging Face, Reddit ML, and LinkedIn AI communities. Networking with other professionals helps me stay updated and exchange knowledge on best practices. Often, the best way to stay ahead is to build and experiment with new tools, whether it’s a new transformer model or a novel optimisation technique.
What advice would you give to someone looking to break into the field of machine learning?
Start with the fundamentals: learn Python, statistics, and linear algebra. Work on projects that interest you and apply what you learn to real-world problems.
Joining online communities, contributing to open-source projects, and networking with professionals can accelerate your growth. Most importantly, stay curious and never stop learning. The field is constantly evolving, and adaptability is key to long-term success.
What do you see as the future of machine learning?
Machine learning is moving towards more efficient and responsible AI. Explainability and fairness in AI models will become increasingly important. We’re also seeing a shift towards automated machine learning (AutoML), which will make AI more accessible to non-experts.
In the coming years, we can expect greater integration of AI across industries, from healthcare to finance. With advancements in computing power and data availability, machine learning will continue to drive innovation and transform the way we interact with technology.
