In a groundbreaking development at the intersection of artificial intelligence (AI) and medicine, Tobi Titus Oyekanmi, a computer scientist and researcher at New Mexico Highlands University, has unveiled a deep learning model that could revolutionise the way brain cancer is diagnosed and interpreted, particularly in resource-limited regions.
His study, titled “Deep Learning-Based Diagnosis of Brain Cancer Using Convolutional Neural Networks on MRI Scans: A Comparative Study of Model Architectures and Tumor Classification Accuracy,” introduces a novel LightBT-CNN system that achieved 98% diagnostic accuracy on MRI brain scans.
The research, published in the American Academic Scientific Research Journal for Engineering, Technology, and Sciences (ASRJETS), positions Oyekanmi at the forefront of the emerging field of explainable artificial intelligence (XAI) for medical imaging.
Brain cancer remains one of the deadliest forms of cancer globally, with diagnosis often depending on the manual interpretation of MRI scans, a process that can be both time-consuming and prone to error.
Explaining the project's goals, Oyekanmi said, “AI can help level the playing field. The goal was to build a lightweight yet powerful neural network that can analyse brain MRI scans with accuracy comparable to expert radiologists but without the need for expensive infrastructure.”
To achieve this, Oyekanmi led a multidisciplinary team comprising Peter Adigun, Nelson Azeez, and Ayodeji Adeniyi, developing the LightBT-CNN, a convolutional neural network designed to classify four tumor types (glioma, meningioma, pituitary, and healthy brain scans) using over 7,000 MRI images.
Unlike massive deep learning architectures such as VGG16 or ResNet50, which require high-end GPUs, Oyekanmi’s LightBT-CNN uses only 3.6 million trainable parameters, making it compact, cost-effective, and ideal for hospitals in developing countries.
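Trainable-parameter counts like the 3.6 million quoted above come from standard per-layer formulas. The sketch below tallies them for a hypothetical compact stack; the layer sizes are invented for illustration and are not the published LightBT-CNN configuration:

```python
# Parameter counting for a hypothetical small CNN -- illustrative only,
# not the actual LightBT-CNN architecture.

def conv2d_params(in_ch, out_ch, k):
    """(k*k*in_ch + 1) * out_ch: one k x k kernel per filter, plus a bias."""
    return (k * k * in_ch + 1) * out_ch

def dense_params(in_units, out_units):
    """in_units * out_units weights plus one bias per output unit."""
    return in_units * out_units + out_units

# A made-up compact stack: three conv layers, then a classifier head.
total = (
    conv2d_params(1, 32, 3)            # grayscale MRI slice -> 32 filters
    + conv2d_params(32, 64, 3)
    + conv2d_params(64, 128, 3)
    + dense_params(128 * 16 * 16, 64)  # flattened feature map -> 64 units
    + dense_params(64, 4)              # 4 classes: glioma, meningioma,
)                                      # pituitary, healthy

print(f"{total:,} trainable parameters")
```

Even this toy stack lands in the millions, almost all of it in the first dense layer, which is why compact designs keep the flattened feature map small.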
Built in Python and TensorFlow, the model achieved exceptional precision and recall rates above 95% across all tumor classes. What sets it apart is its interpretability. Through Gradient-weighted Class Activation Mapping (Grad-CAM), the system visually highlights the brain regions influencing its predictions, giving doctors transparent insights into each diagnosis.
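Per-class precision and recall figures of the kind reported above are read off the model's confusion matrix. A minimal sketch, with invented counts standing in for real evaluation results:

```python
# Per-class precision and recall from a 4-class confusion matrix.
# Rows are the true class, columns the predicted class; counts are made up.
classes = ["glioma", "meningioma", "pituitary", "healthy"]
cm = [
    [96,  2,  1,  1],
    [ 1, 97,  1,  1],
    [ 2,  1, 96,  1],
    [ 0,  1,  0, 99],
]

def precision(cm, i):
    """True positives over everything predicted as class i (column sum)."""
    return cm[i][i] / sum(row[i] for row in cm)

def recall(cm, i):
    """True positives over everything that truly is class i (row sum)."""
    return cm[i][i] / sum(cm[i])

for i, name in enumerate(classes):
    print(f"{name:<11} precision={precision(cm, i):.3f} "
          f"recall={recall(cm, i):.3f}")
```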
“Trust is everything in medicine. If an AI can show why it made a decision, clinicians are more likely to adopt it,” Oyekanmi said.
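Grad-CAM itself is a short computation: the gradients of the predicted class score with respect to the last convolutional layer's feature maps are averaged into per-channel importance weights, and a weighted, ReLU-clipped sum of those maps gives the heatmap. A minimal NumPy sketch with random stand-in tensors (a real system would pull the activations and gradients from the trained network):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from the last conv layer's activations and the
    gradients of the target class score w.r.t. those activations.
    Both arrays have shape (H, W, C)."""
    # Global-average-pool the gradients: one importance weight per channel.
    weights = gradients.mean(axis=(0, 1))                    # shape (C,)
    # Weighted sum of feature maps, ReLU to keep only positive evidence.
    heatmap = np.maximum((activations * weights).sum(axis=-1), 0.0)
    # Normalise to [0, 1] so it can be overlaid on the MRI slice.
    if heatmap.max() > 0:
        heatmap /= heatmap.max()
    return heatmap

rng = np.random.default_rng(0)
acts = rng.random((14, 14, 128))        # stand-in feature maps
grads = rng.standard_normal((14, 14, 128))
cam = grad_cam(acts, grads)
print(cam.shape, float(cam.min()), float(cam.max()))
```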
Though based in the United States, Oyekanmi has remained deeply connected to Nigerian research networks, collaborating with physicist Nelson Abimbola Azeez of the University of Abuja. Together, they aimed to apply AI expertise to diagnostic challenges across Africa, from brain tumors to pneumonia detection. “This is not just about publishing papers,” Oyekanmi emphasised. “It’s about creating practical tools that improve patient outcomes and build local capacity in medical AI.”
His earlier collaboration with Adigun and Adeniyi on AI-based X-ray interpretation for pneumonia detection paved the way for this larger brain cancer project, underscoring a steady commitment to using AI for social good.
Earlier this year, Oyekanmi received the 2025 NIPES Award for Outstanding Contribution to Research and Innovation, presented by the National Institute of Professional Engineers and Scientists (NIPES). Selected from over 1,200 nominations across four countries, his work was recognised for its originality, measurable impact, and alignment with global research standards. Announced ahead of the 2025 NIPES International Conference at the University of Benin, the recognition further cemented Oyekanmi’s reputation as one of Nigeria’s most promising voices in AI-driven scientific research.
“Recognition from NIPES means a lot to me. It reminds me that impactful research isn’t just about algorithms, it’s about improving lives,” he said.
Experts have applauded Oyekanmi’s work for combining academic rigour with real-world relevance. The study compares LightBT-CNN’s performance against global benchmarks like ResNet and EfficientNet, showing near-parity in accuracy but with far lower computational demands, a crucial advantage for healthcare systems with limited digital infrastructure.
“AI doesn’t have to be complicated to be effective,” Oyekanmi added. “Sometimes simplicity and efficiency matter more than brute-force computation.”
Despite the success, he acknowledged limitations, noting that the dataset represents controlled MRI conditions rather than the variability of real-world hospital settings.
Looking ahead, Oyekanmi plans to work with clinical partners to validate the system on real patient data and explore multi-modal imaging that integrates MRI with CT and PET scans.
He is also advocating for cross-institutional AI training programs in Nigerian universities to equip the next generation of scientists with hands-on experience in medical machine learning.
“This work reminds us that innovation isn’t confined to big tech companies. With the right vision, collaboration, and compassion, AI can become a tool for equity, helping every patient, everywhere, get the care they deserve,” he reflected.