Detecting and mitigating bias in machine learning models

dc.contributor.author: Mulekar, Ashwini
dc.contributor.author: Salati, Abid
dc.contributor.author: Tankala, Divya Kumari
dc.contributor.author: Kumar, Gotte Ranjith
dc.contributor.author: Penubaka, Kiran Kumar Reddy
dc.contributor.author: Ali, Guma
dc.date.accessioned: 2026-05-07T12:17:54Z
dc.date.available: 2026-05-07T12:17:54Z
dc.date.issued: 2026-03-03
dc.description: The study contributes to SDG 2 (Zero Hunger), SDG 1 (No Poverty), and SDG 9 (Innovation and Infrastructure) by improving agricultural productivity through better integration of indigenous and scientific knowledge, enhancing extension service delivery for smallholder farmers in Uganda. It supports Uganda’s NDP IV by strengthening agricultural transformation, knowledge systems, and rural livelihoods for inclusive growth and resilience.
dc.description.abstract: The risks of bias in machine learning models are considerable, particularly in sensitive fields such as health care, employment, and finance, where unfair outcomes from ML/AI systems can exacerbate existing social inequalities. With the goal of developing a holistic fairness-aware AI framework that can detect, mitigate, and monitor algorithmic bias throughout the ML pipeline, this work proposed a method that integrates causal inference, adversarial debiasing, human-in-the-loop feedback, federated learning, and real-time detection of bias drift, all of which reinforce fairness while minimising the impact on performance. The study's experimental results demonstrated reductions in key bias metrics: Disparate Impact decreased by 31% and Equalized Odds Difference by 36%, while the framework delivered an F1-score of 89.1%. These results demonstrate the framework's capacity to produce equitable outcomes, with relatively minimal performance sacrifice, across many ML models. The study showed that ethical and regulatory considerations need to be embedded into the deployment of AI systems and provided a scalable, privacy-preserving framework that organizations can use to build more trustworthy, transparent, and socially responsible ML systems.
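The abstract reports two standard group-fairness metrics, Disparate Impact and Equalized Odds Difference. The paper's own implementation is not reproduced here; the following is a minimal illustrative sketch of how these metrics are commonly computed from binary predictions and a binary protected-group attribute (all variable names and the toy data are hypothetical).

```python
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of positive-prediction rates, unprivileged (0) over privileged (1).
    1.0 means parity; the common '80% rule' flags values below 0.8."""
    return y_pred[group == 0].mean() / y_pred[group == 1].mean()

def equalized_odds_difference(y_true, y_pred, group):
    """Largest absolute gap between groups in true-positive rate or
    false-positive rate (one common scalarisation of equalized odds)."""
    def rates(mask):
        yt, yp = y_true[mask], y_pred[mask]
        tpr = yp[yt == 1].mean() if (yt == 1).any() else 0.0
        fpr = yp[yt == 0].mean() if (yt == 0).any() else 0.0
        return tpr, fpr
    tpr0, fpr0 = rates(group == 0)
    tpr1, fpr1 = rates(group == 1)
    return max(abs(tpr0 - tpr1), abs(fpr0 - fpr1))

# Toy data: 8 individuals, binary labels/predictions, two groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(disparate_impact(y_pred, group))                    # → 1.0
print(round(equalized_odds_difference(y_true, y_pred, group), 2))  # → 0.33
```

Libraries such as AIF360 or Fairlearn provide audited implementations of these metrics; a sketch like this is useful mainly for understanding what the reported 31% and 36% reductions refer to.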
dc.identifier.citation: Mulekar, A., Salati, A., Tankala, D. K., Kumar, G. R., Penubaka, K. K. R., & Ali, G. (2025, November). Detecting and Mitigating Bias in Machine Learning Models. In 2025 3rd International Conference on Computational Intelligence and Network Systems (CINS) (pp. 1-6). IEEE.
dc.identifier.uri: https://dir.muni.ac.ug/handle/20.500.12260/974
dc.language.iso: en
dc.publisher: IEEE
dc.subject: Measurement
dc.subject: Training
dc.subject: Ethics
dc.subject: Machine learning algorithms
dc.subject: Federated learning
dc.subject: Finance
dc.subject: Medical services
dc.subject: Human in the loop
dc.subject: Inference algorithms
dc.subject: Artificial intelligence
dc.title: Detecting and mitigating bias in machine learning models
dc.type: Other

Files

Original bundle
Name: Guma_2026_Conf_03032026.pdf
Size: 3.71 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 4.17 KB
Description: Item-specific license agreed upon to submission