Detecting and mitigating bias in machine learning models
| dc.contributor.author | Mulekar, Ashwini | |
| dc.contributor.author | Salati, Abid | |
| dc.contributor.author | Tankala, Divya Kumari | |
| dc.contributor.author | Kumar, Gotte Ranjith | |
| dc.contributor.author | Penubaka, Kiran Kumar Reddy | |
| dc.contributor.author | Ali, Guma | |
| dc.date.accessioned | 2026-05-07T12:17:54Z | |
| dc.date.available | 2026-05-07T12:17:54Z | |
| dc.date.issued | 2026-03-03 | |
| dc.description.abstract | The risks of bias in machine learning models are considerable, particularly in sensitive fields such as health care, employment, and finance. Unfair outcomes from these ML/AI systems can exacerbate existing social inequalities. With the goal of developing a holistic fairness-aware AI framework that can detect, mitigate, and monitor algorithmic bias throughout the ML pipeline, this work proposed a method that integrates causal inference, adversarial debiasing, human-in-the-loop feedback, federated learning, and real-time detection of bias drift, all of which reinforce fairness while minimizing the impact on performance. The study's experimental results demonstrated a reduction in key bias metrics: Disparate Impact decreased by 31% and Equalized Odds Difference decreased by 36%, while an F1-score of 89.1% was maintained. These results demonstrated the framework's capacity to produce equitable outcomes, with relatively little performance sacrifice, across many ML models. The study showed that ethical and regulatory considerations need to be embedded into the deployment of AI systems and provided a scalable, privacy-preserving framework that organizations can use to build more trustworthy, transparent, and socially responsible ML systems. | |
| dc.identifier.citation | Mulekar, A., Salati, A., Tankala, D. K., Kumar, G. R., Penubaka, K. K. R., & Ali, G. (2025, November). Detecting and Mitigating Bias in Machine Learning Models. In 2025 3rd International Conference on Computational Intelligence and Network Systems (CINS) (pp. 1-6). IEEE. | |
| dc.identifier.uri | https://dir.muni.ac.ug/handle/20.500.12260/974 | |
| dc.language.iso | en | |
| dc.publisher | IEEE | |
| dc.subject | Measurement | |
| dc.subject | Training | |
| dc.subject | Ethics | |
| dc.subject | Machine learning algorithms | |
| dc.subject | Federated learning | |
| dc.subject | Finance | |
| dc.subject | Medical services | |
| dc.subject | Human in the loop | |
| dc.subject | Inference algorithms | |
| dc.subject | Artificial intelligence | |
| dc.title | Detecting and mitigating bias in machine learning models | |
| dc.type | Other |
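
The abstract reports a 31% reduction in Disparate Impact and a 36% reduction in Equalized Odds Difference. As a minimal sketch of how these two group-fairness metrics are conventionally computed for binary predictions and a binary protected attribute (the function names, toy data, and the coding of privileged vs. unprivileged groups below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def disparate_impact(y_pred, protected):
    """Ratio of positive-outcome rates: P(yhat=1 | unprivileged) / P(yhat=1 | privileged).
    Values near 1.0 indicate parity; the common "80% rule" flags values below 0.8."""
    rate_unpriv = y_pred[protected == 0].mean()  # group coded 0 assumed unprivileged
    rate_priv = y_pred[protected == 1].mean()    # group coded 1 assumed privileged
    return rate_unpriv / rate_priv

def equalized_odds_difference(y_true, y_pred, protected):
    """Largest gap across groups in true-positive rate and false-positive rate.
    0.0 means the classifier's error behaviour is identical for both groups."""
    gaps = []
    for label in (1, 0):  # label 1 gives the TPR gap, label 0 the FPR gap
        mask = y_true == label
        rate_unpriv = y_pred[mask & (protected == 0)].mean()
        rate_priv = y_pred[mask & (protected == 1)].mean()
        gaps.append(abs(rate_unpriv - rate_priv))
    return max(gaps)

# Illustrative toy data only: binary predictions and a binary protected attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
protected = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print("Disparate Impact:", disparate_impact(y_pred, protected))
print("Equalized Odds Difference:", equalized_odds_difference(y_true, y_pred, protected))
```

Conventions differ on which group is treated as the reference in the Disparate Impact ratio; fairness toolkits typically let the user specify the privileged group explicitly rather than assuming a fixed coding as above.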