This project investigates the impact of data bias on the fairness and reliability of unsupervised machine learning algorithms. While fairness has been studied extensively in supervised settings, unsupervised methods such as clustering and dimensionality reduction remain vulnerable to latent biases in input data, which can propagate through downstream analyses and decision-making pipelines. The research aims to develop theoretical frameworks and practical methodologies for detecting, quantifying, and mitigating bias in unsupervised learning, with a focus on promoting equitable outcomes across diverse applications, including healthcare, education, and social data analytics.
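As a rough illustration of what "quantifying bias" can mean in an unsupervised pipeline, the sketch below clusters a synthetic dataset and reports how evenly a sensitive group is represented in each cluster (a simple balance measure). This is only a minimal example under assumed data and thresholds, not a method prescribed by the project.

```python
# Minimal sketch (illustrative only): measure how evenly a sensitive
# attribute is represented across k-means clusters, one simple proxy
# for bias in an unsupervised pipeline. Data and attribute are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic features and a binary sensitive attribute (e.g., group A / B).
X = rng.normal(size=(500, 5))
group = rng.integers(0, 2, size=500)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

overall_rate = group.mean()
for c in range(3):
    in_cluster = group[labels == c]
    rate = in_cluster.mean()
    # Balance compares each cluster's group proportion to the overall one;
    # values far below 1 suggest the clustering separates the groups.
    balance = min(rate, overall_rate) / max(rate, overall_rate)
    print(f"cluster {c}: size={in_cluster.size}, "
          f"group rate={rate:.2f}, balance={balance:.2f}")
```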
Keywords:
Fair AI, Data Bias, Unsupervised Machine Learning
Electrical and Computer Engineering
The successful candidate must:
How to apply:
To apply, please email the following to [email protected]:
The ID for this research opportunity is 3655.