This research project aims to explore and develop prompt engineering techniques to improve fairness (i.e., mitigate biases) in large language models.
Research Areas:
Fair AI; Large Language Models
Although the world is witnessing an unprecedented application of Large Language Model (LLM)-based generative AI systems across various domains, these systems often produce biased or unfair outcomes. Careful and appropriate prompt engineering is crucial for improving fairness and reducing bias in LLMs. This practical approach involves understanding the nuances of language and context to create prompts that encourage more equitable responses across different demographic groups. By carefully designing and refining the input prompts used to guide these models, it is possible to mitigate biased outputs that often reflect societal prejudices. This project aims to develop prompt engineering techniques to improve fairness (i.e., reduce biases) in large language models.
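As a minimal illustration of the kind of prompt-level intervention described above, one common starting point is to wrap a user query with an explicit fairness instruction before sending it to a model. The instruction wording, function name, and template below are assumptions chosen for illustration, not techniques specified by the project.

```python
# Illustrative sketch of a fairness-oriented prompt template.
# The instruction text and the helper name `build_fair_prompt`
# are hypothetical examples, not part of the project description.

FAIRNESS_INSTRUCTION = (
    "Answer the question below. Treat all demographic groups equitably, "
    "avoid stereotypes, and do not assume gender, race, age, or "
    "nationality unless the question states them."
)

def build_fair_prompt(user_query: str) -> str:
    """Prepend a bias-mitigation instruction to a raw user query."""
    return f"{FAIRNESS_INSTRUCTION}\n\nQuestion: {user_query}"

prompt = build_fair_prompt("Describe a typical software engineer.")
print(prompt)
```

The resulting prompt would then be passed to the LLM in place of the raw query; evaluating whether such templates actually reduce biased outputs across demographic groups is part of what the project would investigate.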
The successful candidate must:
How to apply:
To apply, please email the following to [email protected]:
The opportunity ID for this research opportunity is 3570