Date of Completion

2025

Document Type

Honors College Thesis

Department

Computer Science

Thesis Type

Honors College

First Advisor

Lisa Dion

Second Advisor

Yuanyuan Feng

Third Advisor

Chris Danforth

Keywords

Artificial Intelligence, Gender, ChatGPT, Bias, LLM

Abstract

As artificial intelligence (AI) models have become increasingly embedded within social systems, discussion surrounding the ethics of AI creation and application has grown. This study explores how AI models, specifically large language models (LLMs) such as OpenAI’s GPT-4o mini, encode, and possibly propagate, harmful social biases. Through analysis of quantitative and qualitative data generated through a paired-prompt experiment, this study attempts to assess (1) how gender is encoded within LLMs such as GPT-4o mini, (2) how language influences the gendering of output, and (3) the extent to which LLMs’ gender bias (or lack thereof) aligns with or diverges from a human understanding of gender. By creating a series of paired prompts with subtle gendered differences, this study aims to identify specific patterns in word correlations with gender and the relationship between human and AI gender biases. Additionally, by drawing on insights from previously published studies of gender, linguistics, and AI development and ethics, this research contributes to the growing discourse surrounding bias in LLMs.

Creative Commons License

Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License.
