Encoding and decoding gender: Investigating bias and language in artificial intelligence models

Presenter's Name(s)

Elizabeth Lembach

Abstract

As artificial intelligence (AI) models become deeply embedded in social systems, discussions on their ethical creation and application have intensified, particularly in regards to the consequences of biased models. This study examines how large language models (LLMs) such as ChatGPT-4.o encode, and potentially reinforce harmful social biases. Through a paired-question experiment, this research assesses (1) how gender is encoded in AI models such as GPT4.o, (2) how language influences gendered outputs, and (3) the extent to which AI-generated gender bias aligns with or diverges from human understanding.

Primary Faculty Mentor Name

Luis Duffaut

Status

Undergraduate

Student College

College of Engineering and Mathematical Sciences

Program/Major

Computer Science

Primary Research Category

Engineering and Math Science

Abstract only.

Share

COinS
 

Encoding and decoding gender: Investigating bias and language in artificial intelligence models

As artificial intelligence (AI) models become deeply embedded in social systems, discussions on their ethical creation and application have intensified, particularly in regards to the consequences of biased models. This study examines how large language models (LLMs) such as ChatGPT-4.o encode, and potentially reinforce harmful social biases. Through a paired-question experiment, this research assesses (1) how gender is encoded in AI models such as GPT4.o, (2) how language influences gendered outputs, and (3) the extent to which AI-generated gender bias aligns with or diverges from human understanding.