Developing a Large-Scale Language Model to Unveil and Alleviate Gender and Age Biases in Australian Job Ads

Mao, Ruochen and Tan, Liming and Moieni, Rezza and Lee, Nicole (2024) Developing a Large-Scale Language Model to Unveil and Alleviate Gender and Age Biases in Australian Job Ads. Open Journal of Social Sciences, 12 (06). pp. 109-136. ISSN 2327-5952

jss2024126_61767944.pdf - Published Version

Download (10MB)

Abstract

This study explores the application of large-scale language models to detecting and reducing gender and age biases in job advertisements. To build gender- and age-bias detectors, we trained and tested several large-scale language models, including RoBERTa, ALBERT, and GPT-2, and found that RoBERTa performed best at detecting both types of bias. Analysis based on these models revealed significant male bias in job ads, particularly in the information and communication technology, manufacturing, transportation and logistics, and services industries. The age-bias analysis likewise revealed a preference for younger applicants, with limited demand for older candidates. We then explored natural language generation with ChatGPT to mitigate gender bias, producing two versions of each job ad: one adhering to gender-neutral language principles and one intentionally incorporating feminizing language. User research showed that both versions significantly reduced gender bias and increased the ads' appeal to female candidates. The contributions of this study include an in-depth analysis of gender and age biases in Australian job advertisements, the development of gender- and age-bias detectors built on large-scale language models, and an exploration of ChatGPT-based natural language generation to mitigate gender bias. By addressing these biases, we contribute to a more inclusive and equitable job market.
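The paper's detectors are fine-tuned transformer classifiers (RoBERTa performed best); the abstract does not give implementation details. As a much simpler illustration of the underlying idea of flagging gendered language in job ads, the sketch below uses a lexicon-matching baseline. The word lists and function name are illustrative assumptions, not the paper's actual lexicons or method.

```python
import re

# Illustrative word lists only (assumptions, not the paper's lexicons):
# small samples of masculine- and feminine-coded terms commonly cited
# in research on gendered wording in job advertisements.
MASCULINE_CODED = {"competitive", "dominant", "ambitious", "assertive", "driven"}
FEMININE_CODED = {"collaborative", "supportive", "nurturing", "interpersonal", "committed"}

def gender_coded_terms(ad_text: str) -> dict:
    """Return the masculine- and feminine-coded words found in a job ad."""
    words = re.findall(r"[a-z]+", ad_text.lower())
    return {
        "masculine": sorted(w for w in words if w in MASCULINE_CODED),
        "feminine": sorted(w for w in words if w in FEMININE_CODED),
    }

ad = "We seek a driven, competitive engineer who is also collaborative."
print(gender_coded_terms(ad))
```

A transformer-based detector like the one described in the paper would replace the fixed word lists with a model fine-tuned on labelled job ads, letting it pick up bias expressed through context rather than individual keywords.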

Item Type: Article
Subjects: Eurolib Press > Social Sciences and Humanities
Depositing User: Managing Editor
Date Deposited: 18 Jun 2024 09:37
Last Modified: 18 Jun 2024 09:37
URI: http://info.submit4journal.com/id/eprint/3668
