Don’t use ChatGPT to write your job adverts: a study on its bias

ChatGPT has found itself in the eye of a media storm recently. With stories of the tool being used for everything from crafting airline complaint emails to putting together business presentations and writing books, its uses for simplifying some of the most mundane tasks seem endless. 

For time-pressed recruitment teams, it’s always going to be tempting to snap our fingers and outsource some of the manual labour that comes with hiring new talent by leveraging AI. With tech that helps us track applications, anonymise CVs, and analyse candidate interviews, AI already plays a strong role in helping organisations optimise their recruitment processes. So why couldn’t it help shoulder some of the more human-dependent burdens, too, such as writing job adverts? 

With ChatGPT’s founders recently admitting that the tool is biased and offensive, we wanted to find out just how biased ChatGPT would be when it came to writing a job advert. So, our research team put ChatGPT to the test against some job adverts written by humans. 

So, the question is: is ChatGPT biased?

Simply put, yes, ChatGPT is biased. In fact, we found that ChatGPT is more biased than we first thought. A lot more. Here’s why you should think twice about using ChatGPT to write your job adverts.

ChatGPT bias

Job ads written by ChatGPT are almost twice as biased overall as those written by humans

We randomly selected more than 7,000 publicly available job adverts across 15 different industries, and then asked ChatGPT to generate an advert for each of those positions. We then evaluated all of the adverts for bias using the Develop Diverse platform across a number of categories, including gender, neurodiversity, age, physical disability, and race and ethnicity. This gave us an inclusivity score for each advert. 
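
Curious how this kind of pairwise evaluation can be structured in principle? Here is a minimal, hypothetical Python sketch. The categories, word lists, and scoring formula below are illustrative placeholders invented for this example; they are not the Develop Diverse platform’s actual lexicon or methodology.

```python
import re

# Illustrative sketch only: tiny placeholder word lists per bias category.
# These are NOT the Develop Diverse platform's lexicon or scoring method.
BIAS_LEXICON = {
    "gender": ["rockstar", "dominant", "competitive", "nurturing"],
    "age": ["digital native", "recent graduate", "young and energetic"],
    "neurodiversity": ["fast-paced", "thrives under pressure", "multitasker"],
}

def bias_score(ad_text: str) -> float:
    """Flagged terms per 100 words; higher means more potentially biased wording."""
    text = ad_text.lower()
    word_count = len(re.findall(r"[a-z']+", text))
    hits = sum(text.count(term) for terms in BIAS_LEXICON.values() for term in terms)
    return 100 * hits / max(word_count, 1)

def compare_pair(human_ad: str, chatgpt_ad: str) -> None:
    """Print a human-written advert's score next to its ChatGPT-generated counterpart."""
    print(f"human ad score:   {bias_score(human_ad):.1f}")
    print(f"ChatGPT ad score: {bias_score(chatgpt_ad):.1f}")

if __name__ == "__main__":
    compare_pair(
        "We are looking for a collaborative nurse to join our supportive care team.",
        "We need a dominant, competitive rockstar nurse who thrives under pressure.",
    )
```

The real analysis was, of course, run with the Develop Diverse platform’s research-backed lexicon across all of the categories above, rather than toy lists like these.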

Our analysis revealed that job adverts written by ChatGPT are on average 40% more biased than those written by humans. 

“When we approached this research, our key question was: Can you tell the difference between job adverts written by ChatGPT and humans from an inclusion perspective?” Jihane Kamil, our Head of Data Science, explains. 

“We randomly selected job adverts written by humans for different types of roles, some of which are stereotypically associated with specific societal groups, such as software engineers and nurses. Our one-to-one analysis compared with job adverts generated by AI showed that ChatGPT was far more likely to choose biased words and expressions for the same role than a human would. In short, ChatGPT reinforces existing biases and stereotypes.”

Women face 40% more bias from a ChatGPT-written job ad compared to one written by a human

Looking deeper at our results, we found some very familiar patterns emerging when it comes to gender. 

When we analyse language in the Develop Diverse platform, we scan for gender bias by looking for agentic language (also known as masculine-coded language), and communal language (also known as feminine-coded language). 
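
To make that concrete, here is a small, hypothetical sketch of how agentic versus communal wording can be flagged with simple word-stem lists. The stems below are illustrative examples of commonly cited masculine- and feminine-coded terms, not the lists the Develop Diverse platform actually uses.

```python
import re

# Illustrative stems only, not the platform's real lexicon.
AGENTIC_STEMS = ("compet", "domin", "lead", "ambiti", "assert", "decisive")   # masculine-coded
COMMUNAL_STEMS = ("support", "collaborat", "nurtur", "understand", "commit")  # feminine-coded

def gender_coding(ad_text: str) -> str:
    """Classify an advert as masculine-coded, feminine-coded, or neutral by stem counts."""
    words = re.findall(r"[a-z']+", ad_text.lower())
    agentic = sum(w.startswith(AGENTIC_STEMS) for w in words)
    communal = sum(w.startswith(COMMUNAL_STEMS) for w in words)
    if agentic > communal:
        return f"masculine-coded ({agentic} agentic vs {communal} communal terms)"
    if communal > agentic:
        return f"feminine-coded ({communal} communal vs {agentic} agentic terms)"
    return "neutral"

print(gender_coding("A decisive, ambitious leader who dominates the market."))
print(gender_coding("A supportive, collaborative colleague committed to understanding users."))
```

Real psycho-linguistic analysis goes far beyond stem counting, but even this toy example shows how word choice alone can tilt an advert towards one coding or the other.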

After assessing the language used in ChatGPT-generated job adverts for gender bias, we found that they were 9% more likely to show bias against males than a human-written job ad. But ChatGPT was 41% more likely to write an ad that was biased against females than a human. 

Research consistently shows that using agentic language in job adverts sustains gender inequality, and in some cases, job ads with communal language had lower salaries. But it’s not just about the gender binary — higher levels of agentic language have a negative impact on everyone, including cisgender men. 

Our findings show that AI-generated job ads are far more likely to discourage people from all identity groups from applying to a role, even the dominant majority. 

“It’s a simple fact that today’s world is still better designed for men,” Jihane explains. “Society discriminates less against males than females and non-binary people — and the language we speak as a society is accordingly more targeted towards men than other gender groups. ChatGPT uses data from the internet framed by this societal lens, so it naturally perpetuates existing societal biases. The data we put in to train ChatGPT, which is more fitting to men, dictates the results.”

ChatGPT-generated job adverts are least inclusive towards neurodivergent people

When we moved away from the gender binary, we found that ChatGPT’s linguistic choices also excluded multiple other identity groups and dimensions of diversity, with neurodivergent people hit worst of all. This is a significant barrier to creating a neuroinclusive workplace. 

Job adverts generated by ChatGPT were 42% less inclusive towards people from disadvantaged ethnic backgrounds, and 41% less inclusive towards neurodivergent candidates, which includes those with ADHD, OCD, dyslexia, or on the autistic spectrum. 

These results track with our findings in our Recruiting for Belonging white paper, where we identified that language used in human-written job adverts across the UK and the Nordic countries was most likely to exclude neurodivergent people, especially at higher levels of seniority. 

Research shows that a biased job advert not only influences who applies to the role, but the resulting selection process too: 

  • A 2022 study found that candidates with dyslexia found pre-employment tasks non-inclusive and difficult to adapt to their needs, leading to a perception that they would not be evaluated fairly. 
Similarly, we found that ChatGPT-generated job ads were 41% more biased against people with physical disabilities than ads written by humans. 

“When we broke down our analysis into different diversity groups, we found a consistent pattern: ChatGPT performs worse on disadvantaged groups,” Jihane explains. “So if you belong to a marginalised group, then the language created by ChatGPT will be a lot more biased against you than the language a human would choose. 

“Unfortunately, neurodivergent people and people with physical disabilities are among the diversity groups that are least well researched from a psycho-linguistics perspective,” she adds. “This is why we consistently see lower levels of inclusion for these groups in job adverts.”

A research-based approach to eliminating recruitment bias

Biased data going in will always lead to biased results coming out. And as busy talent teams increasingly look to AI to help scale their recruitment efforts, Jihane cautions against using AI in any decision-making capacity. 

“Since the introduction of AI, we’ve been able to automate processes that are time-consuming and often complicated for humans,” she explains. “In a recruiting context for example, we’re increasingly seeing the emergence of tech that helps organisations review interview transcripts, or better their talent processes by automating interview scheduling. 

“In this context, AI does the job well. It doesn’t need a human, because it treats that information as data and identifies patterns; it processes data to inform decisions without making critical assumptions that could discriminate against certain individuals. But where we need to be careful is when using AI for any kind of decision-making process. You can’t reduce a complex human problem like who to hire to maths and numbers; that doesn’t make sense. 

“AI is a tool that you have to train with the right data and expertise. You have to question every outcome it gives you. You have to use it to guide and assist you, to give you the information — but ultimately, it’s the humans that play the ultimate role in using that information to make a decision.” 

Long story short, blindly trusting AI to generate job ads is not best practice for writing inclusive job ads.

At Develop Diverse, we use data from AI as a starting point. Our team of experts in linguistics, DEIB, sociology, socio-politics and socio-psychology spend months, not minutes, analysing and researching AI-generated data to build a huge dataset of biased words. 

This means that every word we highlight in our platform as being biased or discriminatory is based on science, not biased data, so our customers, including Maersk Tankers, Dyson, and Danske Bank, can confidently leverage our platform to eliminate bias long-term. Find out more about how we can help you eliminate bias in your recruitment efforts by booking a demo.
