Who hasn’t heard the latest AI news about Amazon’s recruitment algorithm, scrapped after sexist biases slipped under the radar? I’m sure you have also read plenty of the opinions floating around in response, ranging from claims that the tech only needs a bit of re-calibrating to dismissals of the technology as having no potential at all, and everything in between. It has certainly pulled the topic of AI and bias into the limelight. So what can be done about bias in AI?
Wait a second, where does this bias even come from?
Aren’t machines supposed to be neutral by virtue of not being human?
The problem is actually quite simple. AI doesn’t just spring into existence thinking on its own. Machine learning is a fitting term for a reason. Just like a child requires material to learn from, so does a machine. AI therefore inherently and automatically reflects the implicit and unconscious bias present in society, including the way we use language. Of course it does – the machine dutifully learned everything from us, from trends in what our society produces. It’s not the first time bias has seeped into a technological advancement without the people driving that innovation noticing at all, much like what happened with cameras and colour optimisation.
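The point that a model simply echoes its training material can be made concrete with a toy sketch. The corpus, the pronoun/occupation pairing, and the association function below are all illustrative assumptions, not real data or a real recruitment model; the idea is only to show that the skew in the output is learned from the input, not programmed in.

```python
from collections import Counter

# Toy "training corpus" with a deliberate occupational gender skew
# (an illustrative assumption, not real data).
corpus = [
    "he is an engineer", "he is an engineer", "he is an engineer",
    "she is a nurse", "she is a nurse", "she is a nurse",
    "she is an engineer",
    "he is a nurse",
]

# "Learn" simple co-occurrence counts between pronouns and occupations.
counts = Counter()
for sentence in corpus:
    words = sentence.split()
    counts[(words[0], words[-1])] += 1  # (pronoun, occupation)

def association(pronoun, job):
    """Share of a job's mentions that co-occur with a given pronoun."""
    total = counts[("he", job)] + counts[("she", job)]
    return counts[(pronoun, job)] / total

# The model reproduces the skew of its training data:
print(association("he", "engineer"))  # 0.75
print(association("she", "nurse"))    # 0.75
```

Nothing in the code mentions gender roles, yet the learned associations are skewed, because the corpus was. Scale the same mechanism up to millions of real documents and you get the kind of bias Amazon’s system absorbed.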
Can we benefit from the presence of bias to fight bias?
Bias and stereotypes are present, whether we like it or not. (And I hope we don’t like it.) But does this simple problem have a simple solution? In a way – though probably not the way you think. People talk about de-biasing algorithms, but how are we supposed to do that when we ourselves are not always aware of the implicit bias lurking in our machine teaching material? The answer is not to focus on the highly problematic task of de-biasing algorithms, but instead to work with the bias to predict human responses.
To sum it all up, awareness is the most potent tool at our disposal when addressing stereotypes and bias in society!
What does this have to do with Develop Diverse?
Adopting this approach enables us to represent how people think about certain concepts and whether or not there’s a bias connected to them. In short, it helps us predict human associations. We’ve been using this knowledge to validate and expand our psycholinguistic framework, according to which we classify the different sorts of bias present in our everyday language. This helps us make sure that we deliver the most accurate information on the bias present in a given text.
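To give a rough sense of what classifying biased language might look like in practice, here is a minimal sketch. The mini-lexicon and its labels are hypothetical stand-ins invented for illustration; a real psycholinguistic framework such as Develop Diverse’s is far more extensive and nuanced than a word list.

```python
import re

# Hypothetical mini-lexicon of stereotype-coded words (illustration only).
CODED_WORDS = {
    "dominant": "masculine-coded",
    "competitive": "masculine-coded",
    "nurturing": "feminine-coded",
    "supportive": "feminine-coded",
}

def flag_coded_language(text):
    """Return (word, label) pairs for coded words found in the text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [(t, CODED_WORDS[t]) for t in tokens if t in CODED_WORDS]

ad = "We need a dominant, competitive self-starter."
print(flag_coded_language(ad))
# [('dominant', 'masculine-coded'), ('competitive', 'masculine-coded')]
```

Even this crude version shows the principle: rather than trying to scrub bias out of a model, we can use what we know about human associations to flag language likely to trigger them.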
In light of all this, we would like to invite you to try our tool for free for 14 days. Let’s keep working together to make communication more inclusive and welcoming to everyone from all walks of life!