Pooja Vijaykumar

AI, an MCP? Tackling the Machine-Learning Chauvinist Pig

In 2018, news that Amazon's AI recruiting tool was sexist was received with gasps and hands over mouths, but there was one demographic that only sighed in disappointment. It is no surprise that in the male-dominated IT industry, women have had to work twice as hard to make their voices heard and their efforts visible. But having the same old narrative perpetuated yet again, and by a machine no less, was not a reassuring sign. According to Reuters, the tool had trained itself to prefer male candidates. It penalized CVs for words like "woman", bringing down the score if the term appeared anywhere in the CV. Essentially, you were being penalized for being a woman. Granted, the tool was trained on ten years of applicants' data, most of which came from men. Ultimately, it was scrapped.


But that was in 2018! Everyone is so much more aware now, about the world and about technology, but are they aware of what data they're feeding their algorithms? That's the point. Data: the fuel from which algorithms develop their behavior and predictive capabilities.


Misogyny is present everywhere. It is not confined to the hiring process; we see it all over social media, and woven subtly through countless articles and blog posts. As people who work in tech, we need to prioritize creating safe spaces, both online and in the industry.


Let's take another example of inherent misogyny establishing itself in the tech space. The Lensa AI art app became widely popular on social media. The app creates portraits based on image prompts. Everyone was on board - a new, modern renaissance with technology as the ultimate medium of art. Soon enough, users began to notice a difference in the portraits when women used their selfies or provided prompts that were mostly women-related. The AI-rendered photos of women were sexualized to an alarming degree.


It is ironic to observe such inherent misogyny in AI systems, considering that the very first computer algorithm was written, in the 1800s, by a woman: Ada Lovelace, an English mathematician and daughter of Lord Byron, who wrote an algorithm to compute Bernoulli numbers that was specifically intended to be executed by a machine.


How do you de-bias an algorithm? How do we sit an algorithm down and say “BAD”?


You start with the people who created them. We are in an era of cognitive services built on AI and advanced deep neural networks. It's high time people stopped blaming the algorithm when, clearly, they are the ones who supply the fodder. We have the technologies and the architecture to actually sit down, observe the data, and watch what the machine does with it. We have the tools to detect instances of unfairness and gender-based inequality. The pointed finger needs to turn around any time now.
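To make that concrete, here is a minimal sketch of one such check, a demographic-parity gap: the difference in selection rates between groups. The column names and the tiny dataset are hypothetical, and a real audit would use richer metrics and far more data.

```python
import pandas as pd

# Hypothetical screening results: one row per applicant,
# with the model's hire/reject decision and the applicant's gender.
results = pd.DataFrame({
    "gender":   ["woman", "man", "woman", "man", "man", "woman", "man", "woman"],
    "selected": [0,       1,     0,       1,     1,     1,       0,     0],
})

# Selection rate per group: P(selected = 1 | gender)
rates = results.groupby("gender")["selected"].mean()

# Demographic-parity gap: difference between the highest and lowest rate.
# A gap near 0 is what we'd hope for; a large gap flags the model for review.
gap = rates.max() - rates.min()

print(rates)
print(f"Demographic parity gap: {gap:.2f}")
```

The same idea scales up in libraries built for fairness auditing, but even a check this small turns "the data might be biased" into a number that someone has to explain.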


But with the amount of data we deal with, is it humanly possible to scour through it all error-free? Yes, we could build algorithms that automate these tasks for us, but then again, an estimated 2.5 quintillion bytes of data are generated every single day. Now how's that for a Herculean task!?


So, if it is difficult to tackle the problem at the data source/generation level, where is the next best place to set our sights on?


Natural Language Processing techniques have played an important role in tackling misogyny in text. NLP-based deep learning models have been used, both in research and in deployed solutions, to extract and classify misogynistic sentiment in text documents.


Take, for example, a paper detailing the use of multiple state-of-the-art NLP classifiers to distinguish various forms of both misogyny and sarcasm in Arabic tweets (link). Muaad et al. detail a comprehensive AI-based study to automatically detect misogyny and sarcasm in Arabic texts using binary and multiclass scenarios. The binary classes were "misogyny" and "non-misogyny", while the multiclass labels offered a more granular view of misogynistic behaviors such as stereotyping, dominance, sexual harassment, discrediting, and threats of violence. Here's another paper, by Abburi et al. (link), that details the use of a semi-supervised neural network to tackle fine-grained multi-class sexism.
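As a rough illustration of the multiclass setup (this is not the pipeline from either paper, just a baseline sketch with made-up example tweets and labels), a TF-IDF plus logistic-regression classifier already captures the shape of the task:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples; real datasets contain thousands of
# annotated tweets per category.
texts = [
    "women belong in the kitchen, not the office",
    "she only got the job because she's pretty",
    "if she keeps talking like that she'll regret it",
    "congrats to the whole team on the release",
]
labels = ["stereotyping", "discrediting", "threat_of_violence", "non_misogyny"]

# Word n-gram TF-IDF features feeding a multiclass classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

print(model.predict(["she should smile more and argue less"]))
```

The papers above go much further, with language-specific preprocessing and deep models, but the framing is the same: text in, one of several fine-grained misogyny labels out.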


The point of the above examples is to show how we can leverage advanced technology to work in our favor. Even the vilest of beasts can be tamed if you learn its kinks and patterns. Gender bias is pervasive, and it has long-term effects on women, both psychological and economic. Opportunities go to waste and talent goes into hiding.


Okay, so what's the first step? What do professionals, analysts, engineers, and developers need to keep in mind? We need to think very carefully about how to build gender-smart AI systems that advance fairness and equity while preventing any form of bias from being embedded into the systems and then scaled by them. For starters, have more women on the team. Certain patterns in the data and in the machine's behavior may seem normal yet end up being decisive for its unfairness, and there is a high chance that women researchers would pick these out in an instant.


What do we do about the data? The data we work with is a snapshot of the real world; it reflects every prejudice and stereotype that people face in every aspect of life. Highly skewed data is not uncommon, but we can still decide what to collect. We have a say in what needs to be represented and in what is plainly regressive representation that needs to be shelved. We need to include data that voices the experiences of groups beyond the dominant gender.
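One small, concrete habit is to audit representation before anything is trained, as in the sketch below; the column names and numbers are invented purely for illustration.

```python
import pandas as pd

# Hypothetical applicant dataset loaded for training.
df = pd.DataFrame({
    "gender": ["man"] * 70 + ["woman"] * 25 + ["non-binary"] * 5,
    "hired":  [1] * 40 + [0] * 30 + [1] * 5 + [0] * 20 + [0] * 5,
})

# How is each group represented in the data?
print(df["gender"].value_counts(normalize=True))

# How do the historical outcomes break down per group?
print(pd.crosstab(df["gender"], df["hired"], normalize="index"))
```

If one group makes up five percent of the rows and never gets a positive outcome, that is a decision point about what data to collect next, not something to shrug at.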


Another thing that can be a major nudge in the right direction is to incorporate gender expertise when working on data and AI algorithms. Literacy and knowledge about gender inequality can go a long way, for anyone from managers to developers.


Furthermore, now that a binary representation of gender is becoming obsolete, it is critical to tackle the gender-conforming, or rather stereotype-perpetuating, nature of AI systems. Ignoring the implications can have long-term consequences for the well-being of women and non-binary folks.


Technology is revolution, and how can you have a revolution when a big part of the community is silenced?


