A few days ago, while waiting in the checkout line, my eye fell on the magazine stand. Between cooking magazines and tabloids, a cover caught my attention, titled something like “Artificial Intelligence: what it is and how it will change the world”. It made me smile to think how, in just a few years, a topic once relegated to Black Mirror episodes has arrived on supermarket shelves.

It’s undeniable that AI was already an integral part of our everyday life even then. But thanks to tools like ChatGPT, Replika, and DALL-E, its role has recently become less and less subtle, and the interest and concern of the general public have finally risen. The release of these technologies for collective use should go hand in hand with the development of a critical mindset about the ethics of artificial intelligence.

At SPEAK, we have decided to celebrate this year’s International Women’s Day by investigating the impact of AI on gender equality. We will explore both the flaws that AI has demonstrated in recent years and the future perspectives for improvement. We must ensure that AI systems, which will become an essential part of our daily lives, benefit everyone equally.

Okay, But… What Is AI?

Photo by Alex Knight on Unsplash

If your mind immediately goes to Blade Runner when you hear the words Artificial Intelligence, here’s a brief review of what it actually is. There are many definitions of AI, but according to ScienceDirect, it is the general ability “of computer systems to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages”.

The term machine learning, which you might come across while reading news on AI, refers to the technologies and algorithms that enable a computer to imitate intelligent human behavior. Machine learning allows an AI model to identify patterns, make decisions, and improve through experience and data.
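To make “improving through experience and data” concrete, here is a minimal, hypothetical sketch in plain Python (no ML libraries): a tiny model that gradually learns the pattern y = 2x + 1 from examples. Everything in it is a toy for illustration, not how production systems are built.

```python
# A minimal illustration of "learning from data": the model starts
# with a random-quality guess (w = 0, b = 0) and improves it every
# time it sees an example, by nudging w and b to reduce its error.

def train(examples, lr=0.01, epochs=2000):
    """Fit y = w*x + b by gradient descent on squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in examples:
            error = (w * x + b) - y  # how wrong the current guess is
            w -= lr * error * x      # nudge the slope
            b -= lr * error          # nudge the intercept
    return w, b

# Training data that follows the hidden pattern y = 2x + 1.
examples = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = train(examples)  # w and b end up close to 2 and 1
```

The same principle, scaled up to millions of parameters and examples, is what drives the models discussed in this article, which is also why the quality of those examples matters so much.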

The Potential Threats Of AI To Gender Equality

As AI models with different purposes keep coming out, their limitations are emerging, and they represent potential threats to the rights of women and minorities. But how?

As mentioned before, AI systems are based on data and algorithms. For this reason, they are only as unbiased as the data they are trained on. Even in the data-cleaning process, where professionals correct errors and inconsistencies, many factors can introduce mistakes or bias.

When the data sets used to train AI systems are biased, the systems inherit that bias. As a result, they can perpetuate gender-based discrimination and existing gender stereotypes.
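As an illustration only (the corpus below is made up and deliberately skewed), even a trivially simple “model” picks up a stereotype straight from its training data:

```python
from collections import Counter, defaultdict

# A deliberately skewed, invented training corpus: "doctor" appears
# mostly with "he", "nurse" mostly with "she".
corpus = [
    ("doctor", "he"), ("doctor", "he"), ("doctor", "he"),
    ("doctor", "she"),
    ("nurse", "she"), ("nurse", "she"), ("nurse", "she"),
    ("nurse", "he"),
]

# "Training": count which pronoun co-occurs with each profession.
counts = defaultdict(Counter)
for profession, pronoun in corpus:
    counts[profession][pronoun] += 1

def predict_pronoun(profession):
    """Predict the pronoun most frequently seen with this profession."""
    return counts[profession].most_common(1)[0][0]
```

With this data, `predict_pronoun("doctor")` returns `"he"` and `predict_pronoun("nurse")` returns `"she"`: the model has no opinion of its own, it simply mirrors the imbalance it was trained on. Real systems are vastly more sophisticated, but the underlying mechanism of inherited bias is the same.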

Here are some examples of real-life consequences that biased data can have on women’s lives.

  • A recent investigation by The Guardian journalists Gianluca Mauro and Hilke Schellmann uncovered how biased AI tools from large tech companies, like Google and Microsoft, discriminate against women’s bodies. These tools are meant to help detect and suppress sexually explicit or suggestive content on many platforms, like Instagram. The journalists ran hundreds of pictures of women’s and men’s bodies through these tools. Photos portraying women in situations like practicing yoga, wearing a swimsuit at the beach, or undergoing medical exams were flagged far more often than comparable photos of men. Sharing this kind of content therefore puts women at higher risk of being shadowbanned and losing audience and potential earnings. The probable cause identified is a lack of diversity among the people hired to label the data for the algorithms’ training.
  • Another case that raised activists’ concerns about how AI tools perceive the female image involves portrait apps, applications built to transform a user’s photo into AI-generated portraits. Users soon noticed that these apps tended to sexualize women’s bodies and make them conform to unrealistic standards of beauty. In this case, the bias is probably due to the apps’ algorithms, which may have been trained on data that reflects and reinforces sexist stereotypes.

The Impact Of AI On The Transgender Community

It’s also important to mention the massive impact that AI can have on the transgender community. Considering the previous examples, it’s not hard to imagine how biased algorithms can affect this community. Right now, however, recognition technology seems to pose the greatest risk.

A sub-category of facial recognition called Automatic Gender Recognition (AGR) attempts to infer a person’s gender from their picture. Institutions generally use facial recognition for identity verification. A study by the University of Colorado Boulder demonstrated how inaccurate these models are at identifying trans people’s gender. This happens because the labels AGR systems use are based on a binary understanding of gender.

Right now, the use of these models is escalating among law enforcement and the private sector. Their inaccuracies can easily lead to consequences ranging from public humiliation to violations of civil liberties.

Positive Examples Of AI Initiatives

Photo by Shutterstock

However, highlighting AI’s critical issues does not mean denying the incredible potential that such a powerful tool has in the fight for gender equality. Here are some examples of companies that have already used AI tools to help remove gender-based discrimination.

  • GapJumpers is a platform that uses AI to help eliminate unconscious bias in the hiring process. It offers two services. The first is an analysis of a company’s current hiring process, seeking data-driven answers to questions like: “Is bias the key bottleneck to hiring or promoting more diversity in our organization? If so, at what stage?”. The second is blind hiring, which means evaluating applicants on their skills rather than their personal characteristics.
  • Watson Candidate Assistant is an AI-powered tool by IBM that helps recruiters reduce bias in the hiring process. It interacts with job candidates through chatbots, answers their questions about job vacancies, and guides them through the application process. It can also give candidates feedback on their qualifications and help recruiters identify top candidates. The tool provides recommendations for job descriptions, screening questions, and interview questions to help ensure that the hiring process is fair and unbiased.
  • Regarding health monitoring, Bloomlife designed a wearable device for pregnant women to monitor and track their contractions. The device uses artificial intelligence to analyze the data and provide insights to women and their doctors. It can help women identify false labor and make informed decisions about when to go to the hospital.
  • Another area where the gender gap is particularly impactful is financial literacy. Ellevest is a fintech company that tackles this issue by providing an online investment platform designed by women for women. The platform uses algorithms to create personalized investment portfolios for users based on their financial goals and risk tolerance. Its goal is to help close the gender investing gap. To do this, Ellevest provides educational resources and investing strategies tailored to women’s unique financial situations and needs.

Scarce Data In A Diverse World

A map in the Internet Health Report 2022 shows a distorted version of the world, with each country resized according to how often its data sets are used in AI research.

Credit: Internet Health Report 2022

This means that the data sets most frequently used to train AI algorithms come from a handful of countries. The majority come from 12 institutions and private companies based in the United States, Germany, and Hong Kong. This doesn’t mean that entire regions, such as Africa and Latin America, aren’t storing data or developing AI learning models. It means they have less access to professionals, infrastructure to store data, and computing power.

This phenomenon is called data scarcity, and combined with the narrow geographical origin of the data, it constitutes one of the biggest problems of AI. The consequence is that AI models tend to reproduce sexist and racist behavior, largely because their training data predominantly represents white men.

A possible solution to this issue might be generative AI. Generative AI can create synthetic data: data artificially produced by a machine learning model, such as the models behind ChatGPT. This can address data scarcity because it allows the creation of new data to train AI models when there is not enough real-world data available. For example, if there is a shortage of data on a particular group or demographic, generative AI can create synthetic data that is representative of that group. This synthetic data can then be used to train AI models, overcome data scarcity, and enhance their performance and accuracy.
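As a toy sketch of the idea (not any particular company’s method), the snippet below balances a made-up data set by generating synthetic values for the underrepresented group, interpolating between real samples. This is similar in spirit to the well-known SMOTE technique, here reduced to a few lines of standard-library Python:

```python
import random

random.seed(0)  # make this toy example reproducible

# Made-up data set of (measurement, group) pairs; group "B" is
# underrepresented: 90 samples for "A", only 10 for "B".
data = [(random.gauss(170, 8), "A") for _ in range(90)]
data += [(random.gauss(160, 7), "B") for _ in range(10)]

def synthesize(samples, n_new):
    """Create n_new synthetic values by interpolating between
    randomly chosen pairs of real samples (a SMOTE-like idea)."""
    new = []
    for _ in range(n_new):
        a, b = random.sample(samples, 2)
        new.append(a + random.random() * (b - a))
    return new

# Generate enough synthetic "B" points to match the "A" count.
b_values = [x for x, g in data if g == "B"]
n_missing = sum(1 for _, g in data if g == "A") - len(b_values)
balanced = data + [(x, "B") for x in synthesize(b_values, n_missing)]
```

Real synthetic-data generation uses far richer generative models, and it only helps if the few real samples it starts from are themselves representative; it is a mitigation for data scarcity, not a substitute for collecting diverse data.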

AI: A Potential Ally For Gender Equality

Experts in artificial intelligence ethics have long fought for greater transparency and oversight from tech companies. This call extends beyond diverse data sets to the professionals who develop these technologies. Promoting gender diversity in AI can create more job opportunities for women and help close the gender gap in the tech industry. This, in turn, can lead to better-designed AI systems that consider a wider range of user needs and ethical considerations.

AI can be a powerful ally in the battle for gender equality, but we must address the problems that have already emerged. When using AI tools, we must be aware that, right now, algorithms offer us a very specific view of the world.

Start Your Journey Today!

At SPEAK, we stand for inclusivity and equality on all levels. Would you like to be part of a community and learn a new language? Do you want to be a buddy and share your language with others? Join the journey at SPEAK!

Did you enjoy this article on the impact of AI on gender equality? Then read this one to discover the stories of four great women in science you should know.

Finally, creating awareness and initiating a conversation is the first crucial step in making a positive change. Feel free to express your opinions in the comments section and share this article with someone who might like it.

Author: Sveva Buttazzoni

Sveva Buttazzoni is a digital marketing specialist who has worked in both the cultural and nonprofit sectors. As an aspiring journalist, Sveva is particularly interested in exploring the intersections of media, technology, and civil rights.
