
When There Is Bias In AI


Artificial intelligence increasingly underlies the systems and processes we interact with every day. This has benefits, such as improved efficiency, increased capacity, and the ability to deploy more sophisticated applications, but it can also be a double-edged sword. Code itself is unbiased, but the people who write it are not, and their unconscious biases can shape that code and everything it touches.

This is true of every product. We’ve seen clear and catastrophic examples with things like airbags, which were designed by mostly male teams, resulting in a product that leaves women 17% more likely to be killed and 73% more likely to be injured in an auto collision. In the tech industry, we’ve seen this in facial recognition software, which is more accurate at identifying male faces and lighter skin.

With AI, this is even more dangerous, because it is not just a single biased algorithm being deployed. It is a system that iteratively builds itself, evolving based on the data fed into it. If there is unconscious prejudice in the initial data, or in the way the system is designed, that bias can propagate through systems indefinitely down the line.
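To make that feedback loop concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the toy loan data, the trivial train function, and the group labels are invented for illustration. It only shows how a model that retrains on its own past decisions can carry a historical disparity forward indefinitely, even when the two groups of applicants are identical.

```python
import random

random.seed(0)

# Hypothetical historical loan decisions: groups A and B are equally
# creditworthy, but the past data approved A far more often than B.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 40 + [("B", False)] * 60

def train(records):
    """'Train' a trivial model: approve each group at its historical rate."""
    return {
        group: sum(ok for g, ok in records if g == group)
               / sum(1 for g, _ in records if g == group)
        for group in ("A", "B")
    }

model = train(history)
for round_num in range(1, 6):
    # New decisions are sampled from the current model, then appended to
    # the training data for the next round -- the feedback loop.
    applicants = random.choices(["A", "B"], k=200)
    history += [(g, random.random() < model[g]) for g in applicants]
    model = train(history)
    print(f"round {round_num}: approval A={model['A']:.2f} B={model['B']:.2f}")
```

Run it and the gap between the two groups never closes on its own, because each round’s training data is just the previous round’s output.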

As these technologies continue to emerge and become more powerful and influential in our daily lives, there is also an opportunity: to build better, less biased products and algorithms that can help erode the cultural and systemic bias already present in our society.

When systems operate on biased AI, people can get hurt, excluded, or passed over for opportunities. Algorithms determine your credit score and whether you can get a loan. They are used to predict a released prisoner’s likelihood of recidivism, affecting parole decisions, and to flag which neighborhoods are most likely to be high-crime areas, causing certain people to be unjustly targeted by police.

There are also subtler but far-reaching consequences of biased AI. The information served to us, along with advertisements, offers, and opportunities, is shaped by systems that may have been built on biased data or processes. This can have a profound effect on the social psyche of vast groups of people.

Candice Morgan, Equity, Diversity, and Inclusion Partner at GV, spoke about an experience she had while working at a previous company. “We would get notes from users, and there was a very poignant note we got from one user who wrote about how all of the default images she was seeing when she did a search didn’t look like her. That was when I became very aware of the power of the algorithms and how exclusionary they could be.” Recognizing the bias empowered the team to change the inputs that were producing the biased results.

Unfortunately, this is a one-way system. Developers determine an AI’s parameters and program in what they think is important. When decisions are made on that basis, those affected have no method of appeal. The public has no idea why a decision about their credit or loan status was made the way it was, and no way to respond to it. You are simply supposed to accept that the computer is right.

We’ve already seen that the computer isn’t always right. In a University of Washington study, Google image searches for the term “CEO” returned pictures of women only 11% of the time, even though women represent 27% of CEOs in the United States. Amazon’s hiring algorithm has been shown to be biased against women. Problems with predictive software have negatively affected the health care of millions of Black people. Chatbots set up to learn from the communities they interact with have routinely been shown to post inappropriate comments. The list goes on.

Solving this issue will take conscientious effort. It’s important to have a framework in place informed not just by data but by user feedback, along with a team whose diverse backgrounds better reflect the entire user base. Trying to understand the journeys of the different people who will interact with these systems and technologies is a solid foundation for ensuring that a company is not just producing a product; it is producing a product for everyone.

Twilio Growth Solutions Engineering Manager Richard Bakare also advises organizations to be critical and disruptive of themselves. “Embrace the challenger mindset. It’s hard, especially if you’re new, but you’ve got to take on that mindset. That doesn’t mean playing devil’s advocate or being combative, but asking questions. What are our outcomes here, why are we doing this, what is the transparency in the model, what are the feedback mechanisms? And if you can, break it, and show examples.”
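One concrete way to act on Bakare’s advice, to ask “what are the feedback mechanisms” and to “break it, and show examples,” is to audit a model’s decisions directly. The sketch below is a hypothetical Python example (the group labels, the sample decisions, and the helper names are all invented for illustration) that compares per-group selection rates and flags gaps using the common four-fifths rule of thumb.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> rate per group."""
    totals, chosen = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        chosen[group] += selected
    return {g: chosen[g] / totals[g] for g in totals}

def impact_ratio(decisions):
    """Lowest group selection rate divided by the highest.
    Values below 0.8 (the 'four-fifths rule') are a common red flag."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates

# Invented sample output from a hiring model: group A is selected at
# 0.75, group B at 0.25, for an impact ratio of 0.33.
ratio, rates = impact_ratio([
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
])
print(rates)
print(f"impact ratio: {ratio:.2f}")
```

An audit like this doesn’t explain why a model is skewed, but it produces the kind of concrete, shareable example Bakare describes, which is often what prompts a team to investigate.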

Bias in AI is a problem that will only compound over time. As programs learn more, the prejudices of their creators will continue to inform them and the future systems built on them. That’s why we have to act now. The teams and leadership working on AI need to be more diverse, better standards need to be implemented, and the people at the top of tech companies need to make more intentional decisions to ensure a future that is inclusive and fair for everyone.
