Software and technology companies use artificial intelligence (AI) to make processes and applications easier for everyone involved. You can unlock your phone just by looking into the camera. You can call out to a device for the weather report. A navigation app can reroute you when traffic gets too heavy in a certain area. You can resolve customer service issues without waiting on hold. In the best examples, AI cuts down employees' workloads and improves the user experience by limiting inefficient human interactions. But there is a flip side to reducing the human element: unintentional bias.
All AI is built from data, whether loaded into it up front or gathered by it over time. Where, when and how that data is digested is written into the machine learning algorithms by computer programmers. They take ideas for programs, processes and apps and figure out what information is needed to bring those ideas to fruition. So to limit human-to-human interactions on the back end, programmers have to be heavily involved on the front end, and that is where unconscious bias comes in.
In everything we do, how we frame things is limited by our own knowledge and background. That is not to say no effort is made to include other perspectives, but even that depends on how widely the net is cast in gathering those additional viewpoints.
For example, facial recognition continues to grow in popularity. As mentioned earlier, it is used to unlock phones, but it is also being used to grant entry into secure locations and to identify people, sometimes uncannily, on social media. While those uses may raise privacy concerns, it is how law enforcement is using the technology that is prompting the most serious questions.
Facial recognition can yield incredible results when, say, tracking someone's movements through security camera footage. Those results can also be incredibly flawed. The problem is that the programs performing facial recognition are limited by the data they are fed. One study looked at a dataset supplied by a leading company in the field to organizations building facial recognition software. The analysis found that in that widely distributed dataset, only 40% of the images were of women and only 1% were of people over the age of 60. The dataset overrepresented men ages 18 to 40 and underrepresented people with darker skin.
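To make that finding concrete, the sketch below shows the kind of audit such an analysis implies: counting how a dataset's images break down by labeled demographic attributes. It is a minimal illustration, not the study's actual method; the field names and sample records are hypothetical.

```python
# Minimal sketch of a demographic audit for a face dataset.
# Assumes each image record carries metadata labels; the field names
# ("gender", "age_band", "skin_tone") and the sample data are hypothetical.
from collections import Counter

def audit_dataset(records):
    """Print each demographic group's share of the dataset."""
    total = len(records)
    for field in ("gender", "age_band", "skin_tone"):
        counts = Counter(r.get(field, "unknown") for r in records)
        print(f"\n{field}:")
        for group, n in counts.most_common():
            print(f"  {group:<12} {n:>6}  ({n / total:.1%})")

# Toy metadata records standing in for a real dataset's labels.
sample = [
    {"gender": "male", "age_band": "18-40", "skin_tone": "lighter"},
    {"gender": "male", "age_band": "18-40", "skin_tone": "lighter"},
    {"gender": "female", "age_band": "41-60", "skin_tone": "darker"},
    {"gender": "male", "age_band": "over_60", "skin_tone": "lighter"},
]
audit_dataset(sample)
```

An audit like this only surfaces the imbalance; fixing it means collecting or weighting data so the training set reflects the people the system will actually be used on.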
A 2019 federal study by the National Institute of Standards and Technology (NIST) found similar issues across the majority of face recognition algorithms. The study examined how 189 software algorithms from 99 developers performed the two most common facial recognition tasks, one-to-one verification and one-to-many identification, on more than 18 million images of about 8.5 million people. Overwhelmingly, U.S.-developed algorithms produced higher false positive rates for women than for men; for Asian, African American and Indigenous faces, with respect to race; and for the elderly and children, with respect to age.
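The disparity NIST measured is, at its core, a difference in false positive rates across groups: how often an algorithm declares two different people to be the same person, broken down by demographic. The sketch below shows one simple way to compute that from labeled match results. It is a rough illustration in the spirit of that comparison, not NIST's methodology; the group labels and toy data are hypothetical.

```python
# Sketch: per-group false positive rate for a face matcher.
# Each result records the demographic group, the ground truth
# ("same_person") and the algorithm's decision ("matched").
from collections import defaultdict

def false_positive_rate_by_group(results):
    """Return {group: false positive rate among non-matching pairs}."""
    stats = defaultdict(lambda: {"fp": 0, "negatives": 0})
    for r in results:
        if not r["same_person"]:           # a true non-match ("impostor") pair
            stats[r["group"]]["negatives"] += 1
            if r["matched"]:               # algorithm wrongly said "same person"
                stats[r["group"]]["fp"] += 1
    return {g: s["fp"] / s["negatives"] for g, s in stats.items() if s["negatives"]}

# Toy impostor-pair results grouped by a demographic label.
toy = [
    {"group": "group_a", "same_person": False, "matched": True},
    {"group": "group_a", "same_person": False, "matched": False},
    {"group": "group_b", "same_person": False, "matched": False},
    {"group": "group_b", "same_person": False, "matched": False},
]
print(false_positive_rate_by_group(toy))   # {'group_a': 0.5, 'group_b': 0.0}
```

If the rates differ sharply between groups, as NIST found for many U.S.-developed algorithms, the system is more likely to misidentify members of some groups than others, which matters enormously when the downstream use is, say, a police lineup from camera footage.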

