A familiar voice booms in the background of a commercial extolling the virtues of artificial intelligence (A.I.) technology. The voice’s slight accent and cadence suggest, without even looking up, that it belongs to a Black man. “It’s possibility. It’s adaptability. It’s capability.” The voice spitting spoken word on the value of technology and Microsoft A.I. belongs to Common.
“It’s not about what technology can do, it’s about what you can do with it… So, here’s the question. What will you do with it?” is the call to action Common leaves us with. But the question isn’t what “we” will do with it; it’s what “they” will do with it.
The use of Black cool to sell a product is nothing new. Nielsen has validated the reach of Black digital footprints and influence, from content creation to popularizing music and fashion trends, and pegged Black spending power at $1.3 trillion: Black culture sells. Retail outlets have used Black celebrities and Black pop culture to make products palatable to crossover audiences for decades. Yet using Black bona fides to sell us on technology is different, because the stakes have never been higher. This is especially true of A.I.
Artificial intelligence is basically opinions baked into code. It has evolved into its current form as software that imitates human intelligence, enabling machines to perform tasks that typically require human experience and intellect: pattern recognition, including visual perception (how do you know what you’re seeing is what you’re seeing?), speech recognition (how do you know what you’re hearing is what you’re hearing?), data recognition (how do you validate the information you receive?), decision-making (everything from creditworthiness to home purchases), and translation between languages (does this phrase mean what we think it means?). Now, imagine if this code, not created for you or by you, controlled which jobs you are screened for, which products are advertised to you, how you interact with law enforcement, and even your health care decisions… because it does.
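To make “opinions baked into code” concrete, consider a minimal, purely hypothetical screening rule. Every name, threshold, and ZIP code below is invented for illustration; the point is that each number is a human judgment call, not an objective fact.

```python
# A purely hypothetical loan-screening rule, illustrating how opinions
# become code. Every threshold below is a developer's judgment call,
# not an objective fact.

def loan_decision(credit_score: int, zip_code: str, income: int) -> str:
    # Opinion 1: 650 is a "good enough" credit score.
    if credit_score < 650:
        return "deny"
    # Opinion 2: certain ZIP codes are "high risk" -- a proxy that can
    # encode decades of redlining without ever mentioning race.
    high_risk_zips = {"63106", "60624"}  # invented values, for illustration
    if zip_code in high_risk_zips:
        return "manual review"
    # Opinion 3: $40,000 is the income floor for automatic approval.
    return "approve" if income >= 40_000 else "deny"

print(loan_decision(700, "63106", 55_000))  # -> manual review
```

An applicant with a solid score and income is still flagged purely because of where they live, and nothing in the code ever says why.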
The percentage of Black workers in tech, across all tech companies, is just under 3%. So while you have celebrity investors like Snoop Dogg, Nas, Jay-Z, and Kevin Durant, and tech campaigns featuring Kerry Washington, Mary J. Blige, and Taraji P. Henson enjoying Apple Music products, and Common for Microsoft A.I., how many Black people are creating and testing these products before release? The problem is that most developers fit a certain demographic, and so do the data sets they use to determine everything from your credit score to which ads you see while you scroll social media. Developers creating and testing their products usually rely on data sets that reflect them and their lived experience, a workforce that research shows skews more than 74% male and 83% white. So when developers test algorithms on databases dominated by people like themselves, the algorithms appear to work well.
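A simple arithmetic sketch shows why. The figures below are invented for illustration, but the mechanism is general: when the test set over-represents one group, overall accuracy can look strong while accuracy for an underrepresented group quietly fails.

```python
# Invented numbers, for illustration: a model that is 95% accurate on
# Group A but only 60% accurate on Group B still posts a high overall
# score when the test set is dominated by Group A.

def overall_accuracy(n_a, acc_a, n_b, acc_b):
    return (n_a * acc_a + n_b * acc_b) / (n_a + n_b)

# A test set that mirrors the developers: 90% Group A, 10% Group B.
print(overall_accuracy(900, 0.95, 100, 0.60))  # 0.915 -- looks "good"

# The same model evaluated on a balanced test set tells another story.
print(overall_accuracy(500, 0.95, 500, 0.60))  # 0.775
```

The model never changed; only the mirror it was held up to did.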
Meanwhile, the A.I. utilized by the government, but often made in the private sector, suffers from the same problem, yet is literally designed to create a deadlier military and more efficient, weaponized law enforcement departments: ones that find you with facial recognition, arrest you using predictive policing, sentence you via predictive sentencing technology, and possibly keep you in jail through the First Step Act’s implementation of A.I.
The irony of using Common to sell A.I. is the question it raises: how many Black employees, and Black women in particular, are helping to build this technology? That question is especially relevant when you consider how these private tech companies are complicit in building military-grade technology that is weaponized against people of color domestically and abroad. While Common praises Microsoft A.I. technology and its many uses, we can’t forget that this same company’s employees signed a letter demanding it cancel its contracts with Immigration and Customs Enforcement, better known as ICE, the agency that not only threatened to deport 21 Savage but has separated countless mothers and children at border detention facilities under this administration. Those employees did not want to be complicit in creating facial recognition technology that can be used for nefarious purposes.
But Microsoft isn’t alone. Google and Amazon employees petitioned their companies to abandon bids for a $10B Department of Defense contract to deliver classified data via cloud computing, giving soldiers in the field real-time information to assist in battle. “This program is truly about increasing the lethality of our department and providing the best resources to our men and women in uniform,” John Gibson, chief management officer at the Defense Department, said of the program’s goals. Considering that about a third of active-duty soldiers are minorities, with 17% identifying as Black (a figure that does not account for Afro-Latinos), it is understandable that some in tech do not want to be complicit in building tools optimized for marginalized groups to kill and hurt other marginalized people.
Domestically, predictive policing software like HunchLab uses historical crime data, moon phases, location, census data, and even professional sports team calendars to predict where crime will occur and how many officers should police an area. However, the basis for the results is historical crime data that disproportionately targets Black and brown people, primarily in low-income areas; the Justice Department’s findings in Ferguson, Missouri, demonstrate this. If historical data shows most crime and arrests occur in poor minority areas, predictive technology will reinforce that notion and exacerbate a cycle of over-policing in those same areas.
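That feedback loop is easy to simulate. In the hypothetical sketch below, two neighborhoods have identical underlying crime, but one starts with a larger share of recorded arrests, and officers are allocated where past arrests were recorded. The superlinear detection exponent is an assumption made purely for illustration, standing in for the extra stops and contacts that concentrated patrols generate.

```python
# Hypothetical simulation of a predictive-policing feedback loop.
# Neighborhoods A and B have IDENTICAL true crime; A simply starts with
# a larger share of recorded arrests (60% vs. 40%).
# Assumption (for illustration only): recorded arrests grow slightly
# faster than linearly with patrol presence.

EXPONENT = 1.5   # >1 models superlinear detection; an assumption, not data
share_a = 0.60   # A's initial share of recorded arrests

for year in range(1, 6):
    # Officers are allocated in proportion to last year's arrest shares,
    # and next year's recorded arrests follow patrol presence.
    weight_a = share_a ** EXPONENT
    weight_b = (1 - share_a) ** EXPONENT
    share_a = weight_a / (weight_a + weight_b)
    print(f"Year {year}: Neighborhood A's share of arrests = {share_a:.0%}")

# Output climbs from 65% toward roughly 96%: the initial disparity
# compounds even though underlying crime never differed.
```

Nothing in the loop knows anything about the neighborhoods except the arrest record it keeps generating for itself.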
That’s not all. Racial bias permeates the sentencing process, as courts around the country implement risk-assessment A.I. code to sentence criminals.
Perhaps it is time for the tech industry to adopt, in part, the Hippocratic oath: “First, do no harm.” Until there is demonstrable evidence that these products are inclusive and cannot be weaponized for immoral purposes, used to criminalize the innocent, or deployed to subvert democracy, it is hard to advise marginalized communities to view A.I. innovation with anything more than cautious skepticism.