OAKLAND, Calif. — Last month I spoke at a gathering of African-American technology professionals. I’m a transactional lawyer at a tech company and my husband is an engineer, so the industry is at the center of our lives. We have careers that allow us to help create products and tools our grandparents would never have thought were possible and to provide the kind of life for our family that they couldn’t have imagined. And it’s important to us to ensure that other people of color have a chance to contribute to the field and reap its benefits. With all those things on my mind, I left the conference energized and inspired by the ways in which tech is changing the world and the possibilities it holds for our community.
At the same time, I’m terrified for what these advances mean for my two young children. The same technology that’s the source of so much excitement in my career is being used in law enforcement in ways that could mean that in the coming years, my son, who is 7 now, is more likely to be profiled or arrested — or worse — for no reason other than his race and where we live.
Of course I’m not alone in feeling that technology is both a gift and a curse. This tension exists for anyone who enjoys the real-time conversations on Twitter but loathes the trolls, loves Facebook but abhors fake news, or depends on the convenience Alexa offers but frets about violations of privacy.
Yet in my life the tension is especially acute, because artificial intelligence and machine learning are increasingly being used by law enforcement, and the technology seems to be growing up alongside my kids.
Unjust racial profiling and resulting racial disparities in the criminal justice system certainly don’t depend on artificial intelligence. But when you add it — as many law enforcement agencies across the country, including those in major cities like Miami, Los Angeles, Philadelphia, Atlanta and New York, have over the past couple of years — things get even scarier for black families.
This is especially frightening when combined with the fact that the current administration has already begun to reverse Obama-era criminal justice reform policies that were meant to make the system more just.
A.I. works by taking large volumes of information and distilling it into simple concepts, categories and rules, and then predicting future responses and outcomes. How it does so is a function of the beliefs, assumptions and capabilities of the people who do the coding. A.I. learns by repetition and association, and all of that is based on the information we, humans who hold all the racial and often specifically anti-black biases of our society, feed it.
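To make that concrete, here is a deliberately simplified sketch in Python. The data and the "model" are invented for illustration; the point is the mechanism: a system that estimates risk from records of past enforcement can only echo the patterns in whatever records it is fed.

```python
# Hypothetical illustration: a "model" trained on biased labels.
# The data is invented; the point is that the model can only echo
# the patterns in what it is fed.
from collections import Counter

# Each record: (neighborhood, was_flagged_by_past_policing).
# Suppose past enforcement concentrated on neighborhood "A".
training_data = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

# "Training": estimate a flag rate per neighborhood from the records.
flags = Counter(n for n, flagged in training_data if flagged)
totals = Counter(n for n, _ in training_data)
risk_score = {n: flags[n] / totals[n] for n in totals}

print(risk_score)  # {'A': 0.75, 'B': 0.25}
# The model now "predicts" that A is three times riskier than B,
# not because more crime happens there but because more policing did.
```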
Just think of how Google’s facial recognition programs labeled black people in photos “gorillas.” Or how Microsoft’s Tay, a bot designed to engage in Twitter conversations, devolved into a racial-epithet-tweeting machine within 24 hours.
These downsides of A.I. are no secret. Despite this, state and local law enforcement agencies have begun to use A.I.-driven predictive policing applications like HunchLab, which combines historical crime data, moon phases, location, census data and even professional sports team schedules to predict when and where crime will occur, and who is likely to commit or become a victim of certain crimes.
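HunchLab's internals are proprietary, so the sketch below is only a hypothetical illustration of how inputs like these might be combined into a score for a patch of the map; the features and weights are invented. What it shows is how much of such a score can rest on records of past enforcement.

```python
# Hypothetical sketch only; not HunchLab's actual (proprietary) model.
from dataclasses import dataclass

@dataclass
class CellFeatures:
    past_incidents: int    # historical crime records for this map cell
    median_income: float   # census data
    home_game_today: bool  # professional sports schedule
    full_moon: bool        # moon phase

def risk_score(f: CellFeatures) -> float:
    # Invented weights, for illustration. Note how heavily the score
    # leans on past_incidents, i.e., on where police made records before.
    score = 0.8 * f.past_incidents
    score += 0.1 * (1 if f.home_game_today else 0)
    score += 0.05 * (1 if f.full_moon else 0)
    score += 0.05 * max(0.0, 60000 - f.median_income) / 60000
    return score

# Cells with the highest scores are where patrols would be sent.
print(risk_score(CellFeatures(past_incidents=12, median_income=35000,
                              home_game_today=True, full_moon=False)))
```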
The problem with historical crime data is that it’s based upon policing practices that already disproportionately home in on blacks, Latinos and those who live in low-income areas.
If the police have discriminated in the past, predictive technology reinforces and perpetuates the problem, sending more officers after people we already know are targeted and unfairly treated, as shown by recent evidence like the Justice Department’s reports on Ferguson, Mo., and Baltimore, and the findings of the San Francisco Blue Ribbon Panel on Transparency, Accountability and Fairness in Law Enforcement.
It’s no wonder criminologists have raised red flags about the self-fulfilling nature of using historical crime data.
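A toy simulation shows how that loop feeds itself. In the hypothetical below (invented numbers, not any vendor's actual algorithm), two neighborhoods have exactly the same underlying offense rate, but patrols are allocated according to historically recorded incidents, and new records are generated only where officers are sent to look.

```python
# Hypothetical feedback-loop simulation; not a real predictive-policing system.
import random

random.seed(0)
true_offense_rate = 0.1         # identical in both neighborhoods
recorded = {"A": 60, "B": 40}   # historical records already skewed toward A

for year in range(5):
    total = sum(recorded.values())
    # Allocate 100 patrols in proportion to past recorded incidents.
    patrols = {n: round(100 * recorded[n] / total) for n in recorded}
    for n, officers in patrols.items():
        # Offenses are detected only where patrols are actually looking.
        detections = sum(1 for _ in range(officers)
                         if random.random() < true_offense_rate)
        recorded[n] += detections
    print(year, patrols, recorded)

# Year after year, "A" keeps drawing the larger share of patrols and,
# on average, the larger share of new records; the initial skew is
# perpetuated even though the true offense rate is identical in both places.
```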
This hits close to home. An October 2016 study by the Human Rights Data Analysis Group concluded that if the Oakland Police Department had used its 2010 drug-crime records as the basis of an algorithm to guide policing, the department “would have dispatched officers almost exclusively to lower-income, minority neighborhoods,” even though public-health-based estimates suggest that drug use is far more widespread, taking place in many other parts of Oakland, the city where my family and I live.
Those “lower-income, minority neighborhoods” contain the barbershop where I take my son for his monthly haircut and our favorite hoagie shop. Would I let him run ahead of me if I knew that simply setting foot on those sidewalks would make him more likely to be seen as a criminal in the eyes of the law?
The risks are even more acute (and unavoidable) for those who can afford to live only in the neighborhoods that A.I. would most likely lead officers to focus on.
There’s yet another opportunity for racial bias to infuse the process when risk-assessment algorithms built with A.I. and machine learning are used to help sentence criminals, as they already are in courts around the country.
Without a commitment to ensure that the data being used to fuel A.I. doesn’t replicate historical racism, biases will be built into the foundation of many “intelligent” systems shaping how we live. It’s not that I want this technology to be rejected. There are ways to make A.I. work. But before it is used in law enforcement, it must be thoroughly tested and proven not to disproportionately harm communities of color.
Until then, my excitement about advances in tech will remain cautious. Innovation is at the core of the careers that allow me and my husband to provide a good life for our family. The same innovation, if not used properly, could take it all away.