A woman's head digitally disintegrating

Can algorithmic justice save us from the coming artificial intelligence dystopia?

I recently learned about an emerging field, algorithmic justice, from two friends I met in grad school. Algorithmic justice is a movement to address algorithmic bias in technology and artificial intelligence, which, as defined by the Algorithmic Justice League, is “like human bias [and] can result in exclusionary experiences and discriminatory practices.” As a self-proclaimed community organizer in our digital age, I was immediately interested in, and concerned about, how AI is relevant to feminism and other social justice movements.

Artificial intelligence—in contrast to natural intelligence that humans and animals have—is the capacity for a device to perceive its surrounding environment and take action to achieve a specific goal. AI is romanticized and dramatized in Hollywood films, yet its development is not mythical or a concern of the future—it is infiltrating our daily lives already. And without the development of ethical standards, AI has a tendency to exacerbate our pre-existing biases around race, gender, class, health, and so much more.

Zeynep Tufekci, self-proclaimed techno-sociologist, suggests in her recent TED Talk that AI is building a dystopia “one click at a time.” I’m sure many of us have had social media ads follow us around for a week, or have been shocked when a Facebook ad for something we didn’t realize we wanted appears and entices us to purchase it. At first glance, this use of digital technologies just seems like a contemporary version of TV commercials and subway ads, but Tufekci explains that it means so much more.

Persuasion architectures, which map consumers’ purchasing patterns in relation to sales patterns on a website, identify a person’s weaknesses based on their digital behavior and social media consumption. Every search, click, status update, and photo upload is data that is then used to build heuristics, rules of thumb that characterize us and support assumptions about our interests and future behavior. This form of data collection is ripe for discriminatory use.
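
To make the idea concrete, here is a toy Python sketch of how a stream of clicks and searches can be reduced to a targeting profile. Everything here is hypothetical: the event format, the `build_profile` function, and the threshold are invented for illustration, not drawn from any real ad platform’s code.

```python
from collections import Counter

def build_profile(events, threshold=3):
    """Toy sketch: turn a stream of click events into an interest profile.

    Each event is a (user_action, category) pair, e.g. ("click", "gambling").
    Categories hit at least `threshold` times are flagged as ad targets --
    a crude stand-in for the heuristics persuasion architectures infer.
    """
    counts = Counter(category for _, category in events)
    targets = {c for c, n in counts.items() if n >= threshold}
    return {"interest_counts": dict(counts), "ad_targets": targets}

events = [
    ("click", "travel"), ("search", "travel"), ("click", "travel"),
    ("click", "gambling"), ("search", "gambling"), ("click", "gambling"),
    ("click", "news"),
]
profile = build_profile(events)
# "travel" and "gambling" are flagged; a marketer could now target both.
```

The point of the sketch is how little it takes: a handful of repeated clicks is enough to flag a vulnerability, and nothing in the pipeline asks whether acting on it is ethical.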

Marketing companies and electoral organizers have long used assumptions about a community’s behavior in a variety of ways. That is arguably a good thing when you’re able to cut voter turf based on people’s likelihood to vote when it’s GOTV weekend and you can only knock on so many doors. Cathy O’Neil, mathematician and author of Weapons of Math Destruction, recognizes how vital tailoring messages is when mobilizing others, but cautions us: “What’s efficient for campaigns is inefficient for democracies.”

Let’s take flight information as an example. Tufekci explains that as machine learning absorbs every click, comment, search, and purchase, the system builds a profile of each of us in order to predict future behaviors, such as whether a person is likely to purchase a ticket to Las Vegas. It’s possible for machine learning to infer that people who are bipolar and entering a manic phase are likely to be prime customers given predicted spending and gambling habits, and for companies to therefore target people based on specific trends in their online behavior.

Tufekci anecdotally shares the story of a computer scientist who was faced with a decision about whether to exploit a persuasion architecture. He successfully tested the possibility of detecting mania from users’ social media posts before a clinical diagnosis was made. This is concerning because the computer scientist didn’t fully understand how or why the test worked; complex machine learning models are often black boxes, even to the people who build them. And it is troubling that there is no regulation on how such sensitive information may be used. In one scenario this tool could be used to identify who is likely in a manic state in order to connect them to mental health services; alternatively, it could be used to target them for flight purchases.

Algorithms also tend to create echo chambers and shape political behavior on the internet. After you search for a video on YouTube, the suggestion sidebar will serve you increasingly polarized videos. So if a person is watching a video about anti-choice activists or alt-right marches, the succeeding videos will morph into an echo chamber of extremist ideologies. The same is true for progressive viewpoints. Through this escalation, YouTube keeps viewers on its site longer than they intended to stay, and the lack of diversity in the video suggestions easily deepens political polarization.
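
A minimal sketch of the dynamic Tufekci describes: a hypothetical recommender that assumes slightly more extreme content keeps people watching, so each suggestion steps up in intensity. The catalog, the `extremity` scores, and the function are all invented for illustration; this is not YouTube’s actual algorithm.

```python
def recommend_next(current_extremity, catalog):
    """Toy engagement-maximizing recommender (hypothetical, not YouTube's).

    Assumes slightly-more-extreme videos retain viewers better, so each
    suggestion is the next step up in intensity from what was just watched.
    """
    more_extreme = [v for v in catalog if v["extremity"] > current_extremity]
    if not more_extreme:
        return max(catalog, key=lambda v: v["extremity"])
    return min(more_extreme, key=lambda v: v["extremity"])  # next step up

catalog = [{"title": t, "extremity": e} for t, e in
           [("mainstream news", 1), ("partisan talk", 2),
            ("conspiracy clip", 3), ("extremist rally", 4)]]

path, level = [], 0
for _ in range(4):
    video = recommend_next(level, catalog)
    path.append(video["title"])
    level = video["extremity"]
# path escalates: news -> partisan talk -> conspiracy -> extremist rally
```

No one designed this loop to radicalize anyone; escalation simply falls out of optimizing a single metric, watch time, with no counterweight for viewpoint diversity.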

Algorithms aren’t only tracking our individual social media habits for capitalist purposes; they are also increasingly used to digitize court proceedings and the criminal justice system. Recent pilot programs assign criminal defendants a computerized risk score that assesses the likelihood of recidivism, based on previous court decisions and the current prison population. As Michelle Alexander’s The New Jim Crow and Ava DuVernay’s 13th have shown, the criminal justice system is biased against people of color, especially Black and Latino defendants. Data can’t be neutral: the methods used to collect it and the people who input it are biased, and are therefore part of a larger systemic problem of racism.
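
A toy example of why such scores can’t be neutral: suppose a hypothetical risk score weighs a defendant’s prior arrests and the arrest rate in their neighborhood. Both features reflect where policing is concentrated rather than actual offending, so two people with identical conduct can receive different scores. The formula and weights below are invented for illustration, not drawn from any real risk-assessment tool.

```python
def risk_score(prior_arrests, neighborhood_arrest_rate):
    """Toy recidivism score: weights are hypothetical, not from any real tool.

    Because both inputs reflect policing patterns rather than behavior,
    the score inherits racial bias even though race is never an input.
    """
    return 2 * prior_arrests + 10 * neighborhood_arrest_rate

# Two defendants with identical conduct; one lives in an over-policed area.
same_conduct_a = risk_score(prior_arrests=1, neighborhood_arrest_rate=0.1)
same_conduct_b = risk_score(prior_arrests=1, neighborhood_arrest_rate=0.6)
# same_conduct_b comes out higher purely because of a proxy feature.
```

This is the core of the “data can’t be neutral” argument: removing race from the inputs doesn’t remove racism from the output, because the training data already encodes it.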

In October, Kate Crawford and Meredith Whittaker, cofounders of AI Now, published a policy report on the urgent need to create ethical standards to manage the use and impact of artificial intelligence. They wrote, “New ethical frameworks for AI need to move beyond individual responsibility to hold powerful industrial, governmental and military interests accountable as they design and employ AI.” In her interview with Wired.com, Crawford points out that “when we talk about ethics, we forget to talk about power.” Understanding power helps expose the systemic injustice rooted in the ways algorithms are used to exploit people’s behaviors and perpetuate inequity.

The Algorithmic Justice League, led by Joy Buolamwini, gives us hope as it continues to develop concrete steps for activists, artists, coders, companies, academics, and legislators to work together to create and implement ethical standards with an anti-oppressive lens on power. This weekend, November 17-19, Data for Black Lives will be livestreaming its conference, which will interrogate many of the concerns raised by Tufekci, O’Neil, Crawford, and Whittaker. As activists committed to mental health, racial justice, and feminism, it is important that we stay connected to the ways our digital age is directly impacting our personal lives and the communities we are tirelessly fighting alongside.


Amanda R. Matos, proud Nuyorican from the Bronx, NY, is the co-founder of the WomanHOOD Project, a Bronx-based youth-led organization for young women of color. She is dedicated to empowering communities of color through capacity building, political education, and civic engagement. Amanda has led community organizing and policy initiatives at Planned Parenthood of New York City and Girls for Gender Equity. She is currently pursuing a master's degree in public policy at the Harvard Kennedy School of Government as a Sheila C. Johnson Fellow. In her free time, Amanda eats doughnuts and watches great TV shows like Jane the Virgin and Black-ish.

Amanda R. Matos is a community organizer and reproductive justice activist from the Bronx, NY.

