
AI is remaking the world on its terms, and that’s a problem

Artificial intelligence is making it harder for humans to have agency in their own lives.


By Liz Stinson | 7 minute read

The other day, my mom sent me an email to tell me what GPT-4 said about me. My mother is an attorney and ethics counsel at the State Bar of Wisconsin, where she specializes in emerging technologies, so this wasn’t necessarily out of the blue. But what GPT-4 told her about me was wildly inaccurate. While it was correct about the university where I am a tenured professor, it was wrong about my home department, my credentials, and pretty much everything else. 

After my mom and I had a nice laugh, I opened Instagram and saw that Mar Hicks (whose work I’m a big fan of) had recently posted that GPT-4, in describing another scholar’s work, attributed to them a book they never wrote. I opened my email again and saw that the website AI Weirdness had a very upsetting story about Bing’s GPT-powered search engine doubling down on the existence of a nonexistent post, which it subsequently fabricated to prove its own point. Ugh.

I walked out to my car to start my commute to campus, a commute I’ve made for nearly ten years. But, like many people, no matter where I drive, I open Google Maps and let it suggest the optimal route to my destination. After a meeting and before running to class, I checked my email, and, given the limited time I have for my research these days, Academia.edu’s recommendation of a particular article to read seemed enticing. It then suggested that I use its AI “summary” feature to save time, telling me I could “read” 37 journal articles in the time it would take me to actually read one, or that I could have saved 12,082 minutes.

Meanwhile, in the offices of the deans and the provost, administrators are using Academic Analytics, which includes a predictive-analytic “discovery suite” that supposedly helps with strategic decisions about the university: decisions that include inferences and recommendations about individual faculty members’ performance, areas of investment to grow, and grants to apply for.


Who knows? Maybe Academic Analytics will tell my university to adopt something like EduSense, a “Fitbit for your teaching skills,” which uses gesture- and facial-expression-recognition technology to make inferences about student engagement and offers professors recommendations for changing how they teach to be more “engaging.” And of course, once I’m home, the predictive-analytic convenience and efficiency continues, with my evening chock-full of recommendations from Amazon, Instagram, Netflix, the sleep app I use, and so on.

All of this computational prediction, anticipation, and recommendation—designed with the intention to enhance convenience or efficiency—has led to something potentially much more ominous. It’s creating a world that privileges machine-legibility over human-legibility; a world where an entry someone occupies in a company’s database is considered more real than that person’s lived experience. This has material, and often dire, consequences. 

And it’s about to get much worse, as the hype around GPT and other emergent “AI” applications grows. If we are not careful, we will soon live in a world that functions in a way that is completely incomprehensible to people but perfectly suited to stochastic parrots. We will be completely clueless about why certain things are happening to us, why we’ve been classified in particular ways, allowed or disallowed certain social privileges, criminalized, or potentially even killed. 

Before things get even more dystopian than they currently are [gestures around], we need to do more than create industry-friendly guardrails. Regulation, litigation, prohibition, and, if necessary, sabotage are essential to combating the pursuit of complete machine-readability. 

For over a decade, critics have warned about the role computation plays in shaping our everyday experiences. In 2011, for example, John Cheney-Lippold wrote that algorithmic systems of inference and recommendation “configure human life by tailoring its conditions of possibility.” This is an apt and succinct way to describe much of my daily life. But back in 2013, when I started my own work on algorithmic inference and recommendation, GPT-4 was still a long way off. We are no longer captivated or scared by recommendation systems; they have become an accepted part of the “smart-object-ification” of everyday life, embedded everywhere, including our thermostats and refrigerators.


There is, however, a dangerous continuity between once-novel algorithmic inference and recommendation systems and generative AI, like GPT-4. One of the main arguments of my new book, Interfaces and Us, is that we are remaking the world and ourselves on computing’s terms. In the pursuit of technological efficiencies, we will eventually sacrifice the agency to make decisions about the things that matter most in our own lives.

That tradeoff between efficiency and agency? Data brokers are an essential part of it. I recently downloaded all of the data that Acxiom, one of the world’s largest data brokers, has on me. It’s 76 single-spaced pages, and not much of it is “accurate” (at least according to my lived experience).

But Acxiom is one of the largest data brokers in the world, which means nearly every interaction I have with a corporate entity is likely influenced by this data in some way, and my everyday life is therefore shaped by it in significant ways. This is what I meant when I said that, oftentimes, the entry a person occupies in a company’s database is more real than their lived experience.

I have been thinking about what happens when the data Acxiom holds is paired with something like GPT-4, which can pass the bar exam at the 90th percentile but also produces materially significant inaccuracies. Our lives will no longer be our own. Our world will have been remade completely on machinic terms, and we will be the ones suffering the consequences.

In other words, soon we’re going to have no idea why certain things are happening to us. If we trust that “intelligence” is truly part of “AI” and that GPT-4 will eventually get things right, we will experience massively deleterious effects on the material conditions of everyday life, particularly those of us who do not own shares in the companies doing the damage.

Nothing epitomizes this dystopian future-present better than Franz Kafka’s The Trial, which opens with the main character being arrested without evidence or warrant for an unspecified crime. I first encountered a connection between The Trial and contemporary technology when reading Daniel Solove’s prescient and wonderfully titled essay, “‘I’ve Got Nothing to Hide’ and Other Misunderstandings of Privacy.”

Solove argues compellingly that our world has become like The Trial, “which depicts a bureaucracy with inscrutable purposes that uses people’s information to make important decisions about them, yet denies the people the ability to participate in how their information is used.” The problems of such a society, Solove continues, “are problems of information processing.” Put another way: the correlations and inferences “AI” systems make are often impenetrable to those most impacted by them.

It is essential to see “innovations” like GPT-4 on these terms: to see the way the world is being remade on the terms of computing, and how we humans will deal with the fallout. Just like in Kafka’s story, these new AI systems will have made determinations based on correlations run across massive data sets, inferences drawn from those correlations, and the kinds of inaccuracies that plague GPT-4’s descriptions of nearly everything.

It’s important to note that incarcerated people, especially those from marginalized communities, have already experienced this dire future through things like recidivism-prediction software, whose profound bias and inscrutable logic foreshadow a world we are all about to enter. Without activist intervention, humans will become bystanders to communications between computational systems of which we are neither the audience nor the users, but whose material repercussions we bear.

Not only are we forfeiting our human agency, but we will also see a litany of incidents provoked by a deluge of misinformation, harm that will most likely fall on those who are already the most vulnerable in society. We will see a continued slide into precarity for every sector of labor. Meanwhile, people will be incarcerated, investments depleted, immigration statuses changed, and lives lost based on criteria and decisions that are completely inscrutable to people and that exclusively serve the optimization-seeking behavior of computing.


There needs to be a movement in the US in solidarity with other movements, perhaps most importantly the reenergized labor movement, that seeks to eliminate the “AI” technologies that are reshaping our world on machinic terms.

There is, of course, a precedent for this: the Luddites. Luddism was never anti-technology; it opposed technologies that exploited labor and furthered the interests of the capitalist class. ChatGPT is no different. It’s a lot cheaper to pay people in places where labor is already very cheap to train a model like ChatGPT than it is to pay the US lawyer it would supposedly “replace.”

Those who will reap the benefits of ChatGPT will be those already in positions of wealth and power, and we should not fool ourselves into thinking otherwise. The rest of us will continue our slide into unbearable instability. We need to take a step back and realize that we don’t need many of the advancements in computing that have been marketed to us as essential for optimizing our lives and our world. 

In a better, more equitable, more just world, one that redistributes the enormous wealth that already exists, innovation would function entirely differently. This is why understanding ChatGPT’s use as an intersectional issue connected with global political-economic history—one that is built on exploitation—is essential. And it’s why a radical response is necessary.
