Neural networks, a common type of artificial intelligence, are infiltrating every aspect of our lives, powering the internet-connected devices in our homes, the algorithms that dictate what we see online, and even the computational systems in our cars. But according to an article published in the peer-reviewed journal Big Data & Society by Anton Oleinik, a sociology professor at Memorial University of Newfoundland, there’s one crucial area where neural networks do not outperform humans: creativity.
Researchers have projected that automation may claim 800 million jobs around the world by 2030. Others suggest that as many as half of American jobs may be under threat from automation. But amid all the handwringing about robots taking people’s jobs, Oleinik’s analysis is further evidence that AI will likely only replace repetitive tasks that humans aren’t particularly skilled at to begin with. Even as AI creeps into creative fields, it is still only doing the work of recommending ideas to a human designer, who is spared some of the job’s mindlessness but still makes the final call about what a website or app will look like.
So why are neural nets so bad at being creative? Neural networks are machine learning algorithms composed of layers of calculations that excel at ingesting vast amounts of data and finding patterns within them. They fundamentally rely on statistical regression, which means that while they’re good at identifying patterns, they fail to anticipate when a pattern will change, let alone connect one pattern to an unrelated one, a crucial ingredient in creativity. “Scholars in science and technology studies consider the capacity to trace linkages between heterogeneous and previously unconnected elements as a distinctive human social activity,” Oleinik writes. Creativity would be impossible without radical predictions, and radical prediction is something regression analysis will never be able to do.
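To see the extrapolation problem concretely, consider a minimal sketch in plain Python. The data and the scenario here are hypothetical, invented for illustration rather than taken from Oleinik’s paper: an ordinary least-squares fit learns a trend perfectly, but when the underlying pattern changes, all it can do is project the old trend forward.

```python
# Toy illustration (hypothetical data, not from Oleinik's paper):
# a least-squares fit can only extrapolate the pattern it has already
# seen; it has no way to anticipate that pattern changing.

def fit_line(xs, ys):
    """Closed-form ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Training data follow a steady upward trend: y = 2x.
xs = list(range(10))
ys = [2 * x for x in xs]
slope, intercept = fit_line(xs, ys)

# Suppose the real-world trend flattens after x = 10 (a pattern change).
# The model cannot foresee this; it simply extends the old pattern.
predicted_at_15 = slope * 15 + intercept  # 30.0: the old trend, extrapolated
```

However sophisticated the model, the prediction at x = 15 is derived entirely from the regularities in the training range; a break in the pattern is, by construction, invisible to it.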
Second, to an algorithm, every pattern appears meaningful purely in proportion to how prevalent it is in the data, so neural networks fail to distinguish patterns that are meaningful from those that aren’t, another foundational element of creativity. Computers may come up with novel ideas, but those ideas may not be valuable, because value is a collective agreement, dictated by groups of people.
Finally, because neural networks do not understand, let alone incorporate, outside context, they are unable to make adjustments based on social norms and interactions beyond the realm of their specific purpose and data set. In other words, they lack social intelligence, which is important for creativity since “innovations are often embedded in social connections and relationships,” Oleinik says. For instance, an algorithm analyzing patterns among corporate leaders may well conclude that being male is an essential element of being a leader, a conclusion rooted in a history of bias and socially agreed to be false.
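The corporate-leaders example can be sketched with a deliberately crude, frequency-based “learner.” The numbers below are invented for illustration; the point is that prevalence is all such a model can measure, so a historically biased sample gets absorbed as if it were signal.

```python
# Toy illustration with invented numbers: a purely frequency-based
# "learner" trained on a biased sample absorbs the bias as signal,
# because prevalence is the only thing it can measure.
from collections import Counter

# Hypothetical records of corporate leaders, skewed by decades of bias.
leaders = ["male"] * 95 + ["female"] * 5
counts = Counter(leaders)

# To the model, "male" looks like a near-essential feature of leadership.
# Nothing in the data tells it that this regularity reflects social bias
# rather than anything about actual leadership ability.
share_male = counts["male"] / len(leaders)  # 0.95
```

Correcting for that bias requires knowledge that lives outside the data set, the kind of social context the model has no access to.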
Because neural networks are inferior to humans when it comes to identifying and interpreting symbols, acting socially, and making predictions, Oleinik doubts that neural net-powered artificial creativity will ever be able to match human creativity. “Creativity is hardly possible without one’s capacity to think metaphorically, to coordinate proactively and to make predictions that go beyond simple extrapolation,” he argues.
However, that doesn’t mean that neural nets aren’t excellent mimics of creativity. “In the words of a sociologist,” Oleinik writes, “a robot powered by neural networks may be a good [a]ctor, i.e. someone who closely follows the script, but not a [s]ubject, i.e. someone who meaningfully changes and rewrites the imposed rules.”
For instance, a neural net would be excellent at studying all of Picasso’s paintings and producing a new work that copies the famed artist’s style. In fact, many contemporary artists have played with neural networks in exactly this way, creating new portraits that look like they could have been painted by an old master but are in fact computer-generated.
But what a neural net may never be able to do is look at Picasso’s paintings and respond to them in a way that meaningfully adds to the artistic conversation by generating new patterns. The neural net itself can never be in dialogue with the artistic past without a human there to give it intent; it is only a shallow imitator, devoid of true meaning. As prominent AI artist Mario Klingemann pointed out when his first AI artwork was up for auction, he is the artist, not the computer.
Ultimately, neural nets are not designed for creativity. Instead, they are designed for a world with clean, precise data. Oleinik points out that a neural net’s ideal world is one stripped of data’s messiness, messiness that often comes from the unpredictability of human creativity. Take, for example, the optimal situation in which to create self-driving cars: roads where everybody, be they human or machine, follows the rules to a T, where there is no randomness whatsoever and everything is entirely predictable. Relinquishing human freedom on the road might be a trade-off that we’re willing to accept if it means no more traffic accidents, but Oleinik points out that a broader scheme to reduce human unpredictability is not only Orwellian; to function, it would have to stamp out creativity altogether.
It’s clear that for the time being, creativity will remain squarely the domain of humans. And perhaps, given neural nets’ inability to make creative inferences, interpret meaning, or understand social context, it should stay that way.