There is no difference between computer art and human art

In December 1964, during a single night session in Englewood Cliffs, New Jersey, John Coltrane and his quartet recorded the entirety of A Love Supreme. The album is widely considered Coltrane’s masterpiece – the culmination of his spiritual awakening – and has sold a million copies. What it represents is all too human: an escape from addiction, a devotional quest, a hymn to God.


Five decades later and some 50 miles away, over 12 hours one April night, fueled by Monster energy drinks in a spare bedroom in Princeton, New Jersey, Ji-Sung Kim wrote an algorithm that teaches a computer to play jazz. Kim, a 20-year-old Princeton sophomore, was in a rush – he had a quiz the next morning. The resulting neural-network project, called deepjazz, was released on GitHub, generated a buzz of excitement and skepticism among Hacker News commenters, garnered 100,000 plays on SoundCloud, and drew media attention as far away as Japan.

The half-century in between, bookended by saxophone brass and Python code, has seen a profusion of computer-generated music and visual art of every method and genre. Computational art in the era of big data and deep learning, however, aspires to something grander: Art, capital A. We must now come to grips with computer art – whether to wrestle with it or to embrace it.

In industry, the algorithmic tension is blunt: “Efficiency! Capitalism! Commerce!” versus “Robots are stealing our jobs!” But for algorithmic art, the tension is subtler. According to the consulting firm McKinsey and Company, only 4% of the work done in the US economy requires “creativity at a median human level”. So for computer art – which is explicitly aiming at this small slice of the vocational pie – the question is not one of efficiency or fairness, but of trust. Art requires emotional and psychic investment, with the promised return of a shared piece of human experience. When we look at computer art, the disturbing, frightening concern is: who is on the other end of the line? Is it human? If not, we might worry that it is not art at all.

The promise of algorithms has powerful popular appeal. A search for the word “algorithm” on the website of the empirically minded FiveThirtyEight (where I’m a staff writer) returns 516 results as of this writing. I am personally responsible for a number of them. In the age of big data, algorithms are expected to treat disease, predict Supreme Court decisions, revolutionize sports, and judge the beauty of sunsets. They will also, it is said, prevent suicides, improve rockets, predict police misconduct, and tell whether a movie will bomb.

The grandest potential applications of algorithms and artificial intelligence (AI) are often preceded by ostensibly more manageable proving grounds – games, for example. Before IBM’s question-answering computer, Watson, treated cancer, it competed on the TV quiz show Jeopardy! Google’s AlphaGo took on a great human Go champion in a “Grand Challenge” for AI. But these competitions aren’t mere stepping stones – they can be seen as affronts to humanity. One commenter, realizing Google’s program would win the game, said he felt physically ill.

It’s much the same for computer art projects. Kim and his friend Evan Chow, whose code deepjazz builds on, are the youngest generation in a long line of computer “artists”. (The two aren’t exactly starving artists. This summer, Kim is working at Merck and Chow at Uber.) As the three of us sat in a high-backed wooden booth at Café Vivian on the Princeton campus, honest-to-God human jazz was in fact playing over the speakers – Rahsaan Roland Kirk’s frenetic “Pedal Up” (1973) – and as Kim played me deepjazz-generated samples from his laptop, we were plunged into an unholy jazz+jazz=jazz moment.

“The idea is quite deep,” Kim said, as I strained to pick out what was human in the cacophony. “You can use an AI to create art. It’s a process we normally think of as immutably human.” Kim agreed that deepjazz and computer art are often a testing ground, but he saw ends as well as means. “I’m not going to use the word ‘disruptive’,” he said, then continued: “It’s crazy how AI could shape the music industry,” imagining an app built on technology like deepjazz. “You hum a melody and the phone plays your own custom AI-generated song.”

Like a pre-profit startup, much of the value of computer art projects so far lies in their perceived promise. The public deepjazz demo is limited, improvising off a single song – “And Then I Knew” (1995) by the Pat Metheny Group. (Kim wasn’t quite sure how to pronounce “Metheny”.) But the code is public, and it has been modified to mangle the Friends theme song, for example.
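To get a feel for what “improvising off a single song” means, consider a deliberately simplified sketch. Deepjazz itself trains a neural network on the source tune; the toy below is not that – it is just a first-order Markov chain, my own illustrative stand-in, which learns note-to-note transition statistics from one melody and samples a new one from them. The note numbers are invented for the example.

```python
import random

def train_transitions(notes):
    """Count note-to-note transitions in a melody (a first-order Markov model)."""
    model = {}
    for a, b in zip(notes, notes[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length, seed=0):
    """Sample a new melody by walking the learned transitions."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        choices = model.get(melody[-1], [start])  # dead end: restart at the opening note
        melody.append(rng.choice(choices))
    return melody

# A toy "training melody" (MIDI pitch numbers), standing in for a real song.
source = [60, 62, 64, 62, 60, 64, 65, 64, 62, 60]
model = train_transitions(source)
new_melody = generate(model, start=60, length=8)
print(new_melody)
```

Every interval in the output occurs somewhere in the source tune, so the result sounds vaguely like the original while never quite repeating it – the same basic trade-off, vastly more sophisticated in deepjazz’s case, that makes a single-song demo both charming and limited.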

Of course, it’s not just jazz, and not just deepjazz, that has been computerized – jigs and folk songs, a “Genetic Jammer”, polyphonic music, and more have been generated algorithmically.

Visual art, too, has been subject to algorithms for decades. Two engineers created this image – probably the first computer nude – at Bell Labs in Murray Hill, New Jersey, geographically somewhere between Coltrane and Kim, in 1966. The piece was exhibited at the Museum of Modern Art in 1968.

The New York Times reviewed one of the first exhibitions of computer art in 1965 (just months after Coltrane’s recording session); it featured the work of two scientists and an IBM 7094 digital computer, at a now long-closed New York gallery. “So far the means are more interesting than the end,” the Times wrote. But the review, by the late Stuart Preston, continued in a surprisingly enthusiastic vein:

No matter what the future holds – and scientists predict a time when almost any kind of painting can be computer-generated – the actual touch of the artist will no longer play a part in the making of a work of art. When that day comes, the role of the artist will be to formulate mathematically, by arranging sets of points in groups, a desired pattern. From then on, all will be entrusted to the deus ex machina. Freed from the tedium of technique and the mechanics of picture-making, the artist will simply “create”.

The machine is just the brush – a human still holds it. And there are indeed examples of computers helping artists to simply “create”.

Emily Howell is a computer program, a 1990s creation of David Cope, now professor emeritus at the University of California, Santa Cruz. “She” was born of Cope’s frustrating struggle to finish an opera of his own. (Howell’s compositions are performed by human musicians.)

This music is passable. It might even be good, and, for me, it sits safely on the right side of the uncanny valley. But part of what makes it interesting is the simple fact that I know a computer composed it. I’m interested in the computer as medium – an amplification of Cope’s artistic expression rather than a replacement for it. But the tension persists.

I also fell down other rabbit holes: for starters, the work of Manfred Mohr, an early pioneer of algorithmic art who is himself a (human) jazz musician as well as a visual artist – notably his painting P‑706/B (2000), based on a six-dimensional hypercube. I spent the next hour reading about Mohr, the man.


Courtesy of Manfred Mohr

Sometimes in “computer music” it’s the other way around: the humans call the tune and the software dances to it. And in one such case, the market has spoken loud and clear. Vocaloids are singing synthesizers, developed by Yamaha and anthropomorphized by the Japanese company Crypton. One popular Vocaloid, Hatsune Miku (the name translates roughly to “the first sound of the future”), headlined a North American tour this year, appearing as a hologram and drawing lines around the block for $75 tickets at New York’s Hammerstein Ballroom. Miku is a huge pop star, but not a human. “She” has also appeared on the Late Show with David Letterman.

And it’s no longer just dorm-room hackers and cloistered academics pecking away at computer art to show off their skills or publish papers. Last month, the Google Brain team announced Magenta, a project to use machine learning for just the purposes described here, posing the question: “Can we use machine learning to create compelling art and music?” (The answer is already pretty clearly “yes”, but there you go.) The project follows in the footsteps of Google’s Deep Dream Generator, which reinvents images in artistic, dreamy (or nightmarish) ways using neural networks.

But the honest-to-God truth, at the end of all this, is that the whole notion is kind of a sham: a distinction without a difference. “Computer art” doesn’t really exist in any more provocative sense than “paint art” or “piano art” does. The algorithmic software was written by a human, after all, using theories thought up by a human, on a computer built by a human, to specifications written by a human, from materials gathered by a human, at a company staffed by humans, using tools built by humans, and so on. Computer art is human art – a subset rather than a distinction. And that should relieve the tension.

Another human commentator, after watching the program beat the human champion at Go, felt physically fine and struck a different note: “An amazing result for technology, and a compliment to the incredible abilities of the human brain.” So it is with computer art. It’s a compliment to the human brain – and a complement to oil paint and saxophone brass.

Oliver Roeder

This article originally appeared on Aeon and has been republished under Creative Commons.