When Jackson Pollock splattered paint on canvas, he was labeled a genius. His contribution to abstract expressionism was seemingly boundless in its scope, but Pollock followed strict rules of his own devising. From the colors he chose to the constant flourish of his wrist, Pollock worked within tacit parameters to create distilled expression.
So if Pollock followed rules, could a computer’s logic do the same thing? City, Paint, Machine, by panGenerator, is an attempt to automate creativity. It’s a painting robot that interprets nearby street and sidewalk traffic as sprays of pigment.
“Although the paintings look abstract they are a result of some pretty precise calculations based on trajectories of people and cars moving on the street,” creator Piotr Barszczewski tells Co.Design. Yet that calculation, much like Pollock’s, is fascinating because of both its rules and its flexibility: logic surrounded by a constant element of randomness.
Barszczewski explains that the machine paints by processing at a few levels. The first is observed chaos: a camera tracking the people and cars moving down the street. The second is the software layer: a predetermined set of rules interpreting this signal.
So the robot is turning chaos into order, but then that order is injected with a bit more chaos. City, Paint, Machine uses a pressure-based paint system that has a level of unpredictability akin to Pollock’s dripping brush. “Even though the trajectories were controlled with some degree of precision there was always an element of randomness introduced by the mixing ratio of the paint, splashes made from the pressurized stream, and the time it took to dry,” Barszczewski writes. (Plus, on top of all that, there was always a level of human decision, such as how long the robot kept painting, from a few hours to a few days.)
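panGenerator hasn’t published the robot’s code, but the pipeline Barszczewski describes (tracked trajectories filtered through rules, with randomness layered back on top) might be sketched roughly like this. Every name, and the speed-to-pressure mapping, is a hypothetical stand-in, not the project’s actual logic:

```python
import random

def trajectory_to_spray(points, jitter=0.05, seed=None):
    """Turn a tracked trajectory (a list of (x, y) points) into spray commands.

    Hypothetical sketch: each segment of motion sets the nozzle position,
    segment speed sets the paint pressure, and random jitter stands in for
    the unpredictability of the pressurized stream.
    """
    rng = random.Random(seed)
    commands = []
    # Walk the trajectory segment by segment (consecutive point pairs).
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        pressure = min(1.0, speed / 10.0)        # faster motion -> stronger spray
        nx = x1 + rng.uniform(-jitter, jitter)   # chaos injected back into the order
        ny = y1 + rng.uniform(-jitter, jitter)
        commands.append({"x": nx, "y": ny, "pressure": pressure})
    return commands
```

The deterministic rule (speed to pressure) plays the role of the software layer, while the jitter plays the role of the paint’s physical unpredictability; in the real installation that second layer comes from the medium itself, not from a random-number generator.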
City, Paint, Machine isn’t the first painting robot, and it won’t be the last. But it is an interesting, parallel-world-style case study on the automation of art, and especially on the role of the artist as data interpreter. After all, when someone paints any landscape, they are really just filtering photons through rules in their mind’s eye, balancing the randomness of their medium with the steadiness of their hand. Can we earnestly claim that this robot is doing things all that differently?
[Hat tip: Triangulation]