If a picture is worth a thousand words, what happens after it’s wrung through a million calculations? That’s a question being asked (and answered) by Diana Lange. She takes relatively typical portrait photographs, then feeds them to snippets of code she’s constructed in Processing.
The results are probably unlike anything you’ve ever seen, and for good reason. Lange doesn’t start her process with any aesthetic in mind. Rather, she roughly coaxes the software to do her bidding.
“The great thing about working with algorithms is you don’t necessarily have to have a visual idea,” Lange tells Co.Design. “You can get there by messing around with code and parameters.”
Lange began her experiments inside a pure particle system, where various physics-based parameters shaped how the particles interacted to create unforeseeable images. But her artistic breakthrough happened when she decided to connect these particle systems with actual image data. The results fall somewhere between Kinect wireframes and studio portraits, with a finish ranging from rough polygons to highly detailed 3-D skin.
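To get a feel for what “connecting a particle system with image data” can mean, here is a minimal, hypothetical sketch (not Lange’s actual Processing code): a small brightness grid stands in for a portrait photograph, and each particle drifts toward its brightest neighboring pixel, with random jitter supplying the chaos half of the chaos-and-order balance she describes. The grid, jitter rate, and update rule are all illustrative assumptions.

```python
import random

random.seed(1)

W, H = 8, 8
# Synthetic "portrait": brightness peaks at the center cell (4, 4).
brightness = [[max(0, 255 - 40 * (abs(x - 4) + abs(y - 4))) for x in range(W)]
              for y in range(H)]

def step(p, jitter=0.5):
    """Move a particle one cell toward its brightest neighbor, plus jitter."""
    x, y = p
    neighbors = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 if 0 <= x + dx < W and 0 <= y + dy < H]
    # Attraction to the image data (order)...
    tx, ty = max(neighbors, key=lambda n: brightness[n[1]][n[0]])
    # ...occasionally overridden by randomness (chaos).
    if random.random() < jitter:
        tx, ty = random.choice(neighbors)
    return (tx, ty)

# Scatter particles at random, then let the image pull them into shape.
particles = [(random.randrange(W), random.randrange(H)) for _ in range(20)]
for _ in range(30):
    particles = [step(p) for p in particles]

near_center = sum(1 for x, y in particles if abs(x - 4) <= 2 and abs(y - 4) <= 2)
print(near_center, "of", len(particles), "particles settled near the bright center")
```

Over enough steps, most particles cluster on the bright region, tracing the underlying image the way a few points in her portraits give rise to a whole structure; drawing the particles (or lines between them) each frame would yield the wireframe-like renderings described above.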
“I look for balance between chaos and order, mathematical correctness and errors, as well as an area of conflict in micro and macro structures,” she explains. And while that may sound esoteric at first pass, look deeply into these images. See how just a few points can give rise to a whole framework of abstraction or accuracy. Even without understanding her work in Processing, you can still decipher how she operates, balancing in the gray area between sheer computation and her own digital photography. And in generative art, this pocket is precisely where the artist lives, sharing the creative process with a computerized cousin. It just so happens that, with algorithmic portraiture, the process and the end product both depict man mixing with machine.
[Hat tip: But Does It Float]