
Intersection of Computation and Visuals

I wanted to quickly discuss a few programmers/artists who are doing work that explores the intersection of computation and visual art. These people have inspired me to create video art and interactive visual systems, to think about algorithmic/procedural composition tools in interesting ways, and also to rethink what a “gallery space” is in the 21st century.

First is Jared Tarbell’s Complexification | Gallery of Computation.

Tarbell states:

I write computer programs to create graphic images. 

With an algorithmic goal in mind, I manipulate the work by finely crafting the semantics of each program. Specific results are pursued, although occasionally surprising discoveries are made.

The images, interactive visual systems, and code that Tarbell creates are beautifully concise and free of any artistic “fat” (put another way, artistic choices are backgrounded to facilitate a deeper appreciation of the processes and algorithms at work, e.g. using a neutral, muted, black-and-white, or rainbow color palette). Depending on your viewpoint, this choice may make these works more or less engaging. A great deal of the work is done in Processing. For more (albeit a bit older) work by Tarbell, check out the website of his company, Levitated.

Next is Mario Klingemann’s (AKA “Quasimondo”) Incubator.

This gallery has a huge range of work, from utilitarian to silly to artistic. The level of interactivity also varies, from programs that tap into the user’s webcam to ones that are simply smartly programmed algorithms. These programs are primarily created with Adobe Flash or Processing. Some of the more interesting ones are Feedback, Pinupticon, and Cityscapes. More works by Klingemann can be found on his Tumblr.

Last is Casey Reas’s website.

This website acts as a gallery for the huge number of works Reas has created over the past 10 years. Reas is the co-creator (along with Ben Fry) of Processing, and has been a catalyst for many changes in data visualization and internet typography design over the last 15 years. As with Tarbell’s work, these projects generally lack artistic “fat” or silliness, which, depending on your viewpoint, can be a positive or a negative. Regardless, they are impressively done and computationally beautiful. To get a look at some of the work that Reas and Fry have facilitated through Processing, check out the Processing Exhibition.

The creation of visualizations of algorithms, interactive visual programs, and, more generally, art that incorporates computation and visual elements has developed extensively in the past 30 years. Different forms of abstraction have extended the creation of visuals into the computational domain (through experiments by artists and programmers) and extended computation to the creation of visuals in new ways (through more accessible programming tools such as Processing, Flash, and others).

Programs that utilize live streams of data (whether mined from the internet, taken from a controller, or captured by video cameras) are of particular interest, as real-time video manipulation is catching up with the real-time audio capabilities that increasing computer speeds made possible in the 1990s.

Medium Mapping

Since 2009, I’ve been interested in exploring medium mapping, that is, using data/gestures from one medium (motion, for example) to generate gestures in, or modify parameters of, another medium (audio, for example).

This interest first took shape in Motion-Influenced Composition, a piece created for Per Bloland’s Advanced DSP class at Oberlin Conservatory. It used Jean-Marc Pelletier’s OpenCV objects for MaxMSP to parse gestures and scene changes from a video camera’s feed, which were then used to generate synthesized sounds in real time. Shortly afterwards, I extended the piece to include a video component created in Jitter (mapping the audio medium onto video). A video of a performance can be found here.
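The MaxMSP patch itself is graphical, but the core motion-to-sound idea can be sketched in Processing. The sketch below is a minimal illustration, not the patch described above: it assumes the standard video and sound libraries, uses simple frame differencing in place of the OpenCV gesture analysis, and maps the overall amount of motion onto the frequency and amplitude of a sine tone.

import processing.video.*;
import processing.sound.*;

Capture cam;
SinOsc osc;
float[] prev;  // brightness of the previous frame

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  osc = new SinOsc(this);
  osc.play();
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);
  cam.loadPixels();
  if (cam.pixels.length == 0) return;     // no frame yet
  if (prev == null) prev = new float[cam.pixels.length];
  // Sum per-pixel brightness change as a crude stand-in for gesture detection.
  float motion = 0;
  for (int i = 0; i < cam.pixels.length; i++) {
    float b = brightness(cam.pixels[i]);
    motion += abs(b - prev[i]);
    prev[i] = b;
  }
  motion /= cam.pixels.length;            // average change per pixel
  // The mapping itself: more motion produces a higher, louder tone.
  osc.freq(map(motion, 0, 40, 110, 880));
  osc.amp(constrain(motion / 40.0, 0, 1));
}

The mapping here is deliberately naive (a single scalar driving two synthesis parameters); the pieces discussed in this post extract richer gesture data, but the basic flow of analysis feeding synthesis is the same.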

(To digress for a moment, the video component ended up being quite versatile: it was a stereo FFT spectrogram that colored different frequencies based on their timbre and had many customizable parameters. A video of it in use can be seen here.)
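As a rough illustration of the underlying technique (not the Jitter patch itself, which was stereo and colored frequencies by timbre), a mono scrolling spectrogram can be sketched in Processing with the sound library’s FFT; the scaling constant below is an arbitrary placeholder.

import processing.sound.*;

AudioIn in;
FFT fft;
int bands = 256;
float[] spectrum = new float[bands];
int col = 0;                        // current column of the scrolling display

void setup() {
  size(800, 256);
  background(0);
  colorMode(HSB, 255);
  in = new AudioIn(this, 0);        // mono input; the original patch was stereo
  in.start();
  fft = new FFT(this, bands);
  fft.input(in);
}

void draw() {
  fft.analyze(spectrum);
  // Draw one column per frame: low frequencies at the bottom,
  // magnitude (crudely scaled) mapped to hue and brightness.
  for (int i = 0; i < bands; i++) {
    float mag = constrain(spectrum[i] * 5000, 0, 255);
    stroke(mag, 255, mag);
    point(col, height - 1 - i);
  }
  col = (col + 1) % width;          // wrap around instead of scrolling
}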

I remained interested in medium mapping in the years that followed and did a good deal of work in film sound design (which could fall under the category of audio mapped to video, or vice versa). This past summer, as an artist in residence at the Atlantic Center for the Arts, I revisited the idea from a real-time perspective and created Transference, a recasting and refinement of some of the ideas present in Motion-Influenced Composition.

Transference again uses the OpenCV MaxMSP objects to extract gestures and other information from a video camera, but uses Processing to create the video. A number of other differences exist: the sounds in Transference are not synthesized (as they were in Motion-Influenced Composition) but are instead samples of voices, and the video component is 3-dimensional rather than the 2-D video of Motion-Influenced Composition. Because the sound material is drawn from the real world and the video is not a direct representation of the audio (i.e., a spectrogram), there is a great deal more abstraction in the medium mapping of Transference than in Motion-Influenced Composition. Playing with the abstraction between mapped mediums is fascinating to me, and I hope to explore it more in the future. A video of a performance of Transference can be found at 40:32 in this video.
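This post doesn’t describe how the analysis data travels from MaxMSP to Processing; a common bridge for this kind of setup is OSC, so the sketch below assumes gesture data arrives as a hypothetical /gesture message (sent, for example, from [udpsend] in Max) via the oscP5 library and drives a simple piece of 3D geometry. The address pattern, port, and message layout are illustrative, not taken from Transference.

import oscP5.*;
import netP5.*;

OscP5 oscP5;
float gx = 0, gy = 0, energy = 0;   // most recent gesture data from Max

void setup() {
  size(800, 600, P3D);
  oscP5 = new OscP5(this, 12000);   // listen on port 12000 (hypothetical)
}

// Called by oscP5 whenever a message arrives, e.g. from [udpsend] in Max.
void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/gesture")) {   // hypothetical address pattern
    gx = msg.get(0).floatValue();
    gy = msg.get(1).floatValue();
    energy = msg.get(2).floatValue();
  }
}

void draw() {
  background(0);
  lights();
  // Map gesture position to rotation and gesture energy to scale.
  translate(width / 2, height / 2, -200);
  rotateY(map(gx, 0, 1, -PI, PI));
  rotateX(map(gy, 0, 1, -PI, PI));
  box(50 + energy * 200);
}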

I have also written a draft of a scholarly paper on Transference and Motion-Influenced Composition that can be seen below.

Motion-Influenced Composition and Transference: Performance Experiments in Medium Mapping