Sound Vision

In 2009 I began work on a music visualizer made in the MaxMSP/Jitter programming environment. I recently updated this software, so I thought I’d make a post about it.

Sound Vision includes two visualization types: a stereo FFT visualizer and a Bark scale visualizer. Each has a variety of user-alterable parameters that both modify how the visualizers handle musical input (e.g. pre-gain) and directly alter their visual output (e.g. video feedback).
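Sound Vision itself is built from MaxMSP patches, but the core idea behind the Bark scale module, grouping linear FFT bins into perceptually motivated critical bands, can be sketched in a few lines of Python. This uses the standard Zwicker–Terhardt approximation of the Bark scale; the function names and band count here are illustrative, not Sound Vision's internals.

```python
import math

def hz_to_bark(f):
    """Zwicker & Terhardt approximation of the Bark scale."""
    return 13.0 * math.atan(0.00076 * f) + 3.5 * math.atan((f / 7500.0) ** 2)

def group_bins_into_bark_bands(magnitudes, sample_rate, n_fft):
    """Sum FFT bin magnitudes into the 24 critical (Bark) bands.

    `magnitudes` holds the first n_fft//2 + 1 bins of a real FFT.
    """
    bands = [0.0] * 25  # Bark values span roughly 0..24
    for k, mag in enumerate(magnitudes):
        freq = k * sample_rate / n_fft      # center frequency of bin k
        band = min(int(hz_to_bark(freq)), 24)
        bands[band] += mag
    return bands
```

The payoff for a visualizer is that each displayed band then corresponds to a roughly equal perceptual distance, rather than the linearly spaced (and perceptually lopsided) bins a raw FFT display gives you.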

Here is a video that demonstrates the interface and visualization of Sound Vision:

The purpose of this software is to aid in the analysis of electroacoustic music, a genre whose characteristics (dynamic spatialization and stark timbre changes, for example) are not found in the music commonly analyzed with other visualizers. In addition, the software can be used to visualize music for entertainment or accompaniment purposes (it has been used in concert before).

Lastly, I’m including a paper I recently wrote that describes the Bark scale visualization module in detail.

Sound Vision 2.0: Bark Scale Visualizer

Medium Mapping

Since 2009, I’ve been interested in exploring medium mapping, that is, using data/gestures from one medium (motion, for example) to generate gestures in, or modify parameters of, another medium (audio, for example).
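As a concrete (and entirely hypothetical) sketch of what such a mapping can look like in code: the function below derives two audio parameters from the amount of motion between two video frames. The frame representation, parameter ranges, and names are illustrative assumptions, not the API of any of the pieces described here.

```python
def map_motion_to_audio(prev_frame, frame, base_gain=0.2):
    """Hypothetical medium mapping: derive an audio gain and a filter
    cutoff from the amount of motion between two grayscale frames.

    Frames are flat lists of pixel intensities in 0..255; the ranges
    chosen for gain and cutoff are illustrative, not any real patch's.
    """
    # Mean absolute frame difference as a crude motion estimate (0..1).
    diff = sum(abs(a - b) for a, b in zip(prev_frame, frame))
    motion = diff / (255.0 * len(frame))
    gain = min(1.0, base_gain + motion)    # more motion -> louder
    cutoff_hz = 200.0 + motion * 8000.0    # more motion -> brighter
    return gain, cutoff_hz
```

The interesting design questions in medium mapping live in functions like this one: how directly the source gesture should show through in the target medium, and how much abstraction to introduce between the two.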

This interest first took shape in Motion-Influenced Composition, a piece created for Per Bloland’s Advanced DSP class at Oberlin Conservatory. It used Jean-Marc Pelletier’s OpenCV objects for MaxMSP to parse gestures and scene changes from a video camera’s feed, which were then used to generate synthesized sounds in real time. Shortly afterwards, I extended the piece to include a video component created in Jitter (mapping the audio medium onto video). A video of a performance can be found here.

(To digress for a moment, the video component ended up being quite versatile: it was a stereo FFT spectrogram that colored different frequencies based on their timbre and had many customizable parameters. A video of it in use can be seen here.)

My interest in medium mapping continued in the years that followed, and I did a good deal of work in film sound design (which could fall under the category of audio mapped to video, or vice versa). This past summer, as an artist in residence at the Atlantic Center for the Arts, I revisited the idea from a real-time perspective and created Transference, a recasting and refinement of some of the ideas present in Motion-Influenced Composition.

Transference again uses the OpenCV MaxMSP objects to extract gestures and other information from a video camera, but uses Processing to create the video. A number of other differences exist: the sounds in Transference are not synthesized (as they were in Motion-Influenced Composition) but are instead samples of voices, and the video component is three-dimensional rather than the 2-D video of Motion-Influenced Composition. Because the sound material is real-world and the video is not a direct representation of the audio (i.e., a spectrogram), there is a great deal more abstraction in the medium mapping of Transference than in Motion-Influenced Composition. Playing with the abstraction between mapped mediums fascinates me, and I hope to explore it more in the future. A video of a performance of Transference can be found at 40:32 in this video.

I have also written a draft of a scholarly paper on Transference and Motion-Influenced Composition, which can be seen below.

Motion-Influenced Composition and Transference: Performance Experiments in Medium Mapping