Sound & Video Collections

There are a number of really interesting and expansive collections of sounds, musical works, and videos on the internet, created by individuals over some prescribed period or in some prescribed number (365 days or 100 sounds, for example), or created by many artists under a common theme. I will be outlining three here.

First is Joo Won Park’s 100 Strange Sounds, a quirky collection of short videos with a focus on performance of “found sounds” manipulated by computer processing. For each video Park includes a short list of “Materials Used” and a descriptive “How It Was Made” paragraph. The videos vary wildly in terms of content, although Park includes “pieces with a similar sonic approach”, “complementary entries”, etc. at the end of many videos. Some of the most popular entries are No. 47, No. 3, and No. 37.

Second is a collection of Lumiere Videos, inspired by early French filmmaker Louis Lumière and created by Andreas Haugstrup Pedersen & Brittany Shoot. The Lumiere Videos have a manifesto that includes guidelines for creating the short videos:

• 60 seconds max

• Fixed camera

• No audio

• No zoom

• No editing

• No effects

There is huge variety in the places, people, and scenes depicted: boats in a harbor, a POV ride up an escalator, an artist finishing some works, a cityscape. I learned about this project by constantly running into these videos while searching for footage on archive.org, a great resource for copyright-free video. Because of the variety of their subjects, these videos infiltrate many searches, creating an interesting contrast to videos with multiple camera angles, quick editing, and blaring sound.

Last is a project by Joshua Goldsmith titled 365 Days of Sound. Each sound is 30 seconds long, and the collection ranges across synthesizer, Foley, instrumental, and environmental sounds. Goldsmith also uses Pure Data to process some sounds. This is the project I know least well, but looking through it I have found some interesting sounds.

These personal or community-based media collections are fascinating examples of projects by people with interests in audio/video, and they reveal some of the less visible aspects of creating music or video: experimentation, inspiration, and the unpolished work that goes into creating media.

"Industrial Revelations" Analysis II

Continuing to look at Natasha Barrett’s “Industrial Revelations”, I am developing a loose thesis based on three levels at which I appreciate, and seek to understand, her work in general and this piece specifically.

First, this work is sonically “masterful”: the sounds within it are rich, precisely composed, and highly varied, and I am interested in how Barrett achieves this. What effects and techniques are used to produce the sounds? How does she combine sounds to create gestures? A great deal of creatively applied convolution and phase-vocoder time-stretching is employed, possibly via software created by Øyvind Hammer, who I know has worked with Barrett in the past.
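
To make the first question more concrete, here is a minimal Python sketch of FFT-based convolution of two sounds, the sort of operation such software might perform. This is my own illustration, not Barrett’s or Hammer’s actual process, and the train/voice pairing in the comment is hypothetical.

```python
import numpy as np

def convolve_sounds(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Convolve two mono signals via the FFT: each sound's spectrum
    filters the other, blending their timbres."""
    n = len(a) + len(b) - 1
    size = 1 << (n - 1).bit_length()     # next power of two, for FFT speed
    spectrum = np.fft.rfft(a, size) * np.fft.rfft(b, size)
    out = np.fft.irfft(spectrum)[:n]
    return out / np.max(np.abs(out))     # normalize to avoid clipping

# e.g. imprint a train recording's resonance onto a voice (hypothetical files):
# result = convolve_sounds(train_samples, voice_samples)
```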

Secondly, as mentioned in my first post, I have broken the types of sounds within the piece into three broad categories: humanly-produced sounds, machine sounds, and environmental sounds. I am interested in seeing how Barrett uses groupings and juxtapositions of these sound types in this composition. Below you can see a spreadsheet containing (an incomplete set of) tagged data for each sound in the piece. “mac” indicates a machine sound, “env” an environmental sound, and “hum” a humanly made sound. When tagging these sounds I am also analyzing how they are produced, but have not yet determined a concise way to mark this.

[image]
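
For the curious, here is a toy sketch of how the tagged data might be represented and summarized once exported from the spreadsheet; the event list is invented placeholder data, not actual timings from the piece.

```python
from collections import Counter

# Hypothetical events: (onset time in seconds, tag). Placeholder values,
# not actual data from the piece.
events = [
    (0.0, "mac"), (2.3, "env"), (2.3, "hum"),
    (5.1, "mac"), (7.8, "env"),
]

counts = Counter(tag for _, tag in events)
print(counts)  # Counter({'mac': 2, 'env': 2, 'hum': 1})
```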

Lastly, the formal structure of the piece (and of other pieces by Barrett) seems closely related to that of other acousmatic works (sections with amorphous or sharply defined gestural boundaries, varying greatly in density and gestural rhythm), but it still holds a certain mystery for me. What is the logic behind the form? How are sections structured internally? The form is slowly revealing itself as I listen to the piece repeatedly and analyze which sound types (and combinations of them) are used to define formal subdivisions within the piece.

Below you can see a complete sectional analysis of the piece as represented in EAnalysis. There are seven sections (including an extended coda), the majority of which contain codas themselves (usually long “reverb tails” that meld sections together). 

[image]

Once I have tagged all of the sounds and am happy with my formal analysis, I will view the piece from several different analytical angles, most likely drawing on Simon Emmerson’s work. I’m excited to represent the tagged data graphically and see whether any trends emerge visually that I haven’t perceived aurally. I’m also interested in exploring why this piece impacts me so much, as a listener and composer.
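
As a sketch of the kind of graphic I have in mind, here is a minimal Python/matplotlib timeline of tagged events, one row per sound type; the event list is again invented placeholder data.

```python
import matplotlib.pyplot as plt

# Placeholder event data, as in the earlier sketch.
events = [
    (0.0, "mac"), (2.3, "env"), (2.3, "hum"),
    (5.1, "mac"), (7.8, "env"),
]

rows = {"hum": 0, "mac": 1, "env": 2}     # one timeline row per sound type
plt.scatter([t for t, _ in events],
            [rows[tag] for _, tag in events],
            marker="|", s=400)
plt.yticks(list(rows.values()), list(rows.keys()))
plt.xlabel("time (s)")
plt.title("Tagged sound events over time")
plt.show()
```

More to follow.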

Ring | Axle | Gear

For my senior recital at Oberlin I created a triptych video art piece titled Ring | Axle | Gear. Each section of the piece is around a minute and a half long and explores various ways in which the shapes ring, axle (line), and gear can be manipulated, accompanied tightly by sound design that uses a wide variety of synthesized and real-world sounds. The piece was created using Adobe After Effects, several Trapcode effects from Red Giant Software, and Vade’s v002 plugins for the visuals; and Pro Tools, SoundHack, and MaxMSP for the audio.

Ring

I created the piece by first working a bit on the video, then matching that with the sound design, extending the audio a bit, matching that with the video, and so forth. The piece was an exploration of animation for me, and the first time I had created animation in a timeline environment (as opposed to the text-based environment of Processing or the visual, object-based environment of Jitter). The animation explores 2- vs. 3-dimensional space, transitioning violently or smoothly between them, and at times settling into a kind of 2.5-dimensional world. I also explored “glitches” in the video software I was using: artifacts that come from rotating an object faster than it was intended to be rotated, or from using over-saturation and video feedback to expand the color palette chaotically.

Axle

The audio was made using real world sounds that have been manipulated so much as to be almost completely unrecognizable or, on the other hand, audio that is completely unaffected (such as the metallic sounds in Axle). I also utilized “authentic” glitchy sounds, sounds that were the result of computational accidents, primarily corrupted audio files.

Gear

I’m interested in exploring animation in more depth in the future. My first goal is to bring what I’m doing more into the 21st century: a great deal of Ring | Axle | Gear feels trapped stylistically between analog and early digital video of the 1970s/80s and more modern video (which creatively utilizes particles, 3D models, accurate shading and depth of field, and other techniques that fast computers allow). My second goal is to explore medium mapping within fixed video: many video applications (including After Effects) allow parameters of the video to be controlled by audio heuristics, e.g. the volume of the low frequencies of an audio file causing a video object to jump or deform. This technique would add more depth to the interaction between the video and the sound than can be achieved with “by hand” sound design.
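
To illustrate the audio-heuristic idea (this is my own sketch, not After Effects’ actual implementation), here is a Python fragment that extracts one low-frequency energy value per video frame; the cutoff frequency and final mapping are placeholder choices.

```python
import numpy as np

def low_freq_envelope(audio: np.ndarray, sr: int, fps: int = 30,
                      cutoff_hz: float = 200.0) -> np.ndarray:
    """Return one low-band energy value per video frame, normalized 0..1."""
    hop = sr // fps                        # audio samples per video frame
    values = []
    for start in range(0, len(audio) - hop, hop):
        chunk = audio[start:start + hop]
        spectrum = np.fft.rfft(chunk)
        freqs = np.fft.rfftfreq(len(chunk), 1 / sr)
        low = spectrum[freqs < cutoff_hz]  # keep only the low bins
        values.append(np.sqrt(np.mean(np.abs(low) ** 2)))
    env = np.array(values)
    return env / env.max()

# e.g. (hypothetical mapping): object_y = rest_y - 50 * low_freq_envelope(audio, 44100)
```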

Peak for solo percussion

I recently completed (a first draft of) a solo percussion piece titled Peak for Judith Shatin’s seminar class. The instrumentation of the piece is below:

[image]

The performance notes read:

This piece explores arm independence and the smooth transference of energy between multiple instruments within a gesture. Care should be taken to make these transfers as smooth and convincing as possible. More explicitly, the individual continuity of multiple simultaneous lines (played by the separate hands) is more important than absolute precision of the collective rhythm.

There should be a sense of energy continuing through each gesture, particularly within silences (i.e. each note should lead to the next). Likewise, the energy level of each section should feel relative to the sections preceding and following it. Total running time ranges from 6 to 7 minutes, depending on chosen tempo and the lengths of fermatas.

This piece consists of five sections. First is the introduction, which establishes the conceptual gesture used throughout the piece: a rhythm played on a single instrument that rises, peaks, and falls in volume, and the ways it can relate to another, similar “peak” on a different instrument. The glockenspiel is not used in the introduction.

Here is the piece’s key:

[image]

And the “Introduction” of Peak:

[image]

Next is a section with material that spawned from an earlier, simpler (and more linear) idea for the piece, “complexified” in this form through substitutions and rearrangement. The premise is the following: the “peak” gestures are played on the drums, dovetailed into one another, with a glockenspiel gesture punctuating the “top” of each “peak”. In the first realization of this concept, each series of 6 peaks (grouped into 3 + 3) was rhythmically grounded in one subdivision of the piece’s tempo (quarter note, eighth note, etc.), and as this subdivision got smaller the glockenspiel gesture would contain more pitches (1, then 2, then 4, etc.). Further, these pitches were chords built on top of a simple descending line (C, B, Bb, Ab, G, Gb).
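
Here is a small Python sketch of that first realization; the assignment of chord roots to individual peaks is one plausible reading of the scheme, and the labels are simplified.

```python
roots = ["C", "B", "Bb", "Ab", "G", "Gb"]          # the descending line
subdivisions = ["quarter", "eighth", "sixteenth"]  # one per series of 6 peaks

for level, subdivision in enumerate(subdivisions):
    chord_size = 2 ** level                        # 1, 2, 4 pitches per gesture
    for peak, root in enumerate(roots, start=1):
        print(f"series {level + 1}, peak {peak}: {subdivision}-note peaks, "
              f"{chord_size}-note glockenspiel chord on {root}")
```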

The section as it stands now varies the subdivision (and the corresponding number of notes in the glockenspiel gesture) with each “peak”, increasing in intensity over the whole of the second section. The speed within each gesture also becomes more varied as the section progresses (starting with static quarter notes but progressing to gestures that begin with half notes and accelerate to triplet eighths, for example).

Next is the meta “peak” and plateau of the piece, a section that explores playing the tambourine by shaking it (along with the shaker) and also interjects a climactic gesture in the glockenspiel: an A5 (a pitch that does not appear in the previous section) that crescendos and accelerates, then diminuendos and slows, representing a bleeding of the “peak” gesture concept into that instrument and clearly separating the piece into two halves.

The fourth section returns to the texture of the second section, except that the melody is ascending and modified (F, G, A, Bb, C, D) and the subdivision generally gets larger as the section progresses. New subdivisions are explored (the dotted eighth, for example), and the vocabulary for the relationship between consecutive “peak” gestures is expanded from that of the second section.

Finally, the piece ends with an “epilogue” that briefly explores the delay between two “peak” gestures before ending monophonically on the muted snare and then the muted low tom.

“Epilogue” of Peak

[image]

The range between exposed compositional transformations (the shift from static to dynamically changing subdivisions, for example) and hidden systems at work (the glockenspiel pitch structure) is large in this piece, and I’m curious to see how this comes across in performance. I will experiment with this more (or less, if it ends up being unsuccessful) in the future.

Intersection of Computation and Visuals

I wanted to quickly discuss a few programmers/artists who are doing work that explores the intersection of computation and visual art. These people have inspired me to create video art and interactive visual systems, to think about algorithmic/procedural composition tools in interesting ways, and also to rethink what a “gallery space” is in the 21st century.

First is Jared Tarbell’s Complexification | Gallery of Computation.

Tarbell states:

I write computer programs to create graphic images. 

With an algorithmic goal in mind, I manipulate the work by finely crafting the semantics of each program. Specific results are pursued, although occasionally surprising discoveries are made.

The images, interactive visual systems, and code that Tarbell creates are beautifully concise and free of any artistic “fat” (put another way, artistic choices are backgrounded to facilitate a deeper appreciation of the processes and algorithms at work, e.g. by using a neutral, muted, black-and-white, or rainbow color palette). This choice, depending on your viewpoint, may make these works more or less engaging. A great deal of the work is done in Processing. For more work by Tarbell (albeit a bit older), check out the website of his company, Levitated.

Next is Mario Klingemann’s (AKA “Quasimondo’s”) Incubator.

This gallery has a huge range, from utilitarian to silly to artistic. The level of interactivity also varies, from programs that tap into the user’s webcam to smartly programmed but non-interactive algorithms. These programs are primarily created with Adobe Flash or Processing. Some of the more interesting programs are Feedback, Pinupticon, and Cityscapes. More works by Klingemann can be found on his tumblr.

Last is Casey Reas’s website.

This website acts as a gallery for the huge number of works Reas has created over the past 10 years. Reas is the co-creator (along with Ben Fry) of Processing, and has acted as a catalyst for many changes in data visualization and internet typography design over the last 15 years. As with Tarbell’s work, these projects generally lack artistic “fat” or silliness, I feel, which can be viewed as positive or negative. Regardless, they are impressively done and computationally beautiful. To get a look at some of the work that Reas and Fry have facilitated through Processing, check out the Processing Exhibition.

The creation of visualizations of algorithms, interactive visual programs, and more generally art that incorporates computation and visual elements has developed extensively in the past 30 years. Different forms of abstraction have extended the creation of visuals into the computational domain (through experiments by artists and programmers) and extended computation to the creation of visuals in new ways (through more accessible programming tools such as Processing, Flash, and others).

Programs that utilize live streams of data (whether mined from the internet, from a controller, or from video cameras) are of particular interest to me, as real-time video manipulation is catching up with the real-time capabilities that audio attained in the ’90s as computer speeds increased.

State Change for solo flute

Switching gears to an entirely acoustic piece, I recently finished composing a piece for solo flute titled State Change. The program notes read:

Many substances exist in different phases of matter, or “states”. When external energy is applied, solids become liquids become gases, and upon the cessation of that outside energy a gas becomes a liquid becomes a solid. Notably, during some state transitions properties of the substance change discontinuously, resulting in abrupt changes in the volume or mass of the substance.

This piece explores the sonic analogy of state changes, modifying the parameters of the sound of the flute to transition between different textures (“states”).

My process for composing this piece was a new one for me. Taking inspiration from advice from Tom Lopez (who mentioned that it was a compositional technique used by Morton Subotnick), I first created a “Parameter Map”, which plots parameters such as pitch, volume, and “airiness” over time, with the time dimension striated into 24 discrete segments. The solid black areas correspond to each parameter’s value over time.

State Change Parameter Map

[image]

I then took the parameters at the start of each of the 24 segments and represented them as the first note in each of 24 measures in a score with corresponding pitch, note length, volume, etc. (e.g. D4, quarter note, mezzoforte). I call this document a “Notation Template”.

State Change Notation Template

[image]
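
Here is a toy version of that sampling step; the parameter curves below are invented stand-ins for the hand-drawn map.

```python
NUM_SEGMENTS = 24

def parameter_map(t: float) -> dict:
    """Placeholder parameter curves over normalized time t in [0, 1);
    the real map was drawn by hand, not computed."""
    pitches = ["D4", "E4", "F4", "G4", "A4"]
    volumes = ["pp", "p", "mp", "mf", "f"]
    i = int(t * len(pitches))
    return {"pitch": pitches[i], "volume": volumes[i], "airiness": round(t, 2)}

# Sample each parameter at the start of each segment: one seed note per measure.
template = [(seg + 1, parameter_map(seg / NUM_SEGMENTS))
            for seg in range(NUM_SEGMENTS)]

for measure, params in template[:3]:
    print(measure, params)
```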

From here, my compositional goal was to “fill in” the measures, making sure that transitions between different parametric values (molto vibrato to no vibrato, for example) were smooth and interesting. This part of the process was the most intriguing to me. It gave me freedom to create interesting gestures without having to worry about the overall trajectory: at every point in the process I knew the goal of the gestures I was creating and what trajectories spawned them. Put more colloquially, I always knew where I was coming from and where I was going, thanks to the notation template and parameter map.

Ultimately, I ended up modifying some of the timings and parametric values, but the process helped me to create a piece and formal structure that I otherwise would not have created. My favorite moments in the piece are those that mirror the “abrupt changes in… volume or mass” alluded to in the program notes: times when the changing parameters briefly interact to create a new texture before diverging.

State Change Final Score

[image]

I hope to continue exploring this and other experimental compositional techniques in future pieces.

Medium Mapping

Since 2009, I’ve been interested in exploring medium mapping, that is, using data/gestures from one medium (motion, for example) to generate gestures in, or modify parameters of, another medium (audio, for example).

This interest first took shape in a piece created for Per Bloland’s Advanced DSP class at Oberlin Conservatory titled Motion-Influenced Composition. The piece used Jean-Marc Pelletier’s OpenCV objects for MaxMSP to parse gestures and scene changes out of a video camera’s feed, which were then used to generate synthesized sounds in real time. Shortly afterwards, I extended the piece to include a video component created in Jitter (mapping the audio medium onto video). A video of a performance can be found here.
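
For readers who don’t use MaxMSP, here is a rough Python/OpenCV analogue of the patch’s front end; it is my reconstruction of the general idea, not the original code. Frame differencing yields a motion-energy value that could then drive synthesis parameters.

```python
import cv2

cap = cv2.VideoCapture(0)                  # default camera
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

for _ in range(300):                       # ~10 seconds at 30 fps
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)         # pixel-wise change since last frame
    motion = diff.mean() / 255.0           # 0..1 motion energy for the frame
    prev = gray
    print(f"motion energy: {motion:.3f}")  # would be sent on to the synth engine

cap.release()
```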

(To digress for a moment, the video component ended up being very multipurpose: it was a stereo FFT spectrogram that colored different frequencies based on their timbre and had a lot of customizable parameters. A video of it in use can be seen here.)

I remained interested in medium mapping in the years that followed, and did a good deal of work in film sound design (which could fall under the audio-mapped-to-video category, or vice versa). This past summer, as an artist in residence at the Atlantic Center for the Arts, I revisited the idea from a real-time perspective and created the piece Transference, a recasting and refinement of some of the ideas present in Motion-Influenced Composition.

Transference again uses the OpenCV MaxMSP objects to extract gestures and other information from a video camera, but uses Processing to create the video. A number of other differences exist: the sounds used in Transference are not synthesized (as they were in Motion-Influenced Composition) but are instead samples of voices, and the video component is 3-dimensional rather than 2-D. Because the sound material is real-world and the video is not a direct representation of the audio (i.e., not a spectrogram), there is a great deal more abstraction in the medium mapping of Transference than in Motion-Influenced Composition. Playing with the abstraction between mapped mediums is fascinating to me, and I hope to explore it more in the future. A video of a performance of Transference can be found at 40:32 in this video.

I have also written a draft of a scholarly paper on Transference and Motion-Influenced Composition that can be seen below.

Motion-Influenced Composition and Transference: Performance Experiments in Medium Mapping

First Post & "Industrial Revelations" Analysis

I have tried in the past to maintain a blog and failed. My goal with this blog is to post something interesting every day, be it of my own work, just an idea, or a link to some other artist’s/group’s work. 

The first project I’m going to discuss is an ongoing analytical paper, written for Ted Coffey’s seminar class here at the University of Virginia, on Natasha Barrett’s “Industrial Revelations”, an eleven-and-a-half-minute electroacoustic composition that is the last cut on her 2002 album Isostasie.

I have been fascinated by Barrett’s work for many years (hearing her piece Racing Through, Racing Unseen on Miniatures Concrètes is what got me interested in acousmatic-style electronic music).

My goal in the analysis is to explore how the piece traverses the areas between the sounds of machines (predominantly trains in this piece), human sounds (voice, residual sounds of human actions), and the sounds of their respective (and at times overlapping) environments (organic, textural sounds).

I am using Pierre Couprie’s EAnalysis program to analyze the structure and source material of the piece. Here’s a screen shot of the analysis in progress:

[image]

Another plan is to transcribe the last minute of the piece into traditional staff notation. I feel that pitched material in acousmatic music is rarely analyzed melodically or harmonically, and the last minute of this piece is a particularly good candidate for such analysis: spatialization is barely present and all of the sounds have focused pitch centers.

I will be writing the paper through the rest of November.