Memory, Decay, & Activism: William Basinski’s The Disintegration Loops

This semester, as part of Matthew Burtner’s “Musical Materials of Activism” class, I wrote a short analysis paper on William Basinski’s 2002 work The Disintegration Loops.

The Disintegration Loops consists of two pieces, d|p 1.1 and d|p 2.1, created by playing tape loops made by Basinski on tape players over extended periods of time. As the aging loops passed repeatedly over the tape heads, their magnetic coating gradually flaked away and the loops disintegrated; the result of this process was recorded onto a CD recorder. The program notes of The Disintegration Loops read “This music is dedicated to the memory of those who perished as a result of the atrocities of September 11th, 2001, and to my dear Uncle Shelley.”

In the paper I view The Disintegration Loops through many different lenses, including tape loop music, musical re-purposing, auto-destructive art, and elegiac music. I start by analyzing the sonic content of d|p 1.1: the two melodic voices it is built from and the additive and subtractive effects of disintegration. I then compare it to other tape music works by Steve Reich and Brian Eno. Next I place it within the context of auto-destructive art (after Gustav Metzger) and juxtapose it with Alvin Lucier’s I Am Sitting in a Room and the glitch music of Oval. Lastly I contrast it with Penderecki’s Threnody to the Victims of Hiroshima and Adams’ On the Transmigration of Souls, elegies that I believe draw primarily on collective memory, whereas The Disintegration Loops draws on personal memory.

Ultimately, in The Disintegration Loops, and specifically in d|p 1.1, Basinski has created a work of art in which not only the characteristics of the work but also its medium of production (the recording of tape loops disintegrating as they play) and its context of production are born from the catastrophic event it references. In other words, Basinski’s personal experience of the destruction of the World Trade Center, a seemingly immovable marvel of technology reduced to rubble, bled into the composer’s practice, producing not only a new work but a new technique, custom-made for the composer’s experience of the catastrophe. This modeling of the catastrophe, and the subsequent capturing of disintegration, gave the composer control over a disintegrative process at a time when the real-world disintegration happening around him was completely out of his control. This intense relationship between composer, event, and artwork suggests that The Disintegration Loops can do more than help Basinski through his personal memories: the coping effect the work had for its composer could also extend to collective memory, to the rest of humanity affected by the catastrophe from which it was born.

Peruse the full paper below.

Path

The New York-based, “lung-powered music” ensemble loadbang was in residency for several days at the University of Virginia this year, where they performed my work “Path” for ensemble and live electronics.

The instrumentation of the ensemble is unique: high baritone voice, trumpet, trombone, and bass clarinet. I decided early on to treat the voice as another instrument, that is, not to divide the group into a solo voice with instrumental accompaniment. To reinforce this, the singer uses no text and instead sings different vowels for timbral variety (mimicking the timbral variety introduced in the brass through different kinds of muting). I also decided to make the material of the piece very simple: diatonic pitch collections in Ab and D. This allowed me to focus on texture and form.

The resulting piece is meditative and moody, switching from sections of resonant drone to chaotic, improvised textures and back again. The electronics incorporate electronic drones and pastoral recordings made on the East Coast.

Words & Music

This semester I collaborated with three creative writing MFA students at the University of Virginia to create three new multimedia works based on and incorporating poetry they wrote. The pieces were presented at the Second Street Gallery in Charlottesville as part of the 2015 Tom Tom Founders Festival.

The first piece, “For My Brother”, was created in collaboration with Courtney Flerlage for fixed media:

The process began with creating the first section without Courtney’s voice, to get a sense of the kinds of textures and overall mood that meshed with both of our visions for the work. I then recorded Courtney reading the poem (both in a normal speaking voice and whispered). The voice was then chopped up, manipulated, and accompanied with materials that “painted” the text (e.g. “falling” in the text -> some musical concept of falling in the music). Lastly, pitched material (violin samples and a manipulated train whistle) was added to tie the sections together timbrally.

The second piece, “BLUR”, was created in collaboration with Caitlin Neely for video art and live reading:

Creating video art for text was a new venture for me. I have done sound design for film and made video art for live music in the past, but actually creating visuals to accompany words was new. I ended up creating a set of visuals that I mentally tied to parts of the text, then arranged them in time so that enough synchrony was present for the audience to pair image and word in a meaningful way. I then went back through and added simple, descriptive sound cues to flesh out the texture.

The last piece, “Singing Saw”, was created in collaboration with Matthew MacFarland, for live electronics and live reading:

Because this piece centers on a musical saw, the first step was, of course, to record the musical saw itself. Along with these recordings I also recorded guitar samples and a variety of foley sounds (apples falling, rustling leaves, footsteps, etc.) to accompany the reading of the text. I used the foley and other non-musical sounds to create a sense of sections within the work, and instrumental samples to make the sections cohesive overall. Because the sounds constantly accompany the storytelling, the work could be classified as “Cinema for the Ears”.

Collaborating with poets was wonderful. Being able to dive into the musical world of a poem hidden beneath the text and bring it to life was a great deal of fun and work, and I look forward to doing it again.

Sound Vision

In 2009 I began work on a music visualizer made in the MaxMSP/Jitter programming environment. I recently updated this software, so I thought I’d make a post about it.

Sound Vision includes two visualization types: a stereo FFT visualizer and a Bark scale visualizer. Each has a variety of user-alterable parameters that modify both how the visualizers handle musical input (e.g. pre-gain) and how they render their visual output (e.g. video feedback).
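For a sense of what the Bark scale analysis involves, here is a minimal sketch in Python/numpy. It is not the MaxMSP/Jitter implementation, and the band mapping (Traunmüller’s approximation over 24 bands) is an assumption based on the standard Bark scale; it simply reduces one windowed FFT frame to per-band energies of the kind a visualizer could map to graphics.

    # A minimal sketch (not the MaxMSP/Jitter patch) of reducing an FFT frame
    # to Bark-scale band energies for visualization.
    import numpy as np

    def hz_to_bark(f_hz):
        # Traunmüller's approximation of the Bark scale.
        return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

    def bark_band_energies(frame, sample_rate, n_bands=24):
        """Return the energy in each Bark band for one audio frame."""
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
        barks = hz_to_bark(freqs)
        energies = np.zeros(n_bands)
        for band in range(n_bands):
            mask = (barks >= band) & (barks < band + 1)
            energies[band] = spectrum[mask].sum()
        return energies

    # Example: band energies for a 440 Hz tone, one 1024-sample frame at 44.1 kHz.
    sr = 44100
    t = np.arange(1024) / sr
    print(bark_band_energies(np.sin(2 * np.pi * 440 * t), sr))

In the patch itself, an analysis along these lines runs continuously on the incoming audio and drives the visual parameters.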

Here is a video that demonstrates the interface and visualization of Sound Vision:

The purpose of this software is to aid in the analysis of electroacoustic music, which has characteristics (dynamic spatialization and stark timbre changes, for example) not found in the musics commonly analyzed with other visualizers. In addition, the software can be used to visualize music for entertainment or accompaniment purposes (it has been used in concert before).

Lastly, I’m including a paper I recently wrote that describes the Bark scale visualization module in detail.

Sound Vision 2.0: Bark Scale Visualizer

Brief Thoughts on Telematic Art

Last night I was an audience member at ZeroSpace, a conference on “distance and interactions”. From the conference’s webpage:

The events of ZeroSpace explore the theme of distance and interaction, examining how humans interact with one another and with our environment through new technologies.

As far as I can tell, telematic art is performance that utilizes telecommunications (using Skype or a similar service to connect two or more performance venues with simultaneous live video and audio feeds, for example), or, more broadly, art that explores presence, distance, and space. I was first exposed to telematic art five years ago through Scott Deal at the Summer Institute for Contemporary Performance Practice at New England Conservatory, specifically through a presentation about his ensemble Big Robot, a trio that frequently incorporates telecommunications technology into its performances. When I first learned about telematic performance I was skeptical: what is gained by performing with someone miles away over an audio and/or video feed when the alternative, having all performers in the same space, seems much more satisfying? The ZeroSpace concert made me think more deeply about this question and broadened my definition of telematic art.

In his introduction to the conference, Matthew Burtner mentioned Beethoven’s Leonore Overture, an example of an acoustic work that uses an offstage instrument (here are many more). Such pieces use distance artistically, specifically the physical removal of instruments from the performance space and the resulting muffled, disembodied sound; they are examples of historical, “low-tech” telematic performance. Another work that redefined telematic art for me was Erik Spangler’s Cantata for a Loop Trail. The piece takes place along the length of a looping trail in an outdoor park, with Spangler as guide to a group of audience members. Performers and music-making devices are scattered along the trail, coloring (and aiming to enhance) the experience of hiking in the natural setting through sound. While not using telecommunications technology or distance per se, this performance engages with space, and with the audience physically moving through space, as a compositional tool: the musical content of the piece is a function of when and where the audience is at a given time.

Two uses of telecommunications stood out to me in last night’s performance and research forum. First, Charles Nichols, a professor at Virginia Tech, was Skyped in from his office during the research forum to give a presentation on his work. Although not related to art, this use of telecommunications in the context of the conference made me realize how pervasive telematic performance is. The Super Bowl halftime show and other live-streamed performances, any musical sounds heard over the telephone, and performance art over live webcams are all examples of telematic performance. If the live aspect of telematic art is dropped, all recordings and videos of performances become telematic art (in this case simply mediated by technology, not in real time). Second, the “Virginia Tech/UVA Handshake Improvisation” on the concert involved three instrumentalists in Charlottesville and two in Blacksburg improvising with one another. Because I was in the back of the room I had a poor line of sight to the local performers, so at times I could not tell whether a sound was local or remote, even though the local sounds emanated from instruments and the remote sounds came exclusively from the speakers. Because of the local/remote dichotomy of the performers, the piece became something more than if the same five instrumentalists had simply created the same sounds in one room. It was sound and listening spanning miles to create an improvised piece of sonic art.

I am interested in learning more about this emerging and developing form of technology-mediated performance and art-making, and hope to see more successful uses of it in the future.

Integration of Limits Dance Piece

I had a piece titled Integration of Limits on the 2014 University of Virginia Fall Experimental Dance Concert, created in collaboration with the Electronic Identity and Embodied Technology Atelier class and made possible by a grant from The Jefferson Trust.

The piece involved seven dancers, three separate groups of choreographers (one for each of the piece’s three sections), and video projection created using the Motive motion capture software. It explored the relationship between dancers and their embodiment in digital form, featuring video versions of a dancer accompanying the ensemble, manipulated motion-tracked movement, and duets with a video-projected dancer.

For the music I was asked to create something that alluded to the digital nature of the motion-tracked movement, so I decided to use simple waves and repetitive “glitch” sounds (the same ones used in my video piece Ring | Axle | Gear), drawing heavily on the work of Ryoji Ikeda. I triggered the cues for the piece using a MaxMSP program I made that handles fade-outs, looping, etc., seen below.

[Image: the MaxMSP cue-triggering patch]

The three sections of the piece each begin with a “calibration” sound that I created by convolving simple waves with short rain recordings, followed by a canon in which each of the seven dancers comes downstage and performs a combination. I accompanied the canon with seven triggered sound files, each of which expands the range of the texture by ±1.5 semitones. As each dancer comes downstage, their movement is jerkier, at a lower “bit depth” than the last dancer’s, and I represented this in the music through increasingly rapid clicking sounds and lower-fidelity audio settings.
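To make those details concrete, here is a rough sketch in Python. It is my own after-the-fact reconstruction, not the original session material: the decaying noise burst merely stands in for the rain recordings, and the cumulative widening of the range across the seven cues is my reading of the design.

    # A rough sketch of the "calibration" sound and the range expansion described above.
    import numpy as np
    from scipy.signal import fftconvolve

    sr = 44100

    # Simple wave: a plain sine tone.
    t = np.arange(int(0.5 * sr)) / sr
    tone = np.sin(2 * np.pi * 220 * t)

    # Stand-in for a short rain recording (a decaying noise burst); the piece used real recordings.
    n = int(0.2 * sr)
    rain = np.random.randn(n) * np.exp(-np.linspace(0.0, 8.0, n))

    calibration = fftconvolve(tone, rain)
    calibration /= np.abs(calibration).max()  # normalize

    # Range expansion, assuming each of the 7 cues widens the texture
    # by a further 1.5 semitones in each direction.
    for k in range(1, 8):
        up, down = 2 ** (1.5 * k / 12), 2 ** (-1.5 * k / 12)
        print(f"cue {k}: pitch ratios {down:.3f} .. {up:.3f}")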

After this introduction, each section diverges to a new texture. The first section consists of a musical phrase built from gated triangle waves that repeats over and over, speeding up (varispeed at +2 semitones) and distorting until it reaches a frenzy, matching the acceleration of the dancers and of their onscreen digital representations. The second section introduces short, abstracted snippets of a recording of a Strauss waltz, along with a short beeping sound and sidechain-compressed clicking sounds. The third section reveals the Strauss waltz, slowed 8x and recreated using a bank of sine waves, removing it from its acoustic, orchestral context and placing it in the simple-wave context of the previous two sections. The gated triangle waves, slowed, end the piece, pulsing and losing steam until nothingness. Video of the performance below:

Trained/"Trane'd" Music Improvisation Generator

As part of Alexa Sharp’s Artificial Intelligence class at Oberlin, Nick Towbin-Jones and I created a program that uses Markov models to generate jazz improvisations over chords in the “style” of an artist. We focused on the solos of John Coltrane, but solos by any single-line instrument (flute, trumpet, etc.) could be modeled.

The program takes in a transcription of a solo in MIDI format, a specially formatted list of chords (e.g. “Dm 4, Ad 2” for four beats of a D minor chord followed by two beats of an A diminished 7th), and an “order” parameter, which determines roughly how many notes in the solo will be treated as a unified “gesture” (our default was 3). The content of the solo is assumed to correspond exactly to the chords in the chord list (i.e. a solo played over a jazz standard).

The program then builds a Markov model from 1) consecutive chords and 2) groups of notes and the note that follows them (e.g. if the order is 3, then “C D E F” would be parsed as “C D E” -> “F”; that is, “C D E” would be grouped as one gesture that is followed by “F”).

[Image: diagram of the Markov model built from note gestures and chords]
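Our class code isn’t reproduced here, but the training step boils down to something like the following Python sketch: parse the chord format described above and map each order-length note gesture to the notes observed to follow it (MIDI file parsing is omitted; notes are plain MIDI note numbers).

    # A simplified sketch of the training step: an order-n note model plus a chord model.
    from collections import defaultdict

    def parse_chords(chord_string):
        """Parse a list like "Dm 4, Ad 2" into [("Dm", 4), ("Ad", 2)] (chord, beats)."""
        out = []
        for token in chord_string.split(","):
            name, beats = token.split()
            out.append((name, int(beats)))
        return out

    def build_note_model(notes, order=3):
        """Map each tuple of `order` consecutive notes to the notes that follow it."""
        model = defaultdict(list)
        for i in range(len(notes) - order):
            gesture = tuple(notes[i:i + order])
            model[gesture].append(notes[i + order])
        return model

    def build_chord_model(chords):
        """Map each chord to the chords observed to follow it."""
        model = defaultdict(list)
        for (a, _), (b, _) in zip(chords, chords[1:]):
            model[a].append(b)
        return model

    # Tiny example with MIDI note numbers instead of a full Coltrane transcription.
    solo = [62, 64, 65, 67, 65, 64, 62, 60, 62]
    print(build_note_model(solo, order=3))
    print(build_chord_model(parse_chords("Dm 4, Ad 2, Dm 4")))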

Once the model has been built, the program takes in another chord list (most likely the same chord list used in the original input), parses it, and probabilistically generates notes to “fill” the chords based on the trained model. The result is a “new” solo over the chords that shares many characteristics of (the same “style” as) the solo the model was trained on.

[Image: excerpt of a generated solo]
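The generation step can be sketched along the same lines. This simplified version drops the chord conditioning (the real program chose notes chord by chord) and simply walks the note model, re-seeding whenever it reaches a gesture it never saw during training.

    # A sketch of generation: sample the next note from the trained note model.
    import random

    def generate_solo(note_model, n_notes, order=3):
        gesture = random.choice(list(note_model))      # start from a known gesture
        output = list(gesture)
        for _ in range(n_notes):
            if gesture not in note_model:              # unseen context: re-seed
                gesture = random.choice(list(note_model))
            output.append(random.choice(note_model[gesture]))  # weighted by observed counts
            gesture = tuple(output[-order:])
        return output

    # Tiny hand-built model in the same {gesture: [next notes]} shape as the training sketch.
    model = {(62, 64, 65): [67, 64], (64, 65, 67): [65], (65, 67, 65): [64]}
    print(generate_solo(model, n_notes=12))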

As part of the class we tested the output of our program on a group of people to determine whether they could tell the difference between the “real” and the generated solos. The result, not unexpectedly, was that they could, and more specifically that they were more accurate with longer examples than with shorter ones. This suggests that while our program was somewhat able to reproduce “style” on a small scale, over longer phrases it didn’t have enough “memory” to create convincing musical gestures. Regardless, this system was very interesting to create and resulted in some interesting programming ideas and very strange computer-mediated improvisations.

You can read more in the full report below.

Life at International Sound Art Festival Berlin 2012

In 2012 I had a minute-long piece titled “Life” selected for the 60x60 Voice Mix, part of the longstanding 60x60 project created by Robert Voisey. The Mix was played at the International Sound Art Festival Berlin 2012 in the Mitte Museum.

The piece’s source material is solely a recording of me saying “life”, which you can hear below.

I then took that sample and stretched it in various ways using PaulStretch and Pro Tools’ native time-stretching algorithms.

I then divided these stretched samples in a variety of ways: over time (separating the “luh”, “eye”, and “fuh” phonemes, for example) and over frequency (separating the low, voiced sounds from the breathy, noisy sibilants). I then used a variety of effects and techniques, including granular synthesis, distortion, and pitch-shifting, to create different textures. I sculpted and organized these textures into the final, minute-long piece that was heard at the festival, below.
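As an illustration of the frequency split, here is a small sketch using Python and scipy; this is an assumption about tooling (the actual edits were done in Pro Tools and other studio tools), showing only the idea of separating a signal into a low, voiced band and a high, sibilant band around a chosen crossover.

    # A sketch of splitting a mono signal into low (voiced) and high (sibilant) bands.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def split_bands(audio, sample_rate, crossover_hz=1000.0):
        """Return (low_band, high_band) versions of a mono signal."""
        low_sos = butter(4, crossover_hz, btype="lowpass", fs=sample_rate, output="sos")
        high_sos = butter(4, crossover_hz, btype="highpass", fs=sample_rate, output="sos")
        return sosfiltfilt(low_sos, audio), sosfiltfilt(high_sos, audio)

    # Example on a synthetic stand-in for the "life" recording: a tone plus noisy "sibilance".
    sr = 44100
    t = np.arange(sr) / sr
    voiced = np.sin(2 * np.pi * 150 * t)
    sibilant = 0.3 * np.random.randn(sr)
    low, high = split_bands(voiced + sibilant, sr)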

Sound & Video Collections

There are a number of really interesting and expansive collections of sounds, musical works, and videos on the internet, created by individuals over some prescribed period or in some prescribed quantity (365 days or 100 sounds, for example), or created by many artists under a common theme. I will outline three here.

First is Joo Won Park’s 100 Strange Sounds, a quirky collection of short videos focused on the performance of “found sounds” manipulated by computer processing. For each video Park includes a short list of “Materials Used” and a descriptive “How It Was Made” paragraph. The videos vary wildly in content, although Park points to “pieces with a similar sonic approach”, “complementary entries”, etc. at the end of many of them. Some of the most popular entries are No. 47, No. 3, and No. 37.

Second is a collection of Lumiere Videos inspired by early French filmmaker Louis Lumiere, created by Andreas Haugstrup Pedersen & Brittany Shoot. The Lumiere Videos have a manifesto that includes guidelines for creating the short videos:

• 60 Seconds Max.
• Fixed Camera
• No audio
• No Zoom
• No editing
• No effects

There is huge variety in the places, people, and scenes depicted, ranging from boats in a harbor, a point-of-view shot of an escalator, and an artist finishing some works, to a cityscape. I learned about this project by repeatedly running into these videos while looking for footage on archive.org, a great resource for copyright-free video. Because of the variety of their subjects, these videos turn up in many searches, creating an interesting contrast with videos full of multiple camera angles, quick editing, and blaring sound.

Last is a project by Joshua Goldsmith titled 365 Days of Sound. Each sound is 30 seconds long, and the sounds range from synthesizer and foley sounds to instrumental and environmental recordings. Goldsmith also uses Pure Data to process some sounds. This is the project I know least, but looking through it I have found some interesting sounds.

These personal or community-based media collections are fascinating examples of projects by people with an interest in audio and video, and they reveal some of the less visible aspects of creating music or video: the experimentation, inspiration, and unpolished work that goes into making media.