Trio

This past year I was commissioned by Ben Roidl-Ward and Tim Daniels, a fantastic bassoonist and oboist, respectively, to create a piece for oboe, bassoon, percussion, and live electronics. After many rewrites I ended up with a short, five-section piece that moves through a variety of textures and instrumental roles.


The first section begins in chaos: each performer rapidly plays improvised notes spanning their entire range while the electronics sample and randomly repeat segments of this texture. Over the course of a minute, the range of the notes collapses toward the middle and the notes grow longer (with the percussion “lengthening” notes through trills).
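
For the curious, here is a rough sketch of the “sample and randomly repeat” idea in Python/NumPy. The actual electronics run as a live patch, so the buffer, slice lengths, and repeat count below are illustrative assumptions rather than the settings used in the piece:

```python
import numpy as np

SR = 44100  # sample rate in Hz (assumed)

def stutter(buffer, n_repeats=8, rng=None):
    """Pick a random slice of recently captured audio and tile it,
    mimicking the 'sample and randomly repeat' behavior described above.
    Slice length (50-400 ms) and position are chosen at random."""
    rng = rng or np.random.default_rng()
    length = int(rng.uniform(0.05, 0.4) * SR)            # 50-400 ms slice
    start = int(rng.integers(0, max(1, len(buffer) - length)))
    segment = buffer[start:start + length].copy()
    # Short fades at the slice boundaries so the tiled repeats don't click.
    fade = np.linspace(0.0, 1.0, 256)
    segment[:256] *= fade
    segment[-256:] *= fade[::-1]
    return np.tile(segment, n_repeats)

# Example: stutter one second of noise standing in for the live input.
live_input = np.random.uniform(-1.0, 1.0, SR)
repeated = stutter(live_input)
```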


The next section begins in unison, with long tones in the middle range accented by the percussion. The notes speed up and the percussion starts filling in the space within and between them, leading to the third section, a percussion solo that reiterates and develops gestures heard earlier in the piece.


The percussion solo loses steam and settles into an ostinato pattern. The winds enter with improvised, soloistic material over the ostinato, which is developed and ornamented. Finally, the winds reach the peak and trough of their ranges, respectively, and the percussion is cued to play a final, highly ornamented gesture.

The last section opens with a loop in the percussion, joined by a loop in the oboe and then one in the bassoon. The loops are harmonically ambiguous and of different lengths; they circulate for around 20 seconds before being abruptly cut off, ending the work.

To Spring: Charlottesville

As part of Revel at IX, an event held to raise money for The Bridge Progressive Arts Initiative, I created a 15-minute video art loop (no sound) that played all night during the after party.

To create the video I spent several weeks shooting footage at a variety of locations in Charlottesville at different times of day. I then compiled the footage so that it makes the following transitions over the course of the piece:

Nature -> Architecture -> Urban

Morning -> Night

Intimate -> Public

Next, I matted all of the videos using a 5x5 grid of squares with a rising and falling “echo effect” that changes the number of visible squares on the screen from one to twenty and back again. The patterns vary from left to right, top to bottom, and across a variety of diagonal configurations.
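
For anyone curious about the masking logic, here is a small Python sketch of the rising-and-falling count and two example orderings. The piece itself was assembled in video software, so this is only an illustration of the pattern, not the tool I used:

```python
import numpy as np

GRID = 5  # the 5x5 grid of square mattes

def mask_sequence(max_visible=20, order="left_to_right"):
    """Yield 5x5 boolean masks whose count of visible squares rises
    from one to `max_visible` and falls back to one again.
    `order` decides which squares turn on first; a column sweep and a
    diagonal sweep are shown here as two example orderings."""
    cells = [(r, c) for r in range(GRID) for c in range(GRID)]
    if order == "left_to_right":
        cells.sort(key=lambda rc: (rc[1], rc[0]))   # column by column
    elif order == "diagonal":
        cells.sort(key=lambda rc: rc[0] + rc[1])    # down-right diagonals
    counts = list(range(1, max_visible + 1)) + list(range(max_visible - 1, 0, -1))
    for n in counts:
        mask = np.zeros((GRID, GRID), dtype=bool)
        for r, c in cells[:n]:
            mask[r, c] = True
        yield mask

# Example: show the mask five steps into a left-to-right sweep.
masks = list(mask_sequence())
print(masks[4].astype(int))
```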

This project was my first foray into video art without sound, and it also taught me a great deal about Charlottesville: its nature and its urban landscape.

Electroacoustic Music for Dance

This year I worked with dancer and choreographer Juliana Garber on music for two of her dance works. The first was a solo piece titled Impulse that premiered at the 2nd Biomorphic Dance Festival at The Secret Theater in New York City.

This work blends processed a cappella samples, foley recordings, and sounds of New York City into a variety of frequently rhythmic, driving textures. The climax of the piece introduces instrumental sounds and unprocessed vocals.

The second piece, titled Impetus+, extended both the dance ideas and the sonic material of the first. It is written for six dancers and is longer, at 12 minutes.

With more dancers I felt freer to use dramatic gestures, and the more substantial length of the work allowed more transformational processes to act on the musical material. This piece was performed at The Greene Space in Queens, at the 3rd Biomorphic Dance Festival at the Hudson Guild Theater, and at Movement Research in Manhattan.

Memory, Decay, & Activism: William Basinski’s The Disintegration Loops

This semester, as part of Matthew Burtner’s “Musical Materials of Activism” class, I wrote a short analysis paper on William Basinski’s 2002 work The Disintegration Loops.

The Disintegration Loops consists of two pieces, d|p 1.1 and d|p 2.1, created by playing tape loops Basinski had made on tape players over extended periods of time. As the aging loops passed repeatedly over the tape heads, their magnetic coating flaked away, so the music naturally disintegrated during playback, and the result of this process was recorded onto a CD recorder. The program notes of The Disintegration Loops read “This music is dedicated to the memory of those who perished as a result of the atrocities of September 11th, 2001, and to my dear Uncle Shelley.”

In the paper I view The Disintegration Loops through many different lenses, including tape loop music, musical re-purposing, auto-destructive art, and elegiac music. I start by analyzing the sonic content of d|p 1.1: the two melodic voices it is built from and the additive and subtractive effects of disintegration. I then compare it to other tape music works by Reich and Eno. Next I place it within the context of auto-destructive art (after Gustav Metzger) and juxtapose it with Lucier’s I Am Sitting in a Room and the glitch music of Oval. Lastly I contrast it with Penderecki’s Threnody to the Victims of Hiroshima and Adams’ On the Transmigration of Souls, elegies that I believe primarily draw on collective memory rather than the personal memory that The Disintegration Loops draws on.

Ultimately, in The Disintegration Loops, and in d|p 1.1 specifically, Basinski has created a work of art in which not only the characteristics of the work but also its medium of production (the recording of tape disintegration) and its context of production are born from the catastrophic event it references. In other words, Basinski’s personal experience of the destruction of the World Trade Center, a seemingly immovable marvel of technology reduced to rubble, bled into his compositional practice, producing not just a new work but a new work built on a new technique, custom-made for the composer’s experience of the catastrophe. Modeling the catastrophe and capturing its disintegration gave the composer control over a disintegrative process at a time when the real-world disintegration around him was completely out of his control. This intense relationship between composer, event, and artwork suggests that The Disintegration Loops can do more than help Basinski through his personal memories: the coping effect the work had for its composer could extend to the collective memory, to the rest of humanity affected by the catastrophe from which it was spawned.

Peruse the full paper below.

Path

The New York-based, “lung-powered music” ensemble loadbang was in residency for several days at the University of Virginia this year, and they performed my work “Path” for ensemble and live electronics.

The instrumentation of the ensemble is unique: high baritone voice, trumpet, trombone, and bass clarinet. I decided early on to treat the voice as another instrument, that is, not to divide the group into solo voice with instrumental accompaniment. To reinforce this, the singer uses no text, instead singing on different vowels for timbral variety (mimicking the timbral variety introduced in the brass through different kinds of muting). I also decided to make the material of the piece very simple: diatonic pitch collections in Ab and D. This allowed me to focus on texture and form.

The resulting piece is meditative and moody, switching from sections of resonant drone to chaotic, improvised textures and back again. The electronics incorporate electronic drones and pastoral recordings made on the East Coast.

Words & Music

This semester I collaborated with three creative writing MFA students at the University of Virginia to create three new multimedia works based on and incorporating poetry they wrote. The pieces were presented at the Second Street Gallery in Charlottesville as part of the Tom Tom Founders Festival 2015.

The first piece, “For My Brother”, was created in collaboration with Courtney Flerlage for fixed media:

To create the piece, I began by building the first section without Courtney’s voice, to get a sense of the textures and overall mood that meshed with both of our visions for the work. I then recorded Courtney reading the poem (both in a normal speaking voice and whispered). The voice was then chopped up, manipulated, and accompanied with materials that “painted” the text (e.g. the word “falling” prompting some musical gesture of falling). Lastly, pitched material (violin samples and a manipulated train whistle) was added to tie the sections together timbrally.

The second piece, “BLUR”, was created in collaboration with Caitlin Neely for video art and live reading:

Creating video art for text was a new venture for me. I had done sound design for film and video art for live music in the past, but actually creating visuals to accompany words was new. I ended up creating a set of visuals that I mentally tied to parts of the text, then arranged them in time with enough synchronicity for the audience to pair image and word in a meaningful way. I then went back through and added simple, descriptive sound cues to flesh out the texture.

The last piece, “Singing Saw”, was created in collaboration with Matthew MacFarland for live electronics and live reading:

Because this piece centers on a musical saw, the first step was, of course, to record sounds of the musical saw. Along with these recordings I also captured guitar samples and a variety of foley sounds (falling apples, rustling leaves, footsteps, etc.) used to accompany the reading of the text. I used the foley and other non-musical sounds to create a sense of sections within the work, and the instrumental samples to make the sections cohesive overall. Because the sounds constantly accompany the storytelling, this work could be classified as “Cinema for the Ears”.

Collaborating with poets was wonderful. Being able to dive into the musical world of a poem hidden beneath the text and bring it to life was a great deal of fun and work, and I look forward to doing it again.

Sound Vision

In 2009 I began work on a music visualizer made in the MaxMSP/Jitter programming environment. I recently updated this software, so I thought I’d make a post about it.

Sound Vision includes two visualization types: a stereo FFT visualizer and a Bark scale visualizer. Each has a variety of user-alterable parameters, some of which modify how the visualizer handles the incoming audio (e.g. pre-gain) and some of which directly alter its visual output (e.g. video feedback).
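
For readers unfamiliar with the Bark scale: it groups FFT bins into perceptually motivated critical bands, so the Bark visualizer draws a compact, hearing-oriented reduction of the spectrum rather than a linear bin-by-bin view. Below is a minimal Python/NumPy sketch of that grouping using the Zwicker-Terhardt approximation; Sound Vision itself is a MaxMSP/Jitter patch, so this is an outside illustration of the idea, not its code:

```python
import numpy as np

def hz_to_bark(f):
    """Zwicker-Terhardt approximation of the Bark critical-band rate."""
    return 13.0 * np.arctan(0.00076 * f) + 3.5 * np.arctan((f / 7500.0) ** 2)

def bark_band_energies(frame, sr=44100, n_fft=1024, n_bands=24):
    """Group one FFT frame's magnitudes into Bark-spaced bands.
    Returns `n_bands` summed magnitudes, roughly the kind of reduced,
    perceptually spaced data a Bark visualizer can draw from."""
    spectrum = np.abs(np.fft.rfft(frame, n_fft))
    bark = hz_to_bark(np.fft.rfftfreq(n_fft, d=1.0 / sr))
    edges = np.linspace(0.0, bark.max() + 1e-9, n_bands + 1)
    bands = np.zeros(n_bands)
    for i in range(n_bands):
        in_band = (bark >= edges[i]) & (bark < edges[i + 1])
        bands[i] = spectrum[in_band].sum()
    return bands

# Example: analyze one windowed frame of a 440 Hz test tone.
t = np.arange(1024) / 44100.0
frame = np.sin(2 * np.pi * 440.0 * t) * np.hanning(1024)
print(bark_band_energies(frame))
```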

Here is a video that demonstrates the interface and visualization of Sound Vision:

The purpose of this software is to aid in the analysis of electroacoustic music, music with characteristics (dynamic spatialization and stark timbre changes, for example) that are not found in the musics commonly analyzed with other visualizers. In addition, the software can be used to visualize music for entertainment or accompaniment purposes (as it has been used in concert before).

Lastly, I’m including a paper I recently wrote that describes the Bark scale visualization module in detail.

Sound Vision 2.0: Bark Scale Visualizer

Brief Thoughts on Telematic Art

Last night I was an audience member at ZeroSpace, a conference on “distance and interaction”. From the conference’s webpage:

The events of ZeroSpace explore the theme of distance and interaction, examining how humans interact with one another and with our environment through new technologies.

As far as I can tell, telematic art is performance that utilizes telecommunications (for example, using Skype or a similar service to connect two or more performance venues with simultaneous live video and audio feeds) or, more generally, art that explores presence, distance, and space. I was first exposed to telematic art five years ago through Scott Deal at the Summer Institute for Contemporary Performance Practice at New England Conservatory, specifically through a presentation about his ensemble Big Robot, a trio that frequently incorporates telecommunications technology into its performances. When I first learned about telematic performance I was skeptical: what is gained by performing with someone miles away over an audio and/or video feed when the alternative, having all performers in the same space, seems much more satisfying? The ZeroSpace concert made me think more deeply about this concept and broadened my definition of telematic art.

In his introduction to the conference, Matthew Burtner mentioned Beethoven’s Leonore Overture, an example of an acoustic work that utilizes an offstage instrument (here are many more). Such pieces use distance artistically, specifically the physical removal of instruments from the performance space and the resulting muffled, disembodied sound, and they can be heard as historical, “low-tech” telematic performance. Another work that redefined telematic art for me was Erik Spangler’s Cantata for a Loop Trail. The piece takes place along the length of a looping trail in an outdoor park, with Spangler guiding a group of audience members. Performers and music-making devices are scattered along the trail, coloring (and aiming to enhance) the experience of hiking in a natural setting through sound. While not using telecommunications technology or distance per se, this performance engages space, and the audience’s physical movement through space, as a compositional tool: the musical content of the piece is a function of when and where the audience is at a given time.

Two uses of telecommunications stood out to me in the performance and research forum last night. First, Charles Nichols, a professor at Virginia Tech, was Skyped in from his office during the research forum and gave a presentation on his work. Although not related to art, this use of telecommunications in the context of the conference made me realize how pervasive telematic performance is. The Super Bowl halftime show and other live-streamed performances, any musical sounds heard over the telephone, and performance art over live webcams are all examples of telematic performance. If the live-performance requirement is dropped, all recordings and videos of performances are telematic art (in this case simply mediated by technology, not happening in real time). Second, the “Virginia Tech/UVA Handshake Improvisation” on the concert involved three instrumentalists in Charlottesville and two in Blacksburg improvising with one another. Because I was in the back of the room I had a poor line of sight to the local performers, so at times I could not tell whether a sound was local or remote, even though the local sounds emanated from instruments and the remote sounds came exclusively from the speakers. Because of the local/remote dichotomy of the performers, the piece was something more than if the same five instrumentalists had created the same sounds in one room. It was sound and listening spanning miles to create an improvised piece of sonic art.

I am interested in learning more about this emerging and developing form of technology-mediated performance and art-making, and hope to see more successful uses of it in the future.

Integration of Limits Dance Piece

I had a piece titled Integration of Limits on the 2014 University of Virginia Fall Experimental Dance Concert. It was made in collaboration with the Electronic Identity and Embodied Technology Atelier class and was made possible by a grant from The Jefferson Trust.

The piece involved seven dancers, three separate groups of choreographers (one for each of the piece’s three sections), and video projection created using the Motive motion capture software. It explored the relationship between dancers and their embodiment in digital form, and featured video versions of a dancer accompanying the ensemble, manipulated motion-tracked movement, and duets with a video-projected dancer.

For the music I was asked to create something that alluded to the digital nature of the motion-tracked movement, so I decided to use simple waves and repetitive “glitch” sounds (the same ones used in my video piece, Ring | Axle | Gear), and I was heavily influenced by the work of Ryoji Ikeda. I triggered the cues for the piece using a MaxMSP program I made that handles fading out, looping, and so on.

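The patch itself is in Max, but the cue behavior is simple enough to sketch offline. The Python snippet below is a hypothetical stand-in that renders a single cue with optional looping and a fade-out; the durations and test tone are placeholders, not material from the piece:

```python
import numpy as np

SR = 44100  # assumed sample rate

def render_cue(audio, loop=True, stop_after=10.0, fade_out=2.0):
    """Offline stand-in for one cue: optionally loop the sound until
    `stop_after` seconds, then apply a fade-out instead of a hard cut.
    The concert patch does this live in Max; the numbers here are
    placeholders."""
    total = int(stop_after * SR)
    if loop:
        reps = int(np.ceil(total / len(audio)))
        out = np.tile(audio, reps)[:total].copy()
    else:
        out = audio[:total].copy()
    n_fade = min(int(fade_out * SR), len(out))
    out[-n_fade:] *= np.linspace(1.0, 0.0, n_fade)
    return out

# Example: loop a one-second 220 Hz test tone for 10 s with a 2 s fade-out.
t = np.arange(SR) / SR
cue = render_cue(0.2 * np.sin(2 * np.pi * 220.0 * t))
```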

Each of the piece’s three sections begins with a “calibration” sound that I created by convolving simple waves with short rain recordings, followed by a canon in which each of the seven dancers comes downstage and performs a combination. I accompanied the canon with seven triggered sound files, each of which expands the range of the texture by a further +/-1.5 semitones. Each dancer who comes downstage moves more jerkily, at a lower “bit depth” than the last, and I represented this in the music through increasing tempi of clicking sounds and lower-fidelity audio settings.
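
Two of the treatments mentioned above are easy to show in miniature: the widening of the pitch range in 1.5-semitone steps and the bit-depth reduction. The Python sketch below uses an arbitrary 440 Hz reference and made-up bit depths purely for illustration; the piece itself was built from pre-rendered sound files:

```python
import numpy as np

def range_bounds(n_files=7, step_semitones=1.5, center_hz=440.0):
    """Upper and lower pitch bounds after each of the seven files has
    widened the texture by a further +/-1.5 semitones. The 440 Hz
    center is an arbitrary reference for illustration."""
    k = np.arange(1, n_files + 1)
    upper = center_hz * 2.0 ** (k * step_semitones / 12.0)
    lower = center_hz * 2.0 ** (-k * step_semitones / 12.0)
    return lower, upper

def bitcrush(audio, bits):
    """Quantize a [-1, 1] signal to the given bit depth, the
    lower-fidelity treatment that parallels the dancers' lower
    'bit depth' of movement."""
    steps = 2 ** (bits - 1) - 1
    return np.round(audio * steps) / steps

# Example: each successive dancer gets a wider range and a crunchier sound.
lower, upper = range_bounds()
t = np.arange(44100) / 44100.0
tone = np.sin(2 * np.pi * 440.0 * t)
crushed = [bitcrush(tone, bits) for bits in range(12, 5, -1)]  # 7 settings
```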

After this introduction, each section diverges into a new texture. The first section consists of a musical phrase built from gated triangle waves that repeats over and over, speeding up (varispeed at +2 semitones) and distorting until it reaches a frenzy, matching the acceleration of the dancers and of their onscreen digital representations. The second section introduces short, abstracted snippets of a Strauss waltz recording, along with a short beeping sound and sidechain-compressed clicking sounds. The third section reveals the Strauss waltz, slowed 8x and recreated using a bank of sine waves, pulling it out of its acoustic, orchestral context and into the simple-wave world of the previous two sections. The gated triangle waves return, slowed, to end the piece, pulsing and losing steam until nothing remains. Video of the performance is below:
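
As a technical aside, the “recreated using a bank of sine waves” step is essentially a rough additive resynthesis. The Python/NumPy sketch below shows the idea, analyzing frames of a source, keeping the strongest spectral peaks, and resynthesizing them with sine oscillators over stretched frames. It is a crude illustration with assumed parameters, not the tool used for the piece; a real implementation would track partials and phases between frames:

```python
import numpy as np

SR = 44100  # assumed sample rate

def sine_bank_stretch(audio, stretch=8, n_partials=24, frame=2048, hop=512):
    """Crude additive-resynthesis sketch: for every analysis frame, keep
    the `n_partials` strongest FFT peaks and play them back on a bank of
    sine oscillators over a hop `stretch` times longer, slowing the
    source without transposing it. Consecutive grains are simply
    overlap-added with Hann windows."""
    window = np.hanning(frame)
    out_hop = hop * stretch
    n_frames = max(1, (len(audio) - frame) // hop)
    out = np.zeros((n_frames + 1) * out_hop)
    t = np.arange(2 * out_hop) / SR
    grain_env = np.hanning(2 * out_hop)
    freqs = np.fft.rfftfreq(frame, d=1.0 / SR)
    for i in range(n_frames):
        spec = np.abs(np.fft.rfft(audio[i * hop:i * hop + frame] * window))
        loudest = np.argsort(spec)[-n_partials:]
        grain = np.zeros(2 * out_hop)
        for b in loudest:
            grain += spec[b] * np.sin(2 * np.pi * freqs[b] * t)
        out[i * out_hop:i * out_hop + 2 * out_hop] += grain * grain_env
    return out / (np.max(np.abs(out)) + 1e-9)

# Example: stretch one second of a 330 Hz test tone eightfold.
src = np.sin(2 * np.pi * 330.0 * np.arange(SR) / SR)
slowed = sine_bank_stretch(src)
```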