Brief Thoughts on Telematic Art

Last night I attended ZeroSpace, a conference on “distance and interaction”. From the conference’s webpage:

The events of ZeroSpace explore the theme of distance and interaction, examining how humans interact with one another and with our environment through new technologies.

As far as I can tell, telematic art is performance that utilizes telecommunications (for example, using Skype or a similar service to connect 2 or more performance venues with simultaneous live video and audio feeds) or art that explores presence, distance, and space in general. I was first exposed to telematic art 5 years ago through Scott Deal at the Summer Institute for Contemporary Performance Practice at New England Conservatory, specifically through a presentation about his ensemble Big Robot, a trio that frequently incorporates telecommunications technology into its performances. When I first learned about telematic performance I was skeptical: what is gained by performing with someone miles away over an audio and/or video feed when the alternative, having all performers in the same space, seems much more satisfying? The ZeroSpace concert made me think more deeply about this question and broadened my definition of telematic art.

In his introduction to the conference, Matthew Burtner mentioned Beethoven’s Leonore Overture, an example of an acoustic work that utilizes an offstage instrument (here are many more). Such pieces use distance artistically: the physical removal of instruments from the performance space, and the resulting muffled, disembodied sound, make them a kind of historical, “low-tech” telematic performance. Another work that redefined telematic art for me was Erik Spangler’s Cantata for a Loop Trail. This piece takes place along the length of a looping trail in an outdoor park, with Spangler as the guide to a group of audience members. Performers and music-making devices are scattered along the trail, coloring (and aiming to enhance) the experience of hiking in the natural setting through sound. While not using telecommunications technology or distance per se, this performance engages with space, and with the audience physically moving through space, as a compositional tool, making the musical content of the piece a function of when and where the audience is at a given time.

Two uses of telecommunications stood out to me last night. First, Charles Nichols, a professor at Virginia Tech, was Skyped in from his office during the research forum and gave a presentation on his work. Although not itself art, this use of telecommunications in the context of the conference made me realize how pervasive telematic performance is. The Super Bowl halftime show and other live-streamed performances, any musical sounds heard over the telephone, and performance art over live webcams are all examples of telematic performance. If the live aspect of telematic art is dropped, all recordings and videos of performances are telematic art (in this case simply mediated by technology, not in real time). Second, the “Virginia Tech/UVA Handshake Improvisation” on the concert involved 3 instrumentalists in Charlottesville and 2 instrumentalists in Blacksburg improvising with one another. From the back of the room I had a poor line of sight to the local performers, so at times I was unable to tell whether a sound was local or remote, even though the local sounds emanated from instruments and the remote sounds came exclusively from the speakers. Because of the local/remote dichotomy of the performers, the piece was something more than if the same 5 instrumentalists had created the same sounds in one room. It was sound and listening spanning miles to create an improvised piece of sonic art.

I am interested in learning more about this emerging and developing form of technology-mediated performance and art-making, and hope to see more successful uses of it in the future.

Integration of Limits Dance Piece

I had a piece titled Integration of Limits on the 2014 University of Virginia Fall Experimental Dance Concert, created in collaboration with the Electronic Identity and Embodied Technology Atelier class and made possible by a grant from The Jefferson Trust.

The piece involved 7 dancers, 3 separate groups of choreographers (one for each of the 3 sections of the piece), and video projection created using the Motive motion capture software. The piece explored the relationship between dancers and their embodiment in digital form, and featured video versions of a dancer accompanying the ensemble, manipulated motion-tracked movement, and duets with a video-projected dancer.

For the music I was asked to create something that alluded to the digital nature of the motion-tracked movement, so I used simple waves and repetitive “glitch” sounds (the same ones used in my video piece Ring | Axle | Gear), drawing heavily on the work of Ryoji Ikeda. I triggered the cues for the piece using a MaxMSP program I made that handles fading out, looping, and so on, seen below.

[Image: the MaxMSP cue-triggering program]

The 3 sections of the piece each begin with a “calibration” sound that I created by convolving simple waves with short rain recordings, followed by a canon in which each of the 7 dancers comes downstage and performs a combination. I accompanied the canon with 7 triggered sound files, each of which expands the range of the texture by a further +/-1.5 semitones. Each successive dancer’s movement is jerkier, at a lower “bit depth,” than the last, and I represented this in the music through increasing tempi of clicking sounds and lower-fidelity audio settings.
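
To illustrate the two techniques just mentioned, here is a minimal sketch, not my actual session, of convolving a simple wave with a rain recording and of the arithmetic behind the +/-1.5-semitone expansion; the 440 Hz tone and the file names are assumptions for the example.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical short rain recording, assumed mono 16-bit
rate, rain = wavfile.read("rain_short.wav")
rain = rain.astype(np.float64) / np.iinfo(np.int16).max

# A "simple wave": one second of a 440 Hz sine (frequency is an assumption)
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 440.0 * t)

# Convolving the tone with the rain smears it through the rain's texture
calibration = fftconvolve(tone, rain)
calibration /= np.abs(calibration).max()
wavfile.write("calibration.wav", rate, (calibration * 32767).astype(np.int16))

# Playback-rate ratios that widen the texture by +/-1.5 semitones per file
for k in range(1, 8):
    print(f"file {k}: x{2 ** (1.5 * k / 12):.3f} up, x{2 ** (-1.5 * k / 12):.3f} down")
```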

After this introduction, each section diverges to a new texture. The first section consists of a musical phrase created from gated triangle waves that repeats over and over, speeding up (varispeed at +2 semitones) and distorting until it reaches a frenzy, accompanying the acceleration of the dancers and of their onscreen digital representations. The second section introduces short, abstracted snippets of a Strauss waltz recording, along with a short beeping sound and sidechain-compressed clicking sounds. The third section reveals the Strauss waltz, slowed 8x and recreated using a bank of sine waves, removing it from its acoustic, orchestral context and placing it in the simple-wave context of the previous 2 sections (a sketch of the sine-bank idea follows). The gated triangle waves, slowed, end the piece, slowly pulsing and losing steam until nothingness.
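
Here is a minimal sketch of one way such a sine-bank recreation can be built; this demonstrates the idea rather than reproducing my actual process, and the file names, frame sizes, and partial count are assumptions. The loudest partials of each analysis frame drive a bank of sines, and the resynthesized frames are spaced 8x farther apart than in the original.

```python
import numpy as np
from scipy.io import wavfile

N_FFT, HOP = 4096, 256   # small input hop so the 8x-spaced output grains overlap
STRETCH = 8              # slow the recording 8x
N_PARTIALS = 32          # size of the sine bank (an assumption)

rate, x = wavfile.read("waltz_excerpt.wav")  # hypothetical mono 16-bit excerpt
x = x.astype(np.float64) / np.iinfo(np.int16).max

window = np.hanning(N_FFT)
freqs = np.fft.rfftfreq(N_FFT, 1.0 / rate)
t = np.arange(N_FFT) / rate
out = np.zeros(len(x) * STRETCH + N_FFT)

for i, start in enumerate(range(0, len(x) - N_FFT, HOP)):
    spec = np.abs(np.fft.rfft(x[start:start + N_FFT] * window))
    peaks = np.argsort(spec)[-N_PARTIALS:]     # loudest partials in this frame
    grain = sum(spec[p] * np.sin(2 * np.pi * freqs[p] * t) for p in peaks)
    pos = i * HOP * STRETCH                    # frames placed 8x farther apart
    out[pos:pos + N_FFT] += grain * window

out /= np.abs(out).max()
wavfile.write("waltz_sines_8x.wav", rate, (out * 32767).astype(np.int16))
```

Video of the performance below: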

Trained/"Trane'd" Music Improvisation Generator

As part of Alexa Sharp’s Artificial Intelligence class at Oberlin, Nick Towbin-Jones and I created a program that utilized Markov models to generate jazz improvisations over chords in the “style” of an artist. We focused on the solos of John Coltrane, but solos by any single-line instrument (flute, trumpet, etc.) could be modeled.

The program takes in a transcription of a solo in MIDI format, a specially formatted list of chords (e.g. “Dm 4, Ad 2” for 4 beats of a D minor chord followed by 2 beats of an A diminished 7th chord), and an “order” parameter, which determines roughly how many notes in the solo are viewed as a unified “gesture” (our default was 3). The content of the solo is assumed to correspond exactly to the chords in the chord list (a solo played over a jazz standard).
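
A minimal sketch of parsing that chord format (the function name and exact code are illustrative, not our original source):

```python
def parse_chords(spec):
    """Parse a chord list like 'Dm 4, Ad 2' into (chord, beats) pairs."""
    parsed = []
    for entry in spec.split(","):
        chord, beats = entry.split()
        parsed.append((chord, int(beats)))
    return parsed

print(parse_chords("Dm 4, Ad 2"))  # [('Dm', 4), ('Ad', 2)]
```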

The computer then builds a Markov model from 1) consecutive chords and 2) groups of notes and the note that follows them (e.g. if the order is 3, then “C D E F” would be parsed as “C D E” -> “F”; that is, “C D E” would be grouped as one gesture that is followed by “F”).
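
In Python, the note half of the model can be sketched as a mapping from each order-length gesture to the notes observed after it (a simplification of our actual program, which also modeled the chords):

```python
from collections import defaultdict

def build_model(notes, order=3):
    """Map each `order`-note gesture to the notes that were observed after it."""
    model = defaultdict(list)
    for i in range(len(notes) - order):
        gesture = tuple(notes[i:i + order])
        model[gesture].append(notes[i + order])  # duplicates encode frequency
    return model

# "C D E F" is parsed as ("C", "D", "E") -> "F"
model = build_model(["C", "D", "E", "F"])
```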

[Image: diagram of the Markov model]

Once the model has been built, the program takes in another chord list (most likely the same chord list used in the original input), parses it, and probabilistically generates notes to “fill” the chords based on the trained model. The result is a “new” solo over the chords that has many of the characteristics (the same “style”) of the solo that we trained the model on.
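
Generation then walks the model one note at a time; a sketch (again omitting the chord conditioning of the real program):

```python
import random

def generate(model, order=3, length=64):
    """Probabilistically extend a solo from a gesture -> next-note model."""
    out = list(random.choice(list(model)))      # start from a known gesture
    while len(out) < length:
        candidates = model.get(tuple(out[-order:]))
        if not candidates:                      # dead end: jump to a known gesture
            out.extend(random.choice(list(model)))
            continue
        out.append(random.choice(candidates))   # weighted by observed frequency
    return out
```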

[Image: excerpt of a generated solo]

As part of the class we tested the output of our program on a group of people to determine whether they could tell the difference between the “real” and the generated solos. The result, not unexpectedly, was that they could tell the difference, and more specifically that they were more accurate with longer examples than with shorter ones. This shows that while our program was somewhat able to reproduce “style” on a small scale, in longer phrases it didn’t have enough “memory” to create convincing musical gestures. Regardless, the system was fascinating to create and resulted in some interesting programming ideas and very strange computer-mediated improvisations.

You can read more in the full report below.

Life at International Sound Art Festival Berlin 2012

In 2012 I had a minute-long piece titled “Life” selected for the 60x60 Voice Mix, part of the longstanding 60x60 project created by Robert Voisey. The Mix was played at the International Sound Art Festival Berlin 2012 in the Mitte Museum.

The piece’s source material is solely a recording of me saying “life”, which you can hear below.

I then took that sample and stretched it in various ways using the PaulStretch and native Pro Tools time-stretching algorithms.

Next, I divided these samples in a variety of ways: sectioning them over time (separating the “luh”, “eye”, and “fuh” phonemes, for example) and over frequency (separating the low, voiced sounds from the breathy, noisy sibilants). I then used a variety of effects and techniques, including granular synthesis, distortion, pitch-shifting, and more, to create different textures. (A sketch of the frequency split follows.)
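
Here is a minimal sketch of that frequency split; the cutoff and file names are assumptions, and this demonstrates the technique rather than reproducing my session:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

rate, x = wavfile.read("life_stretched.wav")  # hypothetical stretched sample
x = x.astype(np.float64) / np.iinfo(np.int16).max

CUTOFF = 2000.0  # Hz; a rough voiced/sibilant boundary (an assumption)
low = sosfilt(butter(4, CUTOFF, btype="lowpass", fs=rate, output="sos"), x)
high = sosfilt(butter(4, CUTOFF, btype="highpass", fs=rate, output="sos"), x)

wavfile.write("life_voiced.wav", rate, (low * 32767).astype(np.int16))     # low, voiced sounds
wavfile.write("life_sibilant.wav", rate, (high * 32767).astype(np.int16))  # breathy sibilants
```

I sculpted and organized these textures and ended up with the final, minute-long piece that was heard at the festival, below.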

Sound & Video Collections

There are a number of really interesting and expansive collections of sounds, musical works, and videos on the internet, created by individuals over some prescribed period or in some prescribed number (365 days or 100 sounds, for example), or created by many artists under a common theme. I will be outlining three here.

First is Joo Won Park’s 100 Strange Sounds, a quirky collection of short videos with a focus on performance of “found sounds” manipulated by computer processing. For each video Park includes a short list of “Materials Used” and a descriptive “How It Was Made” paragraph. The videos vary wildly in terms of content, although Park includes “pieces with a similar sonic approach”, “complementary entries”, etc. at the end of many videos. Some of the most popular entries are No. 47, No. 3, and No. 37.

Second is a collection of Lumiere Videos, inspired by early French filmmaker Louis Lumiere and created by Andreas Haugstrup Pedersen & Brittany Shoot. The Lumiere Videos have a manifesto that includes guidelines for creating the short videos:

• 60 seconds max

• Fixed camera

• No audio

• No zoom

• No editing

• No effects

There is huge variety in the places, people, and scenes depicted, ranging from boats in a harbor and a POV shot of an escalator to an artist finishing some works and a cityscape. I learned about this project by constantly running into these videos while searching archive.org, a great resource for copyright-free videos. Because of the variety of topics they tend to infiltrate many searches, creating an interesting contrast to videos with multiple camera angles, quick editing, and blaring sound.

Last is a project by Joshua Goldsmith titled 365 Days of Sound. Each sound is 30 seconds long, and the sounds range from synthesizer and foley sounds to instrumental and environmental sounds. Goldsmith also utilizes Pure Data to process some sounds. This is the project I know least well, but looking through it I have found some interesting sounds.

These personal or community-based media collections are fascinating examples of projects by people with interests in audio/video, and they reveal some of the less visible aspects of creating music or video: experimentation, inspiration, and the unpolished work that goes into creating media.

"Industrial Revelations" Analysis II

Continuing to look at Natasha Barrett’s “Industrial Revelations”, I am developing a loose thesis based on 3 levels at which I appreciate, and seek to understand, her work in general and this piece specifically.

First, this work is sonically “masterful”: the sounds within it are rich, precisely composed, and highly varied, and I am interested in how Barrett achieves this. What effects and techniques are used to produce the sounds? How does she combine sounds to create gestures? A great deal of creatively used convolution and phase vocoder time-stretching is employed, which could be a result of software created by Øyvind Hammer, who I know has worked with Barrett in the past.
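
As a point of reference for the technique (and emphatically not Barrett’s or Hammer’s actual tools), phase vocoder time-stretching is a few lines in librosa; the file name and stretch factor here are arbitrary:

```python
import librosa
import soundfile as sf

# Load a hypothetical excerpt and slow it to a quarter of its original speed
y, sr = librosa.load("excerpt.wav", sr=None)
slow = librosa.effects.time_stretch(y, rate=0.25)  # rate < 1 stretches the audio
sf.write("excerpt_stretched.wav", slow, sr)
```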

Second, as mentioned in my first post, I have broken the types of sounds within the piece into 3 broad categories: humanly produced sounds, machine sounds, and environmental sounds. I am interested in seeing how Barrett uses groupings and juxtapositions of these sound types in this composition. Below you can see a spreadsheet containing (an incomplete set of) tagged data for each sound in the piece: “mac” indicates a machine sound, “env” an environmental sound, and “hum” a humanly produced sound. While tagging these sounds I am also analyzing how they are produced, but have not yet determined a concise way to mark this.

[Image: spreadsheet of tagged sound data]
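
Once the tagging is complete, I plan to aggregate it along these lines; a minimal sketch assuming the spreadsheet is exported as a CSV with hypothetical time and tag columns:

```python
import csv
from collections import Counter

# Hypothetical export of the tagging spreadsheet: one row per sound,
# with its onset time in seconds and its tag ("mac", "env", or "hum")
with open("industrial_revelations_tags.csv") as f:
    rows = list(csv.DictReader(f))

print(Counter(r["tag"] for r in rows))  # overall balance of the 3 sound types

# Tag counts per minute, to look for trends across the form
per_minute = Counter((int(float(r["time"]) // 60), r["tag"]) for r in rows)
for (minute, tag), n in sorted(per_minute.items()):
    print(f"minute {minute}: {tag} x{n}")
```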

Lastly, the formal structure of the piece (and of other pieces by Barrett) seems closely related to that of other acousmatic pieces (characterized by sections, with amorphous or sharply defined (gestural) boundaries, that vary greatly in density and gestural rhythm), but it still holds a certain mystery for me. What is the logic behind the form? How are sections structured internally? The formal structure is slowly being revealed to me as I listen to the piece over and over and analyze which sound types (and combinations of them) are used to define formal subdivisions within the piece.

Below you can see a complete sectional analysis of the piece as represented in EAnalysis. There are seven sections (including an extended coda), the majority of which contain codas themselves (usually long “reverb tails” that meld sections together). 

[Image: sectional analysis of the piece in EAnalysis]

Once I have tagged all of the sounds and am happy with my formal analysis, I will view the piece from several different analytical angles, most likely drawing on Simon Emmerson’s work. I’m excited to represent the tagged data graphically and to see whether any trends emerge visually that I haven’t perceived aurally. I’m also interested in exploring why this piece impacts me so much, as a listener and as a composer. More to follow.

Ring | Axle | Gear

For my senior recital at Oberlin I created a triptych video art piece titled Ring | Axle | Gear. Each section of the piece is around a minute and a half long and explores various ways in which the shapes ring, axle (line), and gear can be manipulated, accompanied tightly by sound design that uses a wide variety of synthesized and real-world sounds. The piece was created using Adobe After Effects, several Trapcode effects from Red Giant Software, and Vade’s v002 plugins for the visuals, and Pro Tools, Soundhack, and MaxMSP for the audio.

Ring

I created the piece by first working a bit on the video, then matching that with the sound design, extending the audio a bit, matching that with the video, and so forth. The piece was an exploration of animation for me, and the first time I had created animation in a timeline environment (as opposed to the text-based environment of Processing or the visual, object-based environment of Jitter). The animation explores 2- vs. 3-dimensional space, transitioning violently or smoothly between them and at times settling into a kind of 2.5-dimensional world. I also explored “glitches” in the video software I was using: artifacts that come from rotating an object faster than it was intended to be rotated, or from using over-saturation and video feedback to expand the color palette chaotically.

Axle

The audio was made using real-world sounds that have been manipulated so heavily as to be almost completely unrecognizable or, on the other hand, audio that is completely unaffected (such as the metallic sounds in Axle). I also utilized “authentic” glitchy sounds, sounds that were the result of computational accidents, primarily corrupted audio files.

Gear

I’m interested in exploring animation in more depth in the future. My first goal is to bring what I’m doing more into the 21st century: a great deal of Ring | Axle | Gear feels trapped stylistically between analog and early digital video from the 1970s/80s and more modern video (which creatively utilizes particles, 3D models, accurate shading, depth of field, and other techniques that fast computers allow). My second goal is to explore medium mapping within fixed video: much video software (including After Effects) allows parameters of the video to be controlled by audio features, e.g. the volume of the low frequencies of an audio file causing a video object to jump or deform. This technique can add more depth to the interaction between the video and the sound than could be achieved with “by hand” sound design.
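
As a sketch of what the audio side of that mapping involves (a hypothetical preprocessing step, not After Effects’ built-in audio keyframes), here is one way to turn the low-frequency energy of a soundtrack into a per-video-frame control signal:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

FPS = 30  # video frame rate (an assumption)

rate, x = wavfile.read("soundtrack.wav")  # hypothetical soundtrack
x = x.astype(np.float64) / np.iinfo(np.int16).max
if x.ndim > 1:
    x = x.mean(axis=1)  # fold stereo to mono

# Isolate the low band (below ~120 Hz, an assumption) that will drive the video
low = sosfilt(butter(4, 120.0, btype="lowpass", fs=rate, output="sos"), x)

# One RMS value per video frame: a keyframe track a compositor could map
# onto position, scale, deformation, etc.
spf = rate // FPS
n_frames = len(low) // spf
envelope = np.sqrt((low[:n_frames * spf].reshape(n_frames, spf) ** 2).mean(axis=1))
envelope /= envelope.max() + 1e-12  # normalize to 0..1 for parameter mapping
```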

Peak for solo percussion

I recently completed (a first draft of) a solo percussion piece titled Peak for Judith Shatin’s seminar class. The instrumentation of the piece is below:

[Image: instrumentation of Peak]

The performance notes read:

This piece explores arm independence and the smooth transference of energy between multiple instruments within a gesture. Care should be taken to make these transfers as smooth and convincing as possible. More explicitly, the individual continuity of multiple simultaneous lines (played by the separate hands) is more important than absolute precision of the collective rhythm.

There should be a sense of energy continuing through each gesture, particularly within silences (i.e. each note should lead to the next). Likewise, the energy level of each section should feel relative to the sections preceding and following it. Total running time ranges from 6 to 7 minutes, depending on the chosen tempo and the lengths of fermati.

The piece consists of 5 sections. First is the introduction, which establishes the conceptual gesture used throughout the piece: a rhythm played on a single instrument that rises, peaks, and falls in volume, and which can relate in various ways to similar “peaks” on other instruments. The glockenspiel is not used in the introduction.

Here is the piece’s key:

[Image: the piece’s key]

And here is the “Introduction” of Peak:

[Image: “Introduction” of Peak]

Next is a section whose material spawned from an earlier, simpler (and more linear) idea for the piece, “complexified” in this form through substitutions and rearrangement. The premise is the following: the “peak” gestures are played on the drums, dovetailed into one another, with a glockenspiel gesture punctuating the “top” of each “peak”. In the first realization of this concept, each series of 6 peaks (grouped into 3 + 3) was rhythmically grounded in one subdivision of the tempo of the piece (quarter note, eighth note, etc.), and as this subdivision got smaller the glockenspiel gesture would contain more pitches (1, then 2, then 4, etc.). Further, these pitches were chords built on top of a simple descending line (C, B, Bb, Ab, G, Gb).
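
The doubling relationship in that first realization can be summarized in a few lines; a sketch (the exact subdivision list is illustrative):

```python
# As the grounding subdivision halves, the glockenspiel pitch count doubles,
# with chords stacked on the descending line C, B, Bb, Ab, G, Gb
subdivisions = ["quarter", "eighth", "sixteenth"]

for i, subdivision in enumerate(subdivisions):
    pitches = 2 ** i  # 1, then 2, then 4 pitches per glockenspiel gesture
    print(f"{subdivision}-note peaks: {pitches} pitch(es) per gesture")
```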

The section as it stands now varies the subdivision, and the corresponding number of notes in the glockenspiel gesture, with each “peak”, increasing in intensity for the whole of the second section. The speed within each gesture also becomes more varied as the section progresses (starting with static quarter notes but progressing to gestures that start with half notes and accelerate to triplet eighths, for example).

Next is the meta “peak” and plateau of the piece, a section that explores playing the tambourine by shaking (along with the shaker) and also interjects a climactic gesture in the glockenspiel: an A5 (a pitch that does not appear in the previous section) that crescendos/speeds up, then diminuendos/slows down, representing a bleeding of the “peak” gesture concept into that instrument and clearly separating the piece into two halves.

The fourth section returns to the texture of the second section, except that the melody is ascending and modified (F, G, A, Bb, C, D) and the subdivision generally gets larger as the section progresses. New subdivisions are explored (the dotted eighth, for example), and the vocabulary for the relationship between consecutive “peak” gestures is expanded from the second section.

Finally, the piece ends with an “epilogue” that explores the delay between two “peak” gestures briefly, before ending monophonically in the muted snare and then the muted low tom.

“Epilogue” of Peak

[Image: “Epilogue” of Peak]

The distance between exposed compositional transformations (the change from static to dynamically varying subdivisions, for example) and hidden systems at work (the glockenspiel pitch structure) is large in this piece, and I’m curious to see how this comes across in performance. I will experiment with this more (or less, if it ends up being unsuccessful) in the future.

Intersection of Computation and Visuals

I wanted to quickly discuss a few programmers/artists who are doing work that explores the intersection of computation and visual art. These people have inspired me to create video art and interactive visual systems, to think about algorithmic/procedural composition tools in interesting ways, and also to rethink what a “gallery space” is in the 21st century.

First is Jared Tarbell’s Complexification | Gallery of Computation.

Tarbell states:

I write computer programs to create graphic images. 

With an algorithmic goal in mind, I manipulate the work by finely crafting the semantics of each program. Specific results are pursued, although occasionally surprising discoveries are made.

The images, interactive visual systems, and code that Tarbell creates are beautifully concise and free of any artistic “fat” (put another way, artistic choices are backgrounded to facilitate a deeper appreciation of the processes and algorithms at work, e.g. by using a neutral, muted, black-and-white, or rainbow color palette). This choice, depending on your viewpoint, may make these works more or less engaging. A great deal of the work is done in Processing. For more work by Tarbell (albeit a bit older), check out the website of his company, Levitated.

Next is Mario Klingemann’s (AKA “Quasimondo’s”) Incubator.

This gallery has a huge range of quality, from utilitarian to silly to artistic. The level of interactivity is also variable, from tapping into the user’s webcam to simply being a smartly-programmed algorithm. These programs are primarily created with Adobe Flash or Processing. Some of the more interesting programs are Feedback, Pinupticon, and Cityscapes.  More works by Klingemann can be found on his tumblr.

Last is Casey Reas’s website.

This website acts as a gallery for the huge number of works Reas has created over the past 10 years. Reas is the co-creator (along with Ben Fry) of Processing, and has acted as a catalyst for many changes in data visualization and internet typography design in the last 15 years. As with Tarbell’s work, these projects generally lack artistic “fat” or silliness, I feel, which can be viewed as positive or negative. Regardless, they are impressively done and computationally beautiful. To get a look at some of the work that Reas and Fry have facilitated through Processing, check out the Processing Exhibition.

The creation of algorithm visualizations, interactive visual programs, and, more generally, art that incorporates computation and visual elements has developed extensively in the past 30 years. Different forms of abstraction have pushed the creation of visuals into the computational domain (through experiments by artists and programmers) and brought computation to the creation of visuals in new ways (through more accessible programming tools such as Processing and Flash).

Programs that utilize live streams of data (whether mined from the internet, from a controller, or from video cameras) are of particular interest, as real-time video manipulation is catching up with the real-time audio capabilities that increasing computer speeds made possible in the 90s.