This paper describes the Murmurator, a flocking simulation-driven software instrument created for use with multi-channel speaker configurations in a collaborative improvisation context. Building upon previous projects that use natural system models to distribute sound in space, the authors focus on the potentials of this paradigm for collaborative improvisation, allowing performers both to improvise with each other and to adapt to performer-controllable levels of autonomy in the Murmurator. Further, the Murmurator’s facilitation of a dynamic relationship between musical materials and spatialization (for example, having the resonance parameter of a filter applied to a sound depend on that sound’s location in space or its velocity) is foregrounded as a design paradigm. The Murmurator’s collaborative genesis, relationship to improvisational and multi-channel acousmatic performance practices, software details, and future work are discussed.
More and more academic and entertainment spaces are making use of high density loudspeaker arrays (HDLAs) and large scale, multiple screen video/projection environments. This paper concerns itself with the development of a shared language between artistic uses of these technologies, primarily within the fields of electroacoustic music and installation art. Both technologies engage with the immersive affordances of ultra-high resolution (in the auditory and visual domains, respectively), and much can be learned by comparing the strategies, successes, and failures of artists and technicians working with either or both technologies. My hope in investigating the interactions of these technologies in creative applications is to lead toward more informed, theoretically- and aesthetically-developed work using them. A historically-informed discussion of the current state and affordances of each technology is presented first, followed by examples of work that deeply engages with and/or bridges the divide between the technologies. Next, I present a shared language for the development of artistic works using these technologies, informed and contextualized by ecologically-driven media theories of Cook, Clarke, Jensen, and others. Lastly, future potentials of these technologies viewed through the lens of this shared language (including impacts on the fields of virtual and augmented reality, CAVE systems, and newly proliferating 360º video technologies) are presented.
A survey of the design, functionality, various uses, and communities built around modern music software. Includes discussions of music software's role in the creative process, modes of engagement afforded by different software, a list of characteristics of music software, and a brief analysis of four music software packages (SuperCollider, Renoise, Max, and Ocarina).
This paper discusses the pieces Estilhaço 1 and 2, for percussion, live electronics, and interactive video, created collaboratively by Fernando Rocha and Eli Stine. The conception of the pieces (including the artistic goals and metaphors used), the context in which they were created, their formal structures and their relationship, the technologies used to create both their audio and visual components, and the relationships between the sounds and corresponding images are discussed.
SYLLABI + WORKSHOPS
This webpage introduces a multi-layer approach to creating interactive digital electronics (and includes an interactive electronics sandbox system programmed in Max), discusses some experiences I have had creating interactive digital systems for use in studio/performance settings, and includes a list of human and software resources for students' further perusal.
This workshop introduces high school- and college-level beginners to the fundamentals of electroacoustic music, including a brief discussion of acoustics, recording techniques, digital audio, and synthesis, along with several brief assignments.
Processing is an "open source programming environment for teaching computational design and sketching interactive media software" developed by Ben Fry and Casey Reas.
This educational webpage introduces visitors to Processing fundamentals and some more advanced functionality, and concludes with a set of high quality code templates I created to get students quickly up and running with several archetypal uses of Processing: sound/video analysis, using shaders, connecting Processing to other programs with OSC, and object-oriented programming in Processing.
This course explores the ways in which sound interacts with video. We will investigate this interaction by first discussing the history and contexts of electronic sound, video, and their relationship, establishing (or refreshing) audio and video editing skills, and then getting hands-on experience through creative projects. Projects include composing sound design for film, creating video art (that incorporates video-recorded and/or animated materials), and designing real-time interactive media projects. The target student of this course is a musician interested in expanding their relationship to video and multimedia. No experience with audio or video technologies is required, although it is welcome.