The interactions between different living and non-living agents and their surroundings within an environment are complex, multidimensional, and self-organizing. Within the context of electroacoustic music composition, this web of organic activity presents a fascinating potential to dictate or inform sonic gestures, textures, and the formal structure of a work. The extraction of these interactions for musical use, however, is easier said than done, and requires a method for translating a representation of an environment to a sound-mappable model. In the case of this work, the representation of the environment used is a stereo field recording, a directional sonic record of the (audible) events within an environment. A deconstruction of this model may then be mapped onto other sounds (or sound generators), ultimately creating a “sonification of a sound”, a map from a transcription of the sonic interactions of an environment onto an entirely different sound world. Using methodologies at the intersection of bioacoustics and music information retrieval, the author designed and implemented a software system, AcousTrans, that facilitates such a mapping process. Segmented events within a stereo sound recording are intelligently mapped onto multi-channel sound events from another corpus of sounds using a k-nearest neighbors search of a k-dimensional tree constructed from an analysis of acoustic features of the corpus. The result is an interactive sound generator that injects the organicism of environmental soundscape recordings into the sequencing, processing, and composing of immersive electroacoustic music. This work was created within the context of bioacoustic analysis of intertidal oyster reefs, but is applicable to any environmental soundscape that may be effectively decomposed using the described method.
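The matching step described in the abstract, a k-nearest neighbors query against a k-dimensional tree built from corpus feature vectors, can be sketched in Python. This is an illustrative sketch using SciPy, not the AcousTrans implementation; the corpus size, feature dimensions, and function name are invented for the example.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical feature vectors: one row per corpus sound event, one column
# per acoustic feature (e.g. spectral centroid, RMS, spectral flatness).
rng = np.random.default_rng(0)
corpus_features = rng.random((500, 3))   # 500 corpus events, 3 features each

# Build the k-dimensional tree over the corpus once, up front.
tree = cKDTree(corpus_features)

def match_event(event_features, k=1):
    """Return indices of the k corpus events nearest (in feature space)
    to a segmented field-recording event."""
    _, idx = tree.query(event_features, k=k)
    return np.atleast_1d(idx)

# A segmented event from the field recording, reduced to the same features.
segment = np.array([0.42, 0.17, 0.88])
nearest = match_event(segment, k=3)      # indices of 3 best-matching sounds
```

Because the tree is built once and queried per segmented event, lookup cost stays logarithmic in the corpus size, which keeps this kind of matching workable in an interactive sound generator.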
Musical Aesthetics of the Natural World: Two Modern Compositional Approaches
Throughout recorded human history, experiences and observations of the natural world have inspired the arts. Within the sonic arts, evocations of nature permeate a wide variety of acoustic and electronic composition strategies. These strategies artistically investigate diverse attributes of nature: tranquility, turbulence, abundance, scarcity, complexity, and purity, to name but a few. Within the 20th century, new technologies to understand these attributes, including media recording and scientific analysis, were developed. These technologies allow music composition strategies to go beyond mere evocation, enabling the construction of musical works that engage explicit models of nature (what has been called ‘biologically inspired music’). This paper explores two such deployments of these ‘natural sound models’ within music and music generation systems created by the authors: an electroacoustic composition using data derived from multi-channel recordings of forest insects (Luna-Mega) and an electronic music generation system that extracts musical events from the different layers of natural soundscapes, in particular oyster reef soundscapes (Stine). Together these works engage a diverse array of extra-musical disciplines: environmental science, acoustic ecology, entomology, and computer science. The works are contextualized with a brief history of natural sound models from pre-antiquity to the present, in addition to reflections on the uses of technology within these projects and the potential experiences of audiences listening to these works.
A Wave-Digital Modeling Library for the Faust Programming Language
Recent advancements have made wave-digital models a popular method for simulating analog audio circuits. Despite this, wave-digital modeling techniques have remained challenging for amateurs to implement due to high model complexity. Our library, WDmodels, provides a straightforward platform for implementing wave-digital models as real-time digital audio effects.
In this paper, we demonstrate how WDmodels is used to implement wave-digital models containing nonlinear dipoles, such as diodes, and linear R-type adaptors. We describe the library-specific implementation of the connection tree, a data structure commonly used when implementing wave-digital models. We also detail the use of common wave-digital adaptors that have already been implemented in the library. We show how the library may be extended to complex wave-digital models through the implementation of custom adaptors. In order to demonstrate the flexibility of the library, we also present implementations of several audio circuits, including the equalization section of the Pultec EQP-1a program equalizer. Finally, we compare benchmarks from WDmodels and a C++ wave-digital modeling library to demonstrate code efficiency.
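To illustrate the connection-tree idea referenced above (this is a hypothetical Python sketch, not the WDmodels/Faust API; the component values and function name are invented), a minimal wave-digital RC lowpass can be written as a tree with an ideal voltage source at the root, a three-port series adaptor, and resistor/capacitor leaves. Each sample, waves propagate from the leaves to the root and back down:

```python
def wdf_rc_lowpass(vs_samples, R=1000.0, C=1e-6, fs=48000.0):
    """First-order RC lowpass as a tiny wave-digital connection tree:
    root = ideal voltage source, child = 3-port series adaptor,
    leaves = resistor and capacitor."""
    Rc = 1.0 / (2.0 * fs * C)   # capacitor port resistance (bilinear transform)
    Rt = 2.0 * (R + Rc)         # total series resistance (port 0 is adapted)
    z = 0.0                     # capacitor state: one-sample wave delay
    out = []
    for vs in vs_samples:
        # Leaves -> root: resistor reflects 0, capacitor reflects its state z;
        # the adapted series adaptor reflects -(sum of leaf waves) at port 0.
        b_up = -(0.0 + z)
        # Root (unadapted ideal voltage source): b = 2*Vs - a.
        a0 = 2.0 * vs - b_up
        # Root -> leaves: series adaptor scattering toward the capacitor.
        s = a0 + 0.0 + z
        b_cap = z - (2.0 * Rc / Rt) * s
        # Capacitor voltage = (incident + reflected)/2; the minus sign comes
        # from the series-loop port orientation (KVL: port voltages sum to 0).
        out.append(-(b_cap + z) / 2.0)
        z = b_cap               # incident wave becomes next sample's reflection
    return out
```

A unit step driven through this model settles to 1.0, as a DC-coupled RC lowpass should, and the recursion's pole matches the bilinear-transform discretization of the analog circuit.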
The Murmurator: A Flocking Simulation-Driven Multi-Channel Software Instrument for Collaborative Improvisation (2018)
This paper describes the Murmurator, a flocking simulation-driven software instrument created for use with multi-channel speaker configurations in a collaborative improvisation context. Building upon previous projects that use natural system models to distribute sound in space, the authors focus on the potentials of this paradigm for collaborative improvisation, allowing performers both to improvise with each other and to adapt to performer-controllable levels of autonomy in the Murmurator. Further, the Murmurator’s facilitation of a dynamic relationship between musical materials and spatialization (for example, having the resonance parameter of a filter applied to a sound depend on its location in space or velocity) is foregrounded as a design paradigm. The Murmurator’s collaborative genesis, relationship to improvisational and multi-channel acousmatic performance practices, software details, and future work are discussed.
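The mapping paradigm described above, simulation state driving sound parameters, can be gestured at with a toy flocking update. This is a hypothetical sketch, not the Murmurator’s code; the cohesion-only update, mapping ranges, and all names are invented for illustration (a real boids model also includes separation and alignment rules).

```python
import math
import random

class Boid:
    """A single agent with a 2-D position and velocity."""
    def __init__(self):
        self.pos = [random.uniform(-1, 1), random.uniform(-1, 1)]
        self.vel = [random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1)]

def cohesion_step(boids, strength=0.01, dt=1.0):
    """One simplified flocking update: steer each boid toward the flock
    centroid, then integrate position."""
    cx = sum(b.pos[0] for b in boids) / len(boids)
    cy = sum(b.pos[1] for b in boids) / len(boids)
    for b in boids:
        b.vel[0] += strength * (cx - b.pos[0])
        b.vel[1] += strength * (cy - b.pos[1])
        b.pos[0] += b.vel[0] * dt
        b.pos[1] += b.vel[1] * dt

def sound_params(boid, max_speed=0.5):
    """Map a boid's state to illustrative sound parameters:
    x-position -> stereo pan in [-1, 1], speed -> filter Q in [0.5, 10]."""
    pan = max(-1.0, min(1.0, boid.pos[0]))
    speed = math.hypot(boid.vel[0], boid.vel[1])
    q = 0.5 + 9.5 * min(speed / max_speed, 1.0)
    return pan, q
```

The key design choice this sketches is that spatial position and musical processing are read from the same agent state, so a boid that accelerates across the speaker field also changes timbre.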
Investigating a Framework on Artistic Uses of Large-Scale Multi-Channel Audio and Video Technologies (2017)
More and more academic and entertainment spaces are making use of high-density loudspeaker arrays (HDLAs) and large-scale, multiple-screen video/projection environments. This paper concerns itself with the development of a shared language between artistic uses of these technologies, primarily within the fields of electroacoustic music and installation art. Both technologies engage with the immersive affordances of ultra-high resolution (in the auditory and visual domains, respectively), and much can be learned by comparing the strategies, successes, and failures of artists and technicians working with either or both technologies. My hope in investigating the interactions of these technologies in creative applications is to lead towards more informed, theoretically- and aesthetically-developed work using them. An historically-informed discussion of the current state and affordances of each technology is presented, followed by examples of work that deeply engages with and/or bridges the divide between the technologies. Next, I present a shared language for the development of artistic works using these technologies, informed and contextualized by ecologically-driven media theories of Cook, Clarke, Jensen, and others. Lastly, future potentials of these technologies viewed through the lens of this shared language (including impacts on the fields of virtual and augmented reality, CAVE systems, and newly proliferating 360º video technologies) are presented.
A Survey of Modern Music Software Ecosystems (2017)
A survey of the design, functionality, various uses, and communities built around modern music software. Includes discussions of music software's role in the creative process, modes of engagement afforded by different software, a list of characteristics of music software, and a brief analysis of four music software packages (SuperCollider, Renoise, Max, and Ocarina).
Estilhaço 1 & 2: Conversations between Sound and Image in the Context of a Solo Percussion Concert (2016)
This paper discusses the pieces Estilhaço 1 and 2, for percussion, live electronics, and interactive video, created collaboratively by Fernando Rocha and Eli Stine. The conception of the pieces (including their artistic goals and the metaphors used), the context in which they were created, their formal structures and their relationship, the technologies used to create both their audio and visual components, and the relationships between the sounds and corresponding images are discussed.