COURSES + WORKSHOPS
Presented as an invited guest lecture at the 2022 Seattle Max Meetup, Oberlin Conservatory, and the University of Oregon.
Talk and demonstration, with accompanying Max code, that introduces what Machine Learning (ML) is using simple linear regression as a first example, then expands into other examples of unsupervised and supervised learning. Next, I show how ML may be applied to musical tasks in Max, using Max externals to 1) map between a gesture and synthesizer presets in real time, 2) map between an input sound and a corpus of audio in real time using concatenative synthesis, and 3) morph the timbre of a sound at the sample level in real time using neural networks trained on corpora of audio. Lastly, some novel explorations of ML in Max are presented.
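By way of illustration, the talk's opening example amounts to fitting a line to data by least squares; a minimal Python sketch of that idea (the talk itself works in Max, so this is only a stand-in for the patch, and the gesture/parameter data below are invented):

```python
import numpy as np

# toy training data: gesture value x -> synthesizer parameter y
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 0.9, 2.1, 2.9, 4.2])

# least-squares fit of y = m*x + b
A = np.vstack([x, np.ones_like(x)]).T
m, b = np.linalg.lstsq(A, y, rcond=None)[0]

# the "model" is just two numbers; prediction is evaluating the line
predict = lambda x_new: m * x_new + b
print(predict(2.5))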
TECH 350: Audio-Visual Composition is an applied course aimed at electronic musicians interested in incorporating computer-made visuals into their art-making practice. We'll start with applied studies, accompanied by readings and example projects, on the history of film, film sound, video art, and animation. These studies will introduce and use video editing software (Premiere, DaVinci Resolve, etc.). We'll then transition to computer-based visual art: studies, again accompanied by readings and historical examples, on the history of computer animation, 3D modeling, and virtual and augmented reality (XR). These studies will make use of Processing, Cinema 4D, and other tools. Along the way we'll look at theories of multimedia: how to define relationships between sound and image, the role of experimental sound and image in political and social movements, and, ultimately, how to engage (or expand your engagement with) audio-visual composition in your praxis.
Digital Signal Processing (DSP) is a branch of engineering that is at the core of the digital media revolution of the past four decades, bringing us advances in audio-visual protocols (the MP3, video compression-decompression schemes, on-demand Internet streaming), sound processing techniques (real-time computer music performance, spectral audio signal analysis and re-synthesis, Music Information Retrieval), and ultimately a redefinition of how the world creates and consumes audio-visual art.
This course will provide a foundation of DSP theory, covering the time domain and the frequency domain (and their representations), discrete time signals, convolution, Finite Impulse Response (FIR) and Infinite Impulse Response (IIR) filter design, the z-Transform, Linear Time-Invariant (LTI) systems and non-linear systems, and the Discrete Fourier Transform (DFT), Fast Fourier Transform (FFT), and Short-Time Fourier Transform (STFT). Take-home exercises will be done with pen-and-paper and/or Matlab (a programming language often used in DSP computing).
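As one illustration of the kind of take-home exercise involved, a student might implement FIR filtering as direct convolution and check it against a library routine; a minimal version sketched here in Python rather than Matlab:

```python
import numpy as np

def fir_filter(x, h):
    """Direct-form FIR: y[n] = sum_k h[k] * x[n-k]."""
    y = np.zeros(len(x) + len(h) - 1)
    for k, hk in enumerate(h):
        y[k:k + len(x)] += hk * x
    return y

# 5-point moving average: a simple FIR lowpass
h = np.ones(5) / 5
x = np.random.randn(64)
assert np.allclose(fir_filter(x, h), np.convolve(x, h))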
Unlike DSP courses housed in science departments, this course will then switch to an applied, creation-centric mode, wherein a sequence of projects will help students define and build a personal set of DSP skills and tools that directly engage their own creative work, extending or deepening other projects undertaken for TIMARA classes or within their own techno-artistic praxis. These projects will make use of the Max programming language (and in particular Max’s [gen~] environment), Python and/or Matlab audio programming libraries, the JUCE application framework (for building audio processing plug-ins), and other applicable tools.
Contemporary electroacoustic composers and computer musicians wear many hats: composer, performer, programmer, critic, theorist, researcher, cultural musicologist, etc. Tech 201—and later Tech 202—provide a solid foundation in many of these areas, particularly electroacoustic music history and studio hardware and software. Tech 203 explores more deeply the musical, technological, and scholarly tools currently available to electroacoustic composers. Some topics will be completely new to you, and others will be related to your previous study in TIMARA. While we will deal extensively with certain hardware* and software, the course is not “about” said tools. Rather, it is organized based on musical/scholarly practices and approaches, each of which will inevitably draw on a variety of tools. All topics will be approached through a combination of reading, listening, and creative work with the ultimate goal of expanding your compositional resources and scholarly vocabulary.
*This course will take advantage of the multi-channel sound diffusion environment available in the TIMARA studios, emphasizing the ability of spatial audio, and of immersive sonic environment design more generally, to uniquely engage extra-musical disciplines: sculptural and installation arts, theater and drama studies, and the environmental sciences, to name but a few.
TECH 101 is an introduction to the creation, technique, analysis, and history of electroacoustic music. This course takes a practice-based approach to learning electroacoustic music, with students applying the concepts and technologies taught in the course (acoustics, microphones, digital audio editing, mixing, synthesizers, virtual instruments, and more) to the composition of their own electroacoustic music. In addition, this course explores a substantial repertoire of electroacoustic music, from a wide range of styles and practices including experimental electronic music, EDM, ambient music, sound art, and beyond. This repertoire functions as a focal point for discussion and the development of listening and analysis skills. No previous experience with electroacoustic music is required, and all technological and artistic practices are welcome.
Electronic drum programming exists in nearly every genre of modern electronic music. This lecture showcases musics that foreground virtuosic use of drum programming technologies (synth drums, drum machines, software drum kits) and guides students through drum programming on their own, starting with simple sequencing, moving on to “humanizing” drum programming, and lastly applying post-programming effects.
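To illustrate the “humanizing” stage (the lecture itself works inside drum machines and DAW software; the pattern, tempo, and jitter amounts below are invented for illustration), a Python sketch that offsets a quantized pattern's onset times and velocities by small random amounts:

```python
import random

STEP = 0.125  # seconds per 16th note at 120 BPM
pattern = [("kick", 0), ("snare", 4), ("kick", 8), ("snare", 12)]  # step indices

def humanize(events, time_jitter=0.01, vel_mean=100, vel_jitter=12):
    """Nudge each event's onset and velocity away from the grid."""
    out = []
    for name, step in events:
        onset = step * STEP + random.uniform(-time_jitter, time_jitter)
        velocity = max(1, min(127, int(random.gauss(vel_mean, vel_jitter))))
        out.append((name, round(onset, 4), velocity))
    return out

print(humanize(pattern))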
This webpage introduces a multi-layer approach to creating interactive digital electronics (and includes an interactive electronics sandbox system programmed in Max), discusses some experiences I have had creating interactive digital systems for use in studio/performance settings, and includes a list of human and software resources for students’ further perusal.
This workshop introduces high school/college level beginners to the fundamentals of electroacoustic music, including a brief discussion of acoustics, recording techniques, digital audio, and synthesis, along with several brief assignments.
Processing is an "open source programming environment for teaching computational design and sketching interactive media software" developed by Ben Fry and Casey Reas.
This educational webpage introduces visitors to Processing fundamentals and some more advanced functionality, and concludes with a set of high-quality code templates I created to get students quickly up and running with several archetypal uses of Processing: sound/video analysis, using shaders, connecting Processing to other programs with OSC, and object-oriented programming in Processing.
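As one example of what the OSC template covers, a minimal sender in Python using the python-osc package (the address names, values, and port here are illustrative and must match whatever the Processing sketch's OSC library is set to listen for):

```python
from pythonosc.udp_client import SimpleUDPClient

# port must match the listening port configured in the Processing sketch
client = SimpleUDPClient("127.0.0.1", 12000)

# send values for the Processing sketch to visualize
client.send_message("/amplitude", 0.75)
client.send_message("/rgb", [255, 128, 0])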
This course explores the ways in which sound interacts with video. We will investigate this interaction by first discussing the history and contexts of electronic sound, video, and their relationship, establishing (or refreshing) audio and video editing skills, and then getting hands-on experience through creative projects. Projects include composing sound design for film, creating video art (that incorporates video-recorded and/or animated materials), and designing real-time interactive media projects. The target student of this course is a musician interested in expanding their relationship to video and multimedia. No experience with audio or video technologies is required, although it is welcome.
RESEARCH
The interactions between different living and non-living agents and their surroundings within an environment are complex, multidimensional, and self-organizing. Within the context of electroacoustic music composition, this web of organic activity presents a fascinating potential to dictate or inform sonic gestures, textures, and the formal structure of a work. The extraction of these interactions for musical use, however, is easier said than done, and requires a method for translating a representation of an environment to a sound-mappable model. In the case of this work, the representation of the environment used is a stereo field recording, a directional sonic record of the (audible) events within an environment. A deconstruction of this model may then be mapped onto other sounds (or sound generators), ultimately creating a “sonification of a sound”, a map from a transcription of the sonic interactions of an environment onto an entirely different sound world. Using methodologies at the intersection of bioacoustics and music information retrieval, the author designed and implemented a software system, AcousTrans, that facilitates such a mapping process. Segmented events within a stereo sound recording are intelligently mapped onto multi-channel sound events from another corpus of sounds using a k-nearest neighbors search of a k-dimensional tree constructed from an analysis of acoustic features of the corpus. The result is an interactive sound generator that injects the organicism of environmental soundscape recordings into the sequencing, processing, and composing of immersive electroacoustic music. This work was created within the context of bioacoustic analysis of intertidal oyster reefs, but is applicable to any environmental soundscape that may be effectively decomposed using the described method.
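The matching step described above can be sketched in a few lines of Python. This is an illustrative reconstruction (using librosa MFCC features, SciPy's k-d tree, and a hypothetical "corpus/" directory of sound files), not AcousTrans's actual feature set or source code:

```python
import glob
import numpy as np
import librosa
from scipy.spatial import cKDTree

def features(y, sr):
    """One acoustic-feature vector per sound (MFCC means, illustrative)."""
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

# build a k-d tree over the acoustic features of the corpus
corpus_paths = sorted(glob.glob("corpus/*.wav"))
corpus = [librosa.load(p, sr=44100, mono=True)[0] for p in corpus_paths]
tree = cKDTree(np.array([features(y, 44100) for y in corpus]))

def match(segment, sr=44100, k=3):
    """Map one segmented field-recording event to its k nearest corpus sounds."""
    _, idx = tree.query(features(segment, sr), k=k)
    return [corpus_paths[i] for i in np.atleast_1d(idx)]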
Throughout recorded human history, experiences and observations of the natural world have inspired the arts. Within the sonic arts, evocations of nature permeate a wide variety of acoustic and electronic composition strategies. These strategies artistically investigate diverse attributes of nature: tranquility, turbulence, abundance, scarcity, complexity, and purity, to name but a few. Within the 20th century, new technologies to understand these attributes, including media recording and scientific analysis, were developed. These technologies allow music composition strategies to go beyond mere evocation and to allow for the construction of musical works that engage explicit models of nature (what has been called ‘biologically inspired music’). This paper explores two such deployments of these ‘natural sound models’ within music and music generation systems created by the authors: an electroacoustic composition using data derived from multi-channel recordings of forest insects (Luna-Mega) and an electronic music generation system that extracts musical events from the different layers of natural soundscapes, in particular oyster reef soundscapes (Stine). Together these works engage a diverse array of extra-musical disciplines: environmental science, acoustic ecology, entomology, and computer science. The works are contextualized with a brief history of natural sound models from pre-antiquity to the present in addition to reflections on the uses of technology within these projects and the potential experiences of audiences listening to these works.
Recent advancements have made wave-digital models a popular method for simulating analog audio circuits. Despite this, wave-digital modeling techniques have remained challenging to implement for amateurs due to high model complexity. Our library provides a straightforward platform for implementing wave-digital models as real-time digital audio effects.
In this paper, we demonstrate how WDmodels is used to implement wave-digital models containing nonlinear dipoles, such as diodes, and linear R-type adaptors. We describe the library-specific implementation of the connection tree, a data structure commonly used when implementing wave-digital models. We also detail the use of common wave-digital adaptors that have already been implemented in the library. We show how the library may be extended to complex wave-digital models through the implementation of custom adaptors. In order to demonstrate the flexibility of the library, we also present implementations of several audio circuits, including the equalization section of the Pultec EQP-1a program equalizer. Finally, we compare benchmarks from WDmodels and a C++ wave-digital modeling library to demonstrate code efficiency.
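To give a flavor of the connection-tree approach, here is a generic, textbook-style wave-digital model in Python: a first-order RC lowpass with an ideal voltage source at the root of the tree and resistor and capacitor leaves beneath a three-port series adaptor. This is an illustrative sketch only, not WDmodels' API or code:

```python
import numpy as np

def wdf_rc_lowpass(x, R=1e3, C=1e-6, fs=48000):
    """First-order RC lowpass as a wave-digital filter.

    Connection tree: ideal voltage source (root) <- 3-port series
    adaptor <- { resistor R, capacitor C } (leaf elements).
    """
    Rc = 1.0 / (2.0 * fs * C)      # capacitor port resistance (bilinear)
    Rtot = 2.0 * (R + Rc)          # adapted series adaptor: R0 = R + Rc
    z = 0.0                        # capacitor state: last incident wave
    y = np.zeros(len(x))
    for n, e in enumerate(x):
        # waves travelling up the tree (leaves -> root)
        a1, a2 = 0.0, z            # adapted resistor reflects 0; cap reflects state
        b0 = -(a1 + a2)            # adapted series port: independent of a0
        a0 = 2.0 * e - b0          # root: ideal voltage source reflection
        # scatter back down the tree (root -> leaves)
        S = a0 + a1 + a2
        b2 = a2 - (2.0 * Rc / Rtot) * S
        z = b2                     # wave into the capacitor becomes next state
        y[n] = -0.5 * (a2 + b2)    # port voltage across C (loop-orientation sign)
    return y

out = wdf_rc_lowpass(np.random.randn(48000))  # ~159 Hz cutoff with the defaults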
This paper describes the Murmurator, a flocking simulation-driven software instrument created for use with multi-channel speaker configurations in a collaborative improvisation context. Building upon previous projects that use natural system models to distribute sound in space, the authors focus on the potentials of this paradigm for collaborative improvisation, allowing performers to improvise both with each other and to adapt to performer-controllable levels of autonomy in the Murmurator. Further, the Murmurator’s facilitation of a dynamic relationship between musical materials and spatialization (for example, having the resonance parameter of a filter applied to a sound depend on its location in space or velocity) is foregrounded as a design paradigm. The Murmurator’s collaborative genesis, relationship to improvisational and multi-channel acousmatic performance practices, software details, and future work are discussed.
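The design paradigm of coupling musical materials to flock motion can be sketched generically; below is a minimal boids update in Python with one such coupling (speed mapped to filter resonance). The rules, gains, and mapping are invented for illustration and are not the Murmurator's actual implementation:

```python
import numpy as np

N = 16                                   # one boid per sound source
pos = np.random.uniform(-1, 1, (N, 2))   # positions in a 2-D speaker plane
vel = np.random.uniform(-0.1, 0.1, (N, 2))

def step(pos, vel, dt=0.02):
    """One flocking update: cohesion, alignment, separation (simplified)."""
    cohesion = (pos.mean(axis=0) - pos) * 0.5
    alignment = (vel.mean(axis=0) - vel) * 0.3
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=2) + 1e-9
    separation = (diff / dist[..., None] ** 2).sum(axis=1) * 0.01
    vel = vel + (cohesion + alignment + separation) * dt
    return pos + vel * dt, vel

pos, vel = step(pos, vel)

# couple musical material to motion: filter resonance follows each boid's speed
speed = np.linalg.norm(vel, axis=1)
resonance = np.interp(speed, [0.0, 0.5], [0.5, 20.0])  # one Q value per source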
More and more academic and entertainment spaces are making use of high-density loudspeaker arrays (HDLAs) and large-scale, multi-screen video/projection environments. This paper concerns itself with the development of a shared language between artistic uses of these technologies, primarily within the fields of electroacoustic music and installation art. Both technologies engage with the immersive affordances of ultra-high resolution (in the auditory and visual domains, respectively) and much can be learned by comparing the strategies, successes, and failures of artists and technicians working with either or both technologies. My hope in investigating the interactions of these technologies in creative applications is to lead towards more informed, theoretically- and aesthetically-developed work using them. A historically informed discussion of the current state and affordances of each technology is presented, followed by examples of work that deeply engages with and/or bridges the divide between the technologies. Next, I present a shared language for the development of artistic works using these technologies, informed and contextualized by ecologically-driven media theories of Cook, Clarke, Jensen, and others. Lastly, future potentials of these technologies viewed through the lens of this shared language (including impacts on the fields of virtual and augmented reality, CAVE systems, and newly proliferating 360º video technologies) are presented.
A survey of the design, functionality, various uses, and communities built around modern music software. Includes discussions of music software's role in the creative process, modes of engagement afforded by different softwares, a list of characteristics of music software, and a brief analysis of four music softwares (SuperCollider, Renoise, Max, and Ocarina).
This paper discusses the pieces Estilhaço 1 and 2, for percussion, live electronics, and interactive video, created collaboratively by Fernando Rocha and Eli Stine. The conception of the pieces (including the artistic goals and metaphors used), the context in which they were created, their formal structures and their relationship, the technologies used to create both their audio and visual components, and the relationships between the sounds and corresponding images are discussed.