Interactive audio-visual installation written in Max, with an OSC connection to TouchDesigner and a Node.js server that lets participants download the music they make with their motion. Premiered at the 2022 Sideways Festival in Helsinki and installed at the Finnish Museum of Technology from July to September 2023.
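As a rough illustration of the control link, here is a minimal OSC sender sketched in Python with the python-osc package; the actual installation sends from a Max patch, and the address patterns, port, and message layout below are invented for the example.

```python
# Minimal sketch of the Max -> TouchDesigner OSC link, re-expressed in
# Python with python-osc. Addresses, port, and message layout are
# hypothetical; the real installation sends this data from Max.
from pythonosc.udp_client import SimpleUDPClient

TD_HOST, TD_PORT = "127.0.0.1", 7000  # assumption: TouchDesigner on the same machine

client = SimpleUDPClient(TD_HOST, TD_PORT)

def send_motion_frame(x: float, y: float, energy: float) -> None:
    """Forward one frame of participant motion data as OSC messages."""
    client.send_message("/motion/position", [x, y])  # hypothetical address
    client.send_message("/motion/energy", energy)    # hypothetical address

send_motion_frame(0.42, 0.77, 0.18)
```

On the TouchDesigner side, an OSC In CHOP (or OSC In DAT) would pick these messages up and drive the visuals.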
EcoBobbles (2022-) →
Ecologically informed VST Plug-ins and M4L Devices for sound creation and manipulation.
Extensions of my 2019 dissertation, Modeling Natural Systems in Immersive Electroacoustic Music.
Machine Learning in Max (2022-2024) →
A set of pedagogical programs, written mostly in Max, that introduce machine learning and its affordances within the language, covering unsupervised and supervised learning, concatenative synthesis, and neural networks for raw audio generation.
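As a taste of the supervised-learning side of the series, here is a minimal k-nearest-neighbor classifier sketched in Python rather than Max; the feature vectors and labels are invented, and nothing here is taken from the actual teaching patches.

```python
# A minimal k-nearest-neighbor classifier, illustrating the kind of
# supervised learning the series introduces (e.g. classifying gestures
# or timbres from feature vectors). Data and labels are invented.
import numpy as np

def knn_classify(train_X, train_y, query, k=3):
    """Label `query` by majority vote among its k nearest training points."""
    dists = np.linalg.norm(train_X - query, axis=1)  # Euclidean distances
    nearest = np.argsort(dists)[:k]                  # indices of the k closest
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy 2-D feature vectors (say, spectral centroid and loudness), two classes.
X = np.array([[0.10, 0.20], [0.15, 0.25], [0.90, 0.80], [0.85, 0.90]])
y = np.array(["soft", "soft", "bright", "bright"])
print(knn_classify(X, y, np.array([0.80, 0.85])))  # -> "bright"
```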
AudioSketcher (2019-2020)
Written in Processing; a work in progress.
ObieVerb (2020) →
Drop a recording onto the app, select one of Oberlin’s concert halls, pick the amount of reverb added, and export your recording as it would sound in that hall! Made during the COVID-19 pandemic for the Oberlin community, in collaboration with Andrew Tripp.
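Hall-selection workflows like this are typically built on convolution with measured impulse responses; assuming that is what ObieVerb does (the actual implementation isn't described here), a minimal Python sketch with hypothetical file names might look like:

```python
# Convolution-reverb sketch: convolve a recording with a hall impulse
# response (IR), then blend wet and dry. File names and the mix amount
# are hypothetical; ObieVerb's real internals may differ.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("recording.wav")          # hypothetical input file
if dry.ndim > 1:
    dry = dry.mean(axis=1)                  # fold to mono for simplicity
ir, ir_sr = sf.read("concert_hall_ir.wav")  # hypothetical hall IR
if ir.ndim > 1:
    ir = ir.mean(axis=1)
assert sr == ir_sr, "resample the IR to match the recording first"

wet = fftconvolve(dry, ir, mode="full")[: len(dry)]  # trim the tail to input length
peak = np.max(np.abs(wet))
if peak > 0:
    wet /= peak                             # normalize the wet signal

mix = 0.4                                   # the "amount of reverb added"
out = (1 - mix) * dry + mix * wet
sf.write("recording_in_hall.wav", out, sr)
```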
Listening to the Virginia Barrier Islands (2018-2019)
A web-based tool that lets listeners virtually navigate the bird sounds of two of the Virginia Barrier Islands. Made in collaboration with Becky Brown.
AcousTrans (2019)
AcousTrans (Acousmatic Translator) lets a user load a source stereo audio file (a field recording or other environmental recording) and a destination corpus of audio files, then interactively map the events, gestures, and structure of the source onto the destination. The result is a stereo or multi-channel audio file with gestural, rhythmic, and/or structural similarities to the source file, but with entirely different timbral characteristics: those of the destination corpus.
After a process of filtering and segmentation, the acoustic features embedded in each source event may be used to select a similar sound within the user-selected destination corpus via concatenative sound synthesis. Using a k-nearest-neighbors search on a k-dimensional tree constructed from the acoustic features of the segments of each audio file in the corpus, the feature vector of a source event is mapped to the most similar sound within the destination corpus. The weighting of the features can also be customized, which is useful for “tuning” the system to the particular source and destination sounds (for example, de-emphasizing fundamental-frequency estimation when using only sounds with no clear pitch center).
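The matching step lends itself to a short sketch. Below is one possible rendering in Python using SciPy's cKDTree, standing in for AcousTrans's internals; the feature layout, weights, and corpus data are invented. Scaling each feature column by its weight before building and querying the tree yields the weighted Euclidean distance described above.

```python
# Sketch of the matching step: weighted k-NN lookup of a source segment's
# features in a k-d tree built from the destination corpus. Feature names,
# weights, and corpus arrays are placeholders for illustration.
import numpy as np
from scipy.spatial import cKDTree

# Rows: corpus segments; columns: e.g. [centroid, flatness, f0, loudness].
corpus_features = np.random.rand(500, 4)   # placeholder destination corpus
weights = np.array([1.0, 1.0, 0.0, 1.0])   # de-emphasize f0 for unpitched material

tree = cKDTree(corpus_features * weights)  # build the weighted tree once

def match_segment(source_features: np.ndarray, k: int = 1) -> np.ndarray:
    """Return indices of the k corpus segments nearest the source segment."""
    _, idx = tree.query(source_features * weights, k=k)
    return np.atleast_1d(idx)

print(match_segment(np.array([0.3, 0.7, 0.5, 0.2]), k=3))
```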
The Murmurator (2017-2018)
Flocking Simulation-Driven Multi-Channel Software Instrument for Collaborative Improvisation
AcousMIDI (2018)
MIDI to Acousmatic Sound Mapper
Spatial Ear Trainer (2018)
Software to Test and Train Spatialization Perception
Meditative Sound Reactive Visualizer (2017)
Built for a showing of Brian Eno’s Reflection.
Deluxe Vectorscope (2016)
Highly Customizable Vectorscope-Based Audio Visualizer
Starboard Projection-Mapped Visualizer (2016)
Visualizer for Custom-Built Instrument
Pitter (2015)
Rhythmic Probabilistic Sample Player
Read More"Elements" Performance Software (2015)
Interactive Sound-Processing and Performance Software for my String Quartet “Elements”
Sound Vision (2014)
Colorful Stereo Acousmatic Music Visualizer
Transference (2013)
Motion-Influenced Audio-Visual Software
Chiclets (2010)
Customizable Granular Synthesizer