Machine Learning in Max (2022-2024)

A set of pedagogical programs written (mostly) in Max that introduce machine learning and its affordances within the Max programming language (including unsupervised and supervised learning, concatenative synthesis, and neural networks for raw audio generation).

ObieVerb (2020)

Drop a recording onto the app, select one of Oberlin’s concert halls, pick the amount of reverb added, and export your recording in that hall! Made during the COVID-19 pandemic for the Oberlin community, in collaboration with Andrew Tripp.

AcousTrans (2019)

AcousTrans (Acousmatic Translator) allows a user to load in a source stereo audio file (field recording or other environmental recording) and a destination corpus of audio files and interactively map the events, gestures, and structure of the source onto the destination. What results is a stereo or multi-channel audio file with gestural, rhythmic, and/or structural similarities to the source file, but with entirely different timbral characteristics: those of the destination corpus.

After a process of filtering and segmentation, the acoustic features embedded in each source event may be used to select a similar sound within a user-selected destination corpus via concatenative sound synthesis. Using a k-nearest neighbors search on a k-dimensional tree constructed from the acoustic features of the segments of each audio file in the destination corpus, the feature subvector of a source event is mapped to the most similar sound in that corpus. The weighting of the features can also be customized, which is useful for "tuning" the system to the particular source and destination sounds (for example, de-emphasizing fundamental frequency estimation when working only with sounds that have no clear pitch center).
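The matching step above can be sketched in a few lines of Python. This is a minimal illustration, not AcousTrans's actual implementation: the feature names, filenames, and values are hypothetical, and it uses a brute-force nearest-neighbor search rather than a k-d tree. Note that scaling each feature dimension by a weight before measuring Euclidean distance is equivalent to a weighted distance metric, which is how the feature-weighting "tuning" can be realized.

```python
import math

# Hypothetical feature vector for one segmented source event:
# [loudness, spectral centroid (Hz), estimated fundamental (Hz)].
source_event = [0.8, 2200.0, 440.0]

# Hypothetical destination corpus: one feature vector per segment.
corpus = {
    "bell.wav":   [0.7, 2500.0, 880.0],
    "scrape.wav": [0.9, 2100.0, 0.0],
    "hum.wav":    [0.2, 300.0, 110.0],
}

# Per-feature weights. Setting the pitch weight to zero de-emphasizes
# fundamental-frequency estimation, as suggested for unpitched material.
weights = [1.0, 1.0, 0.0]

def weighted_distance(a, b, w):
    """Euclidean distance with per-dimension weights."""
    return math.sqrt(sum(wi * (ai - bi) ** 2 for ai, bi, wi in zip(a, b, w)))

def best_match(event, corpus, w):
    # Brute-force 1-nearest-neighbor; a k-d tree built on weight-scaled
    # features would return the same answer faster for a large corpus.
    return min(corpus, key=lambda name: weighted_distance(event, corpus[name], w))

print(best_match(source_event, corpus, weights))
```

With the pitch feature weighted to zero, the loud, bright "scrape.wav" segment wins over the pitch-matched but spectrally distant alternatives, which is exactly the kind of behavior the weighting controls are for.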