Hidden Agents, Manifested

Working on Floog Dance

With the arrival of Data Knot as a Max package, and the end of my stewardship of a Moog that a colleague loaned me, I was motivated to add just one more track, Floog Dance, to my Moog album Fracas*. It also gave me a chance to experiment with various generative agents working with machine listening and learning. It's more of a pop-music piece, with a beat, but I think it's worth mentioning how I am using machine learning and listening in this context.

My go-to lately has been to have a hidden, overarching (I don't want to say controlling) voice/agent that brings a work together in terms of pitch and rhythm. In this piece, I had two agents: a recorded Moog improv tuned to Partch43 (below referred to as the Partch Moog), and a recorded (digital) drum set improv. The Moog ended up not being entirely hidden, surfacing at times. But both agents were constantly there, providing pitch and rhythm behind the scenes for the other voices.

In works that don't have a live performer (such as this quasi-pop song) there is a lot of performance that precedes the final mixing process. Since both of the tuning systems I used are based in B-flat, the polytonality kind of fits. As a flutist, 43 notes per octave are tricky, but I arranged it so that the 43-note tuning system only chooses 12 notes per octave, which include tolerable variants of the intonation of B-flat, E-flat, A-flat and F - notes that I can easily reproduce on the bass flute. All of the materials in this piece come from my own improvisations. I put them together to record a "performance" version of the piece (manipulating the improv recordings, the concatenative synths, and the corpus players via MIDI controller), and then trimmed and mixed that performance.
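Purely as an illustration of that 12-out-of-43 selection (nothing here comes from the actual patch; the scale is assumed to be available as a list of cents values above B-flat, and the targets are plain 12-TET pitch classes), a small Python sketch:

```python
# A minimal sketch of the 12-from-43 idea, not the actual Max patch.
# "scale_cents" is assumed to hold the 43 cents values of the tuning above B-flat.

def choose_subset(scale_cents, targets_cents):
    """For each target pitch class, keep the scale degree closest to it."""
    return [min(scale_cents, key=lambda c: abs(c - t)) for t in targets_cents]

twelve_tet_targets = [i * 100 for i in range(12)]  # 0 = B-flat, 100 = B, ...
# partch_43 = [...]  # e.g. loaded from a .scl file of the tuning
# playable_subset = choose_subset(partch_43, twelve_tet_targets)
```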

So where does the machine learning/listening come in?

The (recorded) Partch Moog performance went into two concatenative synth players (dk.concatsynth), one with a corpus of my bass flute playing (improv in B-flat pentatonic minor), the other with a corpus of Moog sounds. Real-time descriptor analysis runs on the recorded Partch Moog as it plays on a loop. The corpora were pre-analyzed and their data (descriptors) fit to a kd-tree. The incoming sound from the Partch Moog is then transferred via descriptor matching to the corpora of my flute and Moog noises. The corpora then "play along" with the Partch Moog (which I keep silent most of the time).
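For readers who want the mechanics behind descriptor matching, here is a rough Python sketch of the general idea only - not dk.concatsynth itself - where the corpus descriptors are assumed to be pre-computed (one row per grain, file name made up) and the live analysis hands over one descriptor frame at a time:

```python
import numpy as np
from scipy.spatial import KDTree

# Pre-analyzed corpus: one row per grain, columns are descriptors
# (e.g. pitch, loudness, spectral centroid). File name is hypothetical.
corpus_descriptors = np.load("flute_corpus_descriptors.npy")
tree = KDTree(corpus_descriptors)

def match_frame(live_descriptor_frame):
    """Return the index of the corpus grain whose descriptors sit
    nearest (Euclidean) to the incoming live frame."""
    _, index = tree.query(live_descriptor_frame)
    return index  # the player would then trigger this grain
```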

With the rhythm it is similar. The (recorded) drum set improvisation (digital, with a drum rack from Ableton based on the Max-for-Live Poli) went through a filter into two corpus players (dk.corpusplayer). Like the concatenative synth player, the corpus player also maps real-time incoming descriptors with a kd-tree, but it uses onset detection instead of a continuous stream of live descriptors, which is better for percussion and short, discrete sounds. The less bright sounds went into a corpus made with an improvisation of the same drum set, but much wilder; the bright sounds went into a corpus of another (recorded) Moog noise performance. It might seem strange that I fed a corpus some of its own sounds, but with the filter in place replacing the bright sounds with Moog sounds, it did not make an exact replication of itself. (The corpus had more variegated, shuffled sounds than the original.)
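The sketch below shows that onset-based variant in the same spirit (again not dk.corpusplayer; the file names, the single-descriptor choice, and the brightness threshold are all made up for illustration): detect onsets in the drum recording, describe each one, and route it by brightness to one of two pre-analyzed corpora.

```python
import numpy as np
import librosa
from scipy.spatial import KDTree

# Load the (recorded) drum improvisation and find its onsets.
y, sr = librosa.load("drum_improv.wav", sr=None)  # hypothetical file
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")

def onset_descriptor(y, sr, start, win=2048):
    """One number per onset: the mean spectral centroid of a short
    window after the onset (a stand-in for richer descriptors)."""
    frame = y[start:start + win]
    return np.array([librosa.feature.spectral_centroid(y=frame, sr=sr).mean()])

live = np.array([onset_descriptor(y, sr, s) for s in onsets])

# Two pre-analyzed corpora: the wilder drum improv and the Moog noises.
drum_tree = KDTree(np.load("wild_drums_descriptors.npy"))  # hypothetical files
moog_tree = KDTree(np.load("moog_noise_descriptors.npy"))

BRIGHTNESS_SPLIT = 3000.0  # Hz, arbitrary threshold standing in for the filter
for desc in live:
    tree = moog_tree if desc[0] > BRIGHTNESS_SPLIT else drum_tree
    _, match = tree.query(desc)
    # ...trigger corpus slice `match` from whichever corpus was chosen...
```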

To get the drive of the bass, I used a modified KS (Karplus-Strong) patch I gleaned from Taylor and Wakefield. Taking the beats from the (recorded) drum set performance, I drove a sequencer to produce the bass sounds. For this I used a very pared-down version of dk.sequencer, using only its objects that record incoming beats, time them, scramble them, and send them out as triggers. I did something similar for the pitch, taking the last 12 pitches from the Partch Moog, scrambling them, and mapping them onto the sequencer beat.
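To make that concrete, a bare-bones Python sketch - not the Taylor and Wakefield patch, just a plain Karplus-Strong pluck plus the record/scramble/retrigger idea, with made-up onset times and pitch values:

```python
import random
import numpy as np

def karplus_strong(freq, sr=44100, dur=0.5, damping=0.996):
    """Very plain Karplus-Strong pluck: a noise burst circulating in a
    delay line through a damped two-point averaging filter."""
    n, delay = int(sr * dur), max(2, int(sr / freq))
    buf = np.random.uniform(-1.0, 1.0, delay)
    out = np.zeros(n)
    for i in range(n):
        out[i] = buf[i % delay]
        buf[i % delay] = damping * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out

# Made-up stand-ins: onset times (seconds) recorded from the drum performance
# and the last 12 pitches (Hz) taken from the Partch Moog.
onset_times = [0.0, 0.5, 0.75, 1.25, 1.5, 2.0, 2.25, 2.75, 3.0, 3.5, 3.75, 4.0, 4.5]
last_12_pitches = [58.3, 61.7, 65.4, 69.3, 77.8, 82.4,
                   87.3, 92.5, 103.8, 110.0, 116.5, 123.5]

# Record the inter-onset intervals, scramble them, scramble the pitches,
# then lay the plucks out on the scrambled grid.
intervals = np.diff(onset_times).tolist()
random.shuffle(intervals)
random.shuffle(last_12_pitches)

sr = 44100
bass_line = np.zeros(int(sr * (sum(intervals) + 1)))
t = 0.0
for interval, freq in zip(intervals, last_12_pitches):
    pluck = karplus_strong(freq, sr)
    start = int(t * sr)
    bass_line[start:start + len(pluck)] += pluck
    t += interval
```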

*I publish pop-music-sounding stuff and Max for Live devices under the name Affe Zwei.