Today's algorithmic composition tutorial looks at manipulating a tone row in Max and PureData to generate musical material. We'll also look at a technique that's useful for developing the musical sketches we've generated so far into more fully formed compositions in Pd and Max.
Jump to the end of the post to hear some sample algorithmic music output from this patch.
As with yesterday's OpenMusic tutorial we're using the tone row from Berg's Violin Concerto:
G, Bb, D, F#, A, C, E, G#, B, C#, Eb, F
You can use any tone row of your choosing. To start with, we'll define our tone row in a table in PureData:
In PureData the tabread object is used to read the table data:
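As a rough sketch of what the table and tabread are doing (Python standing in for the patch, not actual Pd code), the row can be stored as pitch classes and read by index, with a mod so the counter can wrap back to the start of the row:

```python
# Sketch of the tone-row table. The row is stored as pitch classes
# (0 = C); list indexing models [tabread], and the mod lets the
# counter run past the end of the row and cycle back around.

TONE_ROW = [7, 10, 2, 6, 9, 0, 4, 8, 11, 1, 3, 5]  # G Bb D F# A C E G# B C# Eb F

def read_row(index):
    """Mimic [tabread]: return the pitch class at the given counter value."""
    return TONE_ROW[index % len(TONE_ROW)]
```

Calling `read_row(0)` gives 7 (G), and `read_row(12)` wraps back around to G again.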
To start with, we'll add in some rhythmic variety. If you want the rhythms to fit a strict metre it's best to define the possible rhythms in a table. A previous algorithmic composition tutorial looks at working with rhythm in this way. We'll allow the tempo to be freer by not sticking to a strict metre and using random rest and note onsets.
Here we've added a 50/50 chance of each metro count triggering a note or leaving a rest. If random outputs a 0, the select object matches it, advancing the counter, triggering the note and choosing a new random metro time. If random outputs anything other than 0, a new metro time is chosen but no note is triggered. You can change the probability of a note or a rest by changing the value in the first random box. In PureData the patch now looks like this:
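The note-or-rest decision above can be sketched like this in Python (the `rest_weight` name is mine, standing in for the value in the first random box):

```python
import random

def metro_tick(rest_weight=2):
    """One metro tick. random.randrange(rest_weight) plays the role of
    the [random] object; [select 0] matches only 0, so a 0 triggers a
    note. With rest_weight=2 there is a 50/50 chance of note vs rest;
    raising rest_weight makes rests more likely."""
    if random.randrange(rest_weight) == 0:
        return "note"   # advance the counter, play a pitch, pick a new metro time
    return "rest"       # pick a new metro time only
```

With `rest_weight=1` every tick is a note; larger values thin the texture out.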
Next we'll add the possibility of playing through the tone row harmonically, with a further random object that chooses how many of the next tone-row pitches to play at a time, in this case from 1 to 4. This random object feeds into an uzi object in Max and an until object in PureData. We've also added another random object that chooses a random transposition of the tone row on each play-through:
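A sketch of that chord logic, assuming the same pitch-class row as before (again Python modelling the patch, not patch code):

```python
import random

TONE_ROW = [7, 10, 2, 6, 9, 0, 4, 8, 11, 1, 3, 5]  # G Bb D F# A C E G# B C# Eb F

counter = 0  # position in the row, advanced as pitches are consumed

def next_event(max_voices=4):
    """[random 4] chooses 1-4 pitches, the uzi/until loop pulls that
    many consecutive row pitches, and a second [random] transposes the
    whole event by 0-11 semitones."""
    global counter
    voices = random.randint(1, max_voices)   # uzi/until loop count
    transpose = random.randrange(12)         # random transposition
    pitches = []
    for _ in range(voices):
        pitches.append((TONE_ROW[counter % 12] + transpose) % 12)
        counter += 1
    return pitches
```

Note that the counter keeps its place in the row across events, so chords always use consecutive row pitches rather than restarting the row.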
In PureData (the blue selected objects are the new ones to add):
At the moment all of the pitches occur in the same octave, which gives very close voicings when many notes play at the same time. Since each pitch class from the tone row can occur in any octave, we'll add a random octave amount to each note. Add this subpatch to your main patch in PureData:
and make the p octave subpatch look like this:
Now that we have a patch that can generate music from a tone row, we have control over a number of different musical parameters. We can change:
- the maximum number of notes that sound at a time, by altering the random object that feeds into uzi in Max or until in PureData
- the min and max velocity by altering our velocity subpatch
- the probability of notes or rests occurring by changing the value of the first random object
- the octave distribution by altering our octave subpatch
This gives us a good range of musical control, allowing us to develop the light and shade of a composition: from, say, sparse high notes to dense, fast, closely spaced chords. Rather than editing the patch as we go, we can get more control by using sends and receives.
Here in PureData I've put the whole of our patch so far in a 'pd tone-row' subpatch and used a metro object connected to a counter to keep time. This feeds into a select object: as each specified time is reached, a range of values is sent out to our patch. For example, at 1 second the subpatch begins to play with 1 voice, the note density is set to 5 (a high probability of rests over notes) and a small octave range is used. As each time interval is reached the generated parts become busier, before returning to a sparse melody to finish, and the patch is turned off at 60 seconds:
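The master metronome/counter/select structure amounts to a simple timed score. A sketch of the idea, with illustrative section times and parameter values (not the exact ones in the patch):

```python
# Each entry plays the role of one [select] match sending messages to
# the tone-row subpatch via receives. Times and values are assumed for
# illustration.
SCORE = {
    1:  {"voices": 1, "density": 5, "octave_range": 1},   # sparse opening
    15: {"voices": 3, "density": 2, "octave_range": 2},   # busier middle
    45: {"voices": 1, "density": 4, "octave_range": 1},   # sparse ending
    60: {"running": False},                               # turn the patch off
}

def tick(second, state):
    """One metro pulse per second: apply any parameter changes due now."""
    state.update(SCORE.get(second, {}))
    return state

state = {"running": True}
for second in range(61):
    state = tick(second, state)
```

Keeping the score data separate from the sound-generating patch is what makes it easy to reshape the piece: you only edit the table of times and values, never the patch itself.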
In Max the procedure is very similar with a metro outputting pulses every second, a counter outputting the passing seconds and a select triggering the various messages:
You may have noticed that a few extra receives have been used in the Max and PureData patches:
- r t-factor multiplies the metro value, allowing tempo changes
- r legato multiplies the note length, allowing staccato and legato effects
- r p-min sets the minimum MIDI pitch, allowing the composition to change pitch range over time
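One simple reading of how those three receives act on a note event, sketched in Python with assumed base values (the base metro time, base duration and the clamping of p-min are my assumptions, not taken from the patch):

```python
def note_event(base_metro_ms=250, base_dur_ms=200, pitch=60,
               t_factor=1.0, legato=1.0, p_min=48):
    """t_factor scales the metro period (tempo changes), legato scales
    the note duration (staccato below 1.0, overlapping legato above),
    and p_min keeps the pitch at or above a minimum MIDI value."""
    metro_ms = base_metro_ms * t_factor
    dur_ms = base_dur_ms * legato
    pitch = max(pitch, p_min)
    return metro_ms, dur_ms, pitch
```

So sending a t-factor of 2.0 halves the tempo, and a legato of 0.5 clips each note to half its length for a staccato feel.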
You can hear some sample output from the patch here:
Using a master metronome and counter together with sent messages allows you to change the type of musical material your patch generates over time. This is a good way of developing your patches from musical sketches into more fully formed compositions, and of thinking about the meta-composition: the larger structures and flow of the piece.
If serialism isn't your thing similar approaches can be taken with more conventional harmonic material and metric structure.
After getting the patch to work, play around with your own settings and modifications, and check back soon for another algorithmic composition tutorial!