Showing posts with label pd. Show all posts

Tuesday, 12 January 2016

Sonification - Algorithmic Composition


Today's algorithmic composition tutorial uses sonification as a composition tool. Sonification takes data that is typically non-musical and remaps it to musical parameters to create a composition.

Sonification can be used to hear information in a set of data that might otherwise be difficult to perceive; common examples include Geiger counters, sonar and medical monitoring (ECG). When creating sonification algorithmic compositions we are interested in creating interesting or aesthetically pleasing sounds and music by mapping non-musical data directly to musical parameters.

Here is some example output:


The Sonification Process:
There are four simple steps involved in creating a sonification composition.
1. Find some interesting data
2. Decide which musical parameters you want to map the data to
3. Fit the input data to the correct range for your chosen musical parameters (normalise)
4. Output the remapped data to MIDI synths, audio devices etc.

Step 1 – Find an interesting set of data
First of all we need to source a data set. You can use any data you like: stock markets, global temperatures, population changes, census data, economic information, record sales, server activity records, sports data, or any other set of numbers you can lay your hands on. Here are some possible sources: Research portal, JASA data, UK govt data
 
It's important to select your source data carefully. Data that has discernible patterns or interesting contours works particularly well for sonifications. Data that is largely static and unchanging will not be interesting; similarly, data that is noise-like is usually of limited use.

Here we've loaded some example data into a coll object in PureData

And similarly in Max:

[screenshot: Max patch with example data in a coll object]

Step 2 - Map Data to Musical Parameters
The second step is the most creative of the sonification process. It involves making creative decisions about which musical parameters to map the data to e.g. pitch, rhythm, timbre and so on. 

This is a more involved question than it initially appears: if we choose to map our data to pitch, we also have to choose how those pitches will be represented:
 
Frequency (20Hz - 20kHz)
•    Useful for direct control of synths, filter cutoffs etc., but not intuitively musical; typically needs conversion if working with common musical scales.
MIDI notes (0 - 127)
•    Assumes a 12-note division of the octave, with no representation of C# versus Db.
MIDI cents
•    OpenMusic uses a midicents representation. This is equivalent to MIDI notes * 100, so middle C is 6000.
•    This enables microtonal music and alternative divisions of the octave, for example dividing the octave into 17.
Pitch class
•    Octaves are equivalent, so C is the same pitch class in every octave.
Scale degree
•    Assumes use of a scale and returns the degree of that scale.
•    This is useful as it lets us deal simply with uneven steps in the scale.
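As a sketch of these representations, here is how each might look in code. The helper functions are our own illustration, not objects from Max or PureData:

```python
# Illustrative helpers for the pitch representations above
# (the function names are our own, not Max/PureData objects).

def midi_to_freq(note):
    """MIDI note number -> frequency in Hz (A4 = MIDI 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def midi_to_midicents(note):
    """OpenMusic-style midicents: MIDI note * 100, so middle C (60) -> 6000."""
    return note * 100

def pitch_class(note):
    """Pitch class: octaves are equivalent, so every C returns 0."""
    return note % 12

def scale_degree_to_midi(degree, scale=(0, 2, 4, 5, 7, 9, 11), base=60):
    """Scale degree -> MIDI note; the table lookup copes with uneven scale steps."""
    octave, step = divmod(degree, len(scale))
    return base + 12 * octave + scale[step]
```

For example, scale_degree_to_midi(7) returns 72: one octave above middle C in the major scale, even though the individual scale steps are unequal.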

Step 3 – normalise data to fit musical parameters
In Max we can use the scale object. Scale maps an input range of float or integer values to an output range. The number is converted according to the following expression
y = b · e^(−ac) · e^(xc)

where x is the input, y is the output, a, b, and c are the three typed-in arguments, and e is the base of the natural logarithm (approximately 2.718282).


In PureData we can use an expr object to perform the same linear mapping:

expr $f4 + ($f1 - $f2) * ($f5 - $f4) / ($f3 - $f2)

$f1 = input value
$f2 = input minimum
$f3 = input maximum
$f4 = output minimum
$f5 = output maximum
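The same linear remapping can be sketched in Python (a hypothetical helper, just to make the arithmetic explicit):

```python
def scale_value(x, in_min, in_max, out_min, out_max):
    """Linear remap, mirroring the expression above:
    out_min + (x - in_min) * (out_max - out_min) / (in_max - in_min)."""
    return out_min + (x - in_min) * (out_max - out_min) / (in_max - in_min)
```

For example, scale_value(13.6, 6.6, 20.6, 60, 72) maps a mid-range value onto the middle of one MIDI octave (approximately 66), and swapping the output bounds inverts the mapping.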

Step 4 – Output
Adding in some MIDI objects allows us to hear our data sonified in a simple way:
[screenshot: Max patch with MIDI output]
And in PureData, adding makenote and noteout objects and normalising our output data to one octave of a chromatic scale looks like this:
[screenshot: PureData patch with MIDI output]
You should now have some basic musical output to your MIDI synth. Now the patch is set up, it's easy to experiment with mapping the data to different pitch ranges. For example:
  • Try adjusting the normalisation (the scale object in Max or expr object in PureData) to map the data across two octaves instead of one by changing the output range from 12 to 24 – or any other pitch range.
  • The + 60 object sets our lowest pitch as MIDI note 60, this can be modified easily to set a different pitch range.
  • Invert the data range by having a high output minimum and a lower maximum, so ascending data creates descending melodic lines.
Mapping to Scales
As an alternative to mapping our data to chromatic pitches, we can use a different pitch representation and map our data to the notes of a diatonic scale.
First we need to define a few scales in a table in Max:
[screenshot: scale tables in Max]
As the contents of tables are defined slightly differently in PureData this looks like this:
[screenshot: scale tables in PureData]
The above screenshots show the scale intervals for a Major, Harmonic Minor, Melodic Minor and Natural Minor scale, each stored in a separate table. These are stored as pitch classes; if we need to transpose or modulate our composition to another key we can add or subtract from these scale notes before the makenote object, e.g. to transpose up a tone to D, add a + 2 object. Now, rather than mapping to chromatic pitches, we'll map to scale pitches, so we'll need to modify our normalisation to reflect this.
[screenshot: PureData patch mapping data to scale degrees]
Here we’ve changed the output range to be 0 to 6 to reflect the seven different scale degrees of the major scale. Similarly in Max:
[screenshot: Max patch mapping data to scale degrees]
We are now mapping our data to a major scale. As we have already stored a number of scales, you could try changing the name of the table that is being looked up to map to an alternative scale, e.g. table harmonic-minor-scale.
We can also map the data to a scale over more than one octave.
[screenshot: PureData multi-octave scale mapping]
Here we’ve changed the output range in this example to be 0 to 20. Using % (modulo) to give us the individual scale degrees and / (divide) to give us the octave. The process is the same in Max, although in the previous example we mapped across 3 octaves of the scale there’s no requirement to map to full octaves, you could map your data to 2 1/2 octaves or any other pitch range by changing the output values of the scale object (expr in PureData):
[screenshot: Max multi-octave scale mapping]
So far we have mapped our data to MIDI pitches. As an alternative to MIDI output, the next example maps our wind data to frequencies and uses them to control the cutoff of a band-pass filter fed with noise, giving a sweeping wind-sound effect. As we're now working with frequencies rather than MIDI notes, we've changed the output of the scale object to remap any incoming data to between 200Hz and 1200Hz:
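The frequency mapping is the same linear normalisation with a different output range. As a sketch (the 200Hz to 1200Hz bounds come from the patch; the function name is our own):

```python
def to_cutoff_hz(x, in_min, in_max, lo=200.0, hi=1200.0):
    """Remap an incoming data value to a band-pass cutoff between 200Hz and 1200Hz."""
    return lo + (x - in_min) * (hi - lo) / (in_max - in_min)
```

A data value halfway through its input range lands at 700 Hz, halfway through the cutoff range.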
[screenshot: Max band-pass filter patch]
The patch is set up in a very similar way in PureData, with a bp~ object as our band-pass filter rather than the reson~ object found in MaxMSP.
[screenshot: PureData band-pass filter patch]
Although all of the notes are created by mapping the data directly to pitches, we have made a number of creative decisions along the way, so there are many ways of realising different sonifications of the same source data. So far we have only mapped to pitch, but we still have a number of variables we can alter:
  • scale – chromatic, major, melodic minor, natural minor, harmonic minor (any other scales can be defined easily)
  • base pitch (the lowest pitch)
  • pitch range (the range above our lowest pitch, e.g. 2 octaves)
  • the data set used to map to pitches
As with any composition we also have to make musical decisions concerning which instrument plays when, timbres and instrumentation, dynamics, tempo etc. In another post we’ll look at sonifying these elements from data but for now we’ll make these choices based on aesthetic decisions.
Summary
In the YouTube example we have added several copies of the sonification patch, one for each of our data sets (temperature, sunshine, rainfall and windspeed). The interesting thing about using weather data is that we should hear some relationship between the four sets of data.
We’ve also added a score subpatch with sends and receives to turn on and off each section and control the variables mentioned above (min pitch, pitch range etc).
After getting the patch to work, play around with your own settings and modifications, and check out part two of this sonification algorithmic composition tutorial. Future posts will continue the idea of mapping and explore sonifying rhythm, timbre and other musical parameters. Have fun and feel free to post links to your sonification compositions below.
Post a comment if you’ve any questions on this patch.

Automatic Breakbeat Generator – PureData

In this post we’ll create an automatic Breakbeat cutter that will play randomised selections from a sampled drum loop. We’ll also use this together with a Markov melody generator. You can hear some sample algorithmic composition output in this example and download the patch at the end of the post:

A Quick Recap
We have used objects, messages, numbers, buttons and toggles. All of these (and more) can be inserted by double clicking in a blank space or using the keyboard shortcut.
[screenshot: PureData objects recap]
Breakbeat Generator
We’ll now start building the breakbeat generator.
We’re going to create an automatic breakbeat generator patch. This will patch will chop up drum loops. This patch will create a breakbeat effect by chopping a sample into 8 and rather than playing straight through in order, playing:
  • 3 consecutive parts chosen from a random start point.
  • 3 consecutive parts chosen from a random start point.
  • 2 consecutive parts chosen from a random start point.
e.g. from the 8 sections of our loop, some 3+3+2 examples could include:
  • 2 3 4 / 6 7 8 / 3 4
  • 5 6 7 / 1 2 3 / 5 6
  • 4 5 6 / 2 3 4 / 6 7
  • 6 7 8 / 5 6 7 / 2 3 etc
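The 3+3+2 selection logic can be sketched in Python (a minimal illustration of the idea, not the patch itself):

```python
import random

def breakbeat_pattern(slices=8, groups=(3, 3, 2)):
    """For each group, pick a random start point and take consecutive slices."""
    pattern = []
    for length in groups:
        start = random.randint(1, slices - length + 1)  # keep the run inside 1..slices
        pattern.extend(range(start, start + length))
    return pattern
```

Each call returns eight slice numbers, and might produce a pattern like 2 3 4 / 6 7 8 / 3 4.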
Create this patch. It will load, play and cut up our drum loop sample.
[screenshot: breakbeat generator patch]
Note: a ‘pd’ object is a subpatch in PureData. Double-click on the object to edit what’s inside it. Add this to the inside of the subpatch:
[screenshot: inside the breakbeat subpatch]
[screenshot: breakbeat and Markov generator patch]
The breakbeat generator and markov chain generator are both connected to a toggle so they can be started at the same time.
[screenshot: combined breakbeat and Markov patch]
You can download the patch here. Patches for older posts will also be uploaded soon.

From Data to Music – Pd as a Sonification Tool for Algorithmic Composition

Music is a physical phenomenon: we can hear and sometimes feel sound waves, and we can look at printed scores and chord charts and hold CDs, but these contain only a representation of musical information. How we represent music and each of its many musical characteristics is an important decision for the algorithmic composer. When creating algorithmic music we have to make choices about how we will represent musical information. This in turn impacts how we think about that musical information and affects what we can and cannot do with it.

Today’s algorithmic composition tutorial explores some of these issues, and our algorithmic composition looks again at using sonification – a mapping of non-musical data to musical parameters to create an algorithmic piece of music. The key to sonification is how the data is mapped to musical parameters, so in this post we’re using the same data with a more flexible interface that allows you to experiment with how the data is mapped.

Here’s a quick video demo of the Algorithmic Composition Sonification tool in action:
Here’s a breakdown of each of the sections; you can also download the patch at the end of the post.

If you haven’t already, it’s worth reading through the previous sonification post, but as a quick recap here are the four basic steps of the sonification process:

1. Find some interesting data
2. Decide which musical parameters you want to map the data to
3. Fit the input data to the correct range for your chosen musical parameters (normalise)
4. Output the remapped data to MIDI synths, audio devices etc.

Step 1 involves sourcing some interesting data.
Ideally the data you use should include some patterns as this tends to result in more satisfying compositions. In this patch we’re using the same weather data as in the previous algorithmic composition post, but in a future post we’ll include the facility to load up data from any .csv file.

Step 2 involves deciding how you will map your data to musical parameters, e.g. pitches, frequencies, rhythms, dynamics, timbre etc. The example patch today allows you to experiment with different mappings ‘on the fly’ and instantly hear the result. You can then save the mappings you like as presets.

Step 3 involves scaling the input data to match the output range you want. For example, in our data temperature ranges from 6.6°C to 20.6°C. If we wanted to map this to a range of MIDI notes we would need to rescale the data so that the output fitted into the number range we wanted, e.g. mapping 6.6 to 20.6 onto one octave of MIDI notes from middle C: MIDI notes 60 to 72.
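Using the temperature figures above, the rescaling step might look like this in Python (a sketch: the data range comes from the post, the function name is our own):

```python
def temp_to_midi(t, t_min=6.6, t_max=20.6, note_min=60, note_max=72):
    """Rescale a temperature reading onto one octave of MIDI notes from middle C."""
    return round(note_min + (t - t_min) * (note_max - note_min) / (t_max - t_min))
```

The coldest reading (6.6) maps to MIDI note 60 and the warmest (20.6) to MIDI note 72.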

Step 4 involves connecting the rescaled numbers from our source data to an output of our choice, typically a synth, MIDI device or audio processor.

Choosing Musical Parameters: Pitch
The pitch section of this sonification patch allows you to choose a scale that the data will be mapped to and a pitch range. In this screenshot the data has been mapped to 8 notes (one octave) of a major scale. Here four sliders allow you to set each part to a different pitch range.

[screenshot: pitch range sliders]

Changing the pitch range will keep the same contour shape as the original data but will map it across a wider or narrower range. In this chart, for example, the same set of data has been remapped to different ranges; although each contour follows the same shape as the original data, if mapped to pitch the melodies would span different pitch ranges.

[chart: the same data remapped to different pitch ranges]
Increasing the pitch range that the data is mapped to exaggerates the contours of the melody, creating higher peaks and lower troughs; decreasing the pitch range will result in a melody with smaller intervallic steps.
This allows us to create many different musical examples from the same set of source data. The incoming data is normalised to the selected pitch range using an expression.
[screenshot: normalisation expression in PureData]
It’s worth noting that unless you’re mapping to a chromatic or whole-tone scale, the scale intervals are not equal (e.g. a major scale is constructed of semitone intervals 2, 2, 1, 2, 2, 2, 1), so mapping to a scale slightly distorts the interval steps present in the original data.
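A quick way to see this distortion: the major scale’s pitch classes have unequal gaps, so equal steps in the data become unequal steps in pitch.

```python
MAJOR = [0, 2, 4, 5, 7, 9, 11]  # major scale stored as pitch classes

# Adjacent scale degrees are 1 or 2 semitones apart:
steps = [MAJOR[i + 1] - MAJOR[i] for i in range(len(MAJOR) - 1)]
# steps is [2, 2, 1, 2, 2, 2]: tone, tone, semitone, tone, tone, tone
```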
Scales are selected using the radio buttons.


The scales are stored in tables that are accessed by a tabread object.
[screenshot: scale tables read with tabread]
As well as mapping the data over a pitch range, we also need to decide the base pitch for each of our four musical parts. Using radio buttons we can choose the octave for each part individually and an overall transposition factor.

The selected base pitch is added to the scale note and added to a transposition number.
[screenshot: base pitch and transposition in PureData]
Rhythm: Tempo Factor
The tempo factor controls the speed of each part. With a tempo factor of 1 the part will run at normal speed. At .5 it will run at double speed, at 2 it will be at half speed etc. The link_tempo/octave toggle allows you to link the tempo and octave so that faster parts will be played at higher octaves and slower parts at lower octaves. The tempo factor can be randomised.
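As a sketch of the arithmetic (our own illustration, not the patch), the tempo factor simply scales the interval between notes, so larger factors mean slower parts:

```python
def part_interval_ms(base_ms, tempo_factor):
    """1 = normal speed, 0.5 = double speed, 2 = half speed."""
    return base_ms * tempo_factor
```

For example, with a 250 ms base interval, a tempo factor of 2 gives 500 ms between notes: half speed.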

The staccato/legato factor controls the note length in relation to the tempo. Lower values will give short staccato notes, higher values will give longer legato notes. The tempo is set here.

Dynamics: Random Velocity Range and Channel Mixer
In this example the MIDI velocities are randomised rather than mapped to the sonification data. The range of possible velocities is set using these sliders and can also be randomised.

The mixer offers a simple way of adjusting the relative level of each part; these levels can be randomised using the bang button.

MIDI program numbers for each part can be changed by scrolling or typing in the number boxes; the GM instrument name will then be shown in the corresponding symbol. The MIDI program number can also be randomised.

The MIDI program names are stored in a coll object; entering a number looks up the appropriate index of the coll, and the name is sent to the symbol.
[screenshot: coll object storing GM program names]
Save and Recall Presets
Values for the whole patch can be stored and recalled as presets. Choose a preset number to save or recall and press the appropriate bang button.
[screenshot: preset save and recall]
You can download the PureData sonification patch here.
There will be a Max version posted shortly and a more extended version that allows you to easily load up your own data and remap it to different parameters. Check out the algorithmic composition forum to introduce yourself and ask any questions, check back soon and keep composing!