Music Posts

Colour Flip

posted in Music, Love2d, Game on 12 September 2017

For a game jam hosted by the Guildford Game Jammers at Media Molecule, we made a game called Colour Flip!

Look, it's Colour Flip!

Made by Kirsty and me, this is a local multiplayer game about capturing a flag without getting flipped onto the wrong colour. Each player can only travel on their own colour, but can flip the colours in their row, their column, or the tiles nearby to make a path they can travel on.

It features team selection, joystick support (it definitely works with PS3 controllers, at least), music by me, art by me (as you can tell, I'm not that great at art), level selection (made after the jam by Kirsty) and a bunch of other features you can discover by playing the game!

Markov Chains Music

posted in Python, Music, Generative on 03 September 2017

For my Extended Project Qualification, I decided to look into using Markov Chains to generate music, with surprisingly okay results. For this I used the music21 Python library to import, analyse and export MusicXML. This was finished in November 2015.

Example Output

Markov Chains

A Markov Chain is essentially a probability distribution that introduces state. To see why state matters, consider a naive music generator that picks each note at random from a fixed distribution. It'll play what is essentially a random string of notes, like hitting a keyboard with your fist, landing on some good notes and some bad ones. The important thing to take away from that thought experiment is that there won't be a flowing tune at all: each note is independent of the last. If you try to alleviate this by massively weighting the selection towards the best notes, you'll end up with something like the head of C Jam Blues, which is a perfectly fine tune, but if I wanted a computer to generate music, I'd quite like it to generate something more complex than a rehash of C Jam Blues every time.

A good way to make music flow is by considering the previous notes in a phrase, as notes are evidently not composed independently of each other. Otherwise composition would be as easy as rolling some dice and putting dots on a page! To introduce this phrasing, we consider what note we are currently playing when writing the next. Rather than having an absolute list of probabilities (e.g. there's a 0.5 chance of playing a C, 0.1 of a D...) we now have a transition matrix, where each row adds up to 1. When picking our next note, we look at the row of our current note for the probabilities we should use.

x   C   E   G
C | 0.1 0.5 0.4
E | 0.2 0.3 0.5
G | 0.3 0.5 0.2

So if our first note was C, there is a 0.1 probability that we choose a C to be our next note.
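As a minimal sketch, picking a next note from this transition matrix looks something like the following (the note names and probabilities are the illustrative ones from the table above):

```python
import random

# Transition matrix from above: each row holds the probabilities
# of moving to C, E or G from the row's note.
NOTES = ["C", "E", "G"]
TRANSITIONS = {
    "C": [0.1, 0.5, 0.4],
    "E": [0.2, 0.3, 0.5],
    "G": [0.3, 0.5, 0.2],
}

def next_note(current):
    """Pick the next note using the row for the current note."""
    return random.choices(NOTES, weights=TRANSITIONS[current])[0]

print(next_note("C"))  # "C" with probability 0.1, "E" with 0.5, "G" with 0.4
```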

We can expand this idea with a higher-order Markov Chain, which considers the previous few notes as well as the current one. An order-N Markov Chain is one whose state consists of N notes in total. This gives us rows keyed by multiple notes, e.g. in this order-2 transition matrix:

x    C   E   G
CC | 0.1 0.5 0.4
CE | 0.2 0.3 0.5
CG | 0.3 0.5 0.2
...| ... ... ...

This allows us to improve the phrasing of our music even more. However, this comes with a caveat which will be explained as we begin to look into how we determine these probabilities.
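In code, the only change from the first-order sketch is that each row is keyed by a tuple of the last two notes (again with the illustrative probabilities from the table; only the rows starting with C are shown):

```python
import random

NOTES = ["C", "E", "G"]
# Order-2 transition matrix: rows are keyed by the previous two
# notes rather than just the current one.
TRANSITIONS = {
    ("C", "C"): [0.1, 0.5, 0.4],
    ("C", "E"): [0.2, 0.3, 0.5],
    ("C", "G"): [0.3, 0.5, 0.2],
}

def next_note(prev, current):
    """Pick the next note using the row for the last two notes."""
    return random.choices(NOTES, weights=TRANSITIONS[(prev, current)])[0]

print(next_note("C", "E"))
```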

Training our Markov Chains

To train our Markov Chains, we must observe what real music does and derive probabilities like those above. We do this by taking training melodies, transposing them all into the same key, then going through each note and recording the frequency of the notes directly after it. First, we convert our notes to numbers, where 1 is the note of the key, 2 is the note above that, and so on. Then we construct a 12x12 matrix to fill with frequencies, initialising every value to zero.

Now we go through each training melody. For each note, we look at that note's row and increase the entry for the note that follows it by 1. We repeat this for every note in every melody, then divide each row by its total to give probabilities that sum to 1.
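The counting-and-normalising step can be sketched like this (using dictionaries rather than a full 12x12 matrix, so only observed transitions take up space; the two training melodies are made-up scale degrees, not from my actual training set):

```python
from collections import defaultdict

def train(melodies, order=1):
    """Count which note follows each state across all melodies,
    then divide each row by its total so it sums to 1."""
    counts = defaultdict(lambda: defaultdict(int))
    for melody in melodies:
        for i in range(len(melody) - order):
            state = tuple(melody[i:i + order])     # the current note(s)
            counts[state][melody[i + order]] += 1  # the note that follows
    return {
        state: {note: n / sum(row.values()) for note, n in row.items()}
        for state, row in counts.items()
    }

# Two tiny melodies as scale degrees (1 = the note of the key):
chain = train([[1, 3, 5, 3, 1], [1, 5, 3, 1]])
# After a 3 we saw 1 twice and 5 once, so P(1 | 3) = 2/3.
```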

We do the same with rhythm, except we look at note lengths rather than pitches. In my program, rather than working note by note, we record full beats of rhythm and transition between those. This unfortunately detaches the rhythm from the melody, making the music less pleasing.

Actually Generating Music

This is the easiest bit. Start at a note, then use our Markov Chain to generate each note after it. Do the same for rhythm and map the pitches onto notes. Then you're done: just choose a key, turn your numbers into notes from that key, generate a title (my program does this by applying an order-4 Markov Chain to song titles by The Beatles) and output it as MusicXML.
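A minimal sketch of that walk, using a toy hand-written chain in the same row format as the training step (the numbers are illustrative scale degrees, not my program's actual output):

```python
import random

# Toy trained chain: scale-degree state -> {next degree: probability}.
CHAIN = {
    (1,): {3: 0.5, 5: 0.5},
    (3,): {1: 0.7, 5: 0.3},
    (5,): {3: 1.0},
}

def generate(chain, start, length, order=1):
    """Walk the chain, appending one scale degree at a time."""
    notes = list(start)
    while len(notes) < length:
        state = tuple(notes[-order:])
        row = chain.get(state)
        if row is None:  # dead end: a state never seen in training
            break
        degrees = list(row)
        weights = [row[d] for d in degrees]
        notes.append(random.choices(degrees, weights=weights)[0])
    return notes

print(generate(CHAIN, [1], 8))
```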

After experimentation, I settled on an order-2 Markov Chain for the melody generation. The problem with higher-order Markov Chains is that they need a lot of input data: long note sequences are less likely to appear in multiple tunes than short ones, so the chain often has only one or two continuations for a given state and ends up reproducing other tunes verbatim.

Extras in the program

The program implements "Tintinnabuli", a method which makes a bassline from the melody. More can be read about it here.
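The post doesn't detail the implementation, but one common formulation of the tintinnabuli "T-voice" puts the bassline on the nearest tonic-triad pitch below each melody note. A sketch under that assumption (not necessarily what my program does exactly):

```python
# C major triad as pitch classes: C, E, G.
TRIAD = {0, 4, 7}

def t_voice(melody_pitch):
    """Return the first triad pitch strictly below the melody pitch
    (pitches are MIDI note numbers)."""
    p = melody_pitch - 1
    while p % 12 not in TRIAD:
        p -= 1
    return p

print(t_voice(62))  # D4 (MIDI 62) -> C4 (MIDI 60)
```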


Birdsong

posted in Web, Youngrewired, Music, Generative on 01 September 2017

Birdsong is a website that plays back a subset of real-time tweets in a given area, performs basic sentiment analysis on them via an API, and plots them on a map while playing a tune depending on the mood of each tweet. It was made by Freddy (freddinator on GitHub) and me for the YRS 2014 event.


How does it work?

Structure Diagram

The diagram is pretty self-explanatory. We have a Python harvester collect and write a subset of all tweets in a certain area to a .json file, which is read by PHP. The PHP script also queries the language analysis API (we used this one, though in hindsight we could have used Python's Natural Language Toolkit on our Python server) and returns these tweets to our client-side JavaScript. This JavaScript uses Mapbox and the Web Audio API to plot our tweets and make pretty tunes for them.

My job was the frontend side of things, so I turned our PHP output into pins on a map and into music. The pins were mostly basic Mapbox functionality, but the music generation was more interesting, so I'll touch on that.

Music Generation

To generate the music, we start in a key and walk randomly around the Circle of Fifths, which lets us shift key without causing too much of a jump in the music. Every time we encounter a space character, we change key, in a direction dependent on the previous character.
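A sketch of that key walk in Python (the actual site is JavaScript; the even/odd rule for turning the previous character into a direction is my illustrative assumption, since the post doesn't specify it):

```python
# The twelve major keys arranged around the Circle of Fifths.
CIRCLE = ["C", "G", "D", "A", "E", "B", "F#", "C#", "G#", "D#", "A#", "F"]

def walk_keys(tweet, start="C"):
    """Step around the circle at each space, one key per step."""
    pos = CIRCLE.index(start)
    keys = [CIRCLE[pos]]
    for prev, ch in zip(tweet, tweet[1:]):
        if ch == " ":
            # Direction taken from the previous character's code.
            step = 1 if ord(prev) % 2 else -1
            pos = (pos + step) % len(CIRCLE)
            keys.append(CIRCLE[pos])
    return keys

print(walk_keys("hello world again"))
```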

Before playing notes, we split the tweet into words. We divide the allocated time (each tweet takes a second to play) equally between the words, and each word's share equally between its letters, so letters in shorter words get longer tones.

Once we have a word, we parse it. Rather than choosing randomly, as some procedural generation does, everything here is driven by the character codes of the tweet. To decide where in the scale of our key we jump when playing a note, we take the character code and change position based on it (mod 18): an "a" might mean going one step up the scale, for example. We do this for each letter, keeping track of where in the scale we are so that the melody doesn't jump too far around the scale.
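The word timing and the per-letter scale walk can be sketched together like this (a Python sketch of the JavaScript; adding the character code mod 18 to the current position is my assumption about how the code is "passed in"):

```python
SCALE_SIZE = 18  # scale positions spanning a couple of octaves

def letter_steps(tweet):
    """Turn a tweet into (scale position, duration) events filling
    one second, split equally per word and then per letter."""
    pos = 0
    events = []
    words = tweet.split()
    for word in words:
        letter_len = 1.0 / len(words) / len(word)
        for ch in word:
            pos = (pos + ord(ch)) % SCALE_SIZE
            events.append((pos, letter_len))
    return events

events = letter_steps("hi there")
```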

All of this can be seen in the song.js file in the repo linked below.

You Must Get The Atoms

posted in Game, Love2d, Music on 01 September 2017

You Must Get The Atoms is a puzzle game created in August 2013 during Ludum Dare 48. It's about collecting atoms, combining them to make compounds and using those to get through the levels (although if you play it you'll be able to tell that I don't have a chemistry background, so some of the compounds do weird things: CO conducts electricity? I think I confused C and Cu). The main mechanic involves swapping the atoms you've collected around so they react with each other.

Playing it now, I can see that the timer (ten seconds per level, as the theme of the game jam was "10 seconds") was very much bolted on and makes the game stupidly difficult, but it still has some nice touches, including a level editor, a tutorial throughout the game and actually reasonable collision detection. I also did the music for it, which seems to get annoying after a while, but I'm still proud of it.

You Must Get The Atoms screenshot

Song Of Sparks

posted in Love2d, Music, Game on 01 September 2017

Song Of Sparks is a game made for Mini Ludum Dare #43 in June 2013. In it, you create and move little sparks which float around the screen away from your cursor to create music. Each spark is an oscillator which plays a higher note the higher up it is on the screen. The sparks play in order from left to right, thus allowing you to create a sort-of melody by moving the sparks around. There's a story too. For this game I created the background music and did the audio programming that allowed the sparks to make the right pitches at the right time.

Song of Sparks