Birdsong is a website that plays back a subset of real-time tweets in a given area, performs basic sentiment analysis on them via an API, and plots them on a map while playing a tune that depends on the mood of each tweet. It was made by Freddy (freddinator on GitHub) and myself for the YRS 2014 event.
How does it work?
My job was the frontend side of things, so I turned our PHP output into pins on a map and into music. The map pins were mostly standard Mapbox functionality, but the music generation was more interesting, so I'll touch on that.
To generate the music, we start in a key and move around the Circle of Fifths randomly. This lets us shift key easily without causing too much of a jump in the music. Every time we encounter a space character, we change key, moving in a direction determined by the previous character.
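The key-change step might look something like this. This is a sketch rather than the actual song.js code: the key names, the function name, and the even/odd direction rule are all assumptions for illustration.

```javascript
// The twelve keys arranged in Circle of Fifths order (an assumed layout).
const circleOfFifths = ["C", "G", "D", "A", "E", "B", "F#", "C#", "G#", "D#", "A#", "F"];

// Pick the next key index from the character preceding the space.
// The even/odd rule here is a hypothetical stand-in for the real one.
function nextKey(currentIndex, prevChar) {
  const direction = prevChar.charCodeAt(0) % 2 === 0 ? 1 : -1;
  // Wrap around so the index stays within the circle.
  return (currentIndex + direction + circleOfFifths.length) % circleOfFifths.length;
}
```

Because adjacent keys on the circle share all but one note, a one-step move keeps the transition smooth.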
Before playing notes, we split the tweet into words. We divide the allocated time (each tweet takes a second to play) equally between the words, and each word's share equally between its letters, so the letters of shorter words get longer tones.
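A minimal sketch of that timing split, assuming a one-second budget per tweet; the function name and returned shape are my own, not the repo's:

```javascript
// Split one second of playback evenly across words, then evenly
// across each word's letters.
function noteDurations(tweet) {
  const words = tweet.split(/\s+/).filter(w => w.length > 0);
  const perWord = 1.0 / words.length; // seconds allotted to each word
  return words.map(word => ({
    word: word,
    noteLength: perWord / word.length // seconds per letter of this word
  }));
}
```

So in a two-word tweet like "hi there", each word gets half a second, and the two letters of "hi" ring out longer than the five letters of "there".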
Once we have a word, we parse it. Rather than choosing randomly, as some procedural generation does, everything here is driven by the character codes of the tweet. To decide where in the scale of our key we jump when playing a note, we take the character code and change position based on it (mod 18) - e.g. an "a" might mean going one step up the scale. We do this for each letter, keeping track of where in the scale we are so that the melody doesn't jump too far around.
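That walk over the scale could be sketched like this. The mod-18 step is from the description above, but centring it so steps can go both up and down is my assumption, as is the function name:

```javascript
// Walk a scale position through a word, one step per letter.
// Returns the sequence of positions visited.
function scaleWalk(word, startPosition) {
  let position = startPosition;
  const positions = [];
  for (const ch of word) {
    // Map the character code into [-9, 8] so letters can move the
    // melody either up or down the scale (centring is an assumption).
    const step = (ch.charCodeAt(0) % 18) - 9;
    position += step;
    positions.push(position);
  }
  return positions;
}
```

Carrying `position` from letter to letter (and from word to word) is what keeps consecutive notes near each other instead of leaping across the scale.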
All of this can be seen in the song.js file in the repo linked below.