Posts

Colour Flip

posted in Music, Love2d, Game at 12 September 2017

For a game jam hosted by the Guildford Game Jammers at Media Molecule, we made a game called Colour Flip!

look it's colour flip

Made by Kirsty and me, this is a local multiplayer game about capturing a flag without getting flipped onto the wrong colour. Each player can only travel on their colour. Every player has the ability to flip the colours in their row, column, or nearby to make a path they can travel on.

It features team selection, joystick integration (well, works with PS3 controllers for certain), music by me, art by me (as you can tell, I'm not that great at art), level selection (made after the jam by Kirsty) and a bunch of other features which you can find out by playing the game!

Mindstorms ev3 Football

posted in Python, Hardware at 08 September 2017

In my sixth form, I worked as part of a team to make a Mindstorms ev3 robot play football using ev3dev (a loadable Linux OS for the ev3) and Python.

Our Robot (sorry about the image quality, for some reason all of my images of it were missing so all I had was a video of it)

The Robot

DARWIN, our robot, consists of the following components. It has two independent motors and a caster for flexible movement, and a third motor to apply backspin to the ball so that it stays in the robot's grasp. It has a rotation sensor to detect whether it has been pushed off course and a colour sensor to look at the pitch and its lines to make sure it isn't going off the pitch. It also has an ultrasonic sensor to look for a goal and other robots, and an infrared (IR) sensor to look for the ball.

Our Strategy

This robot was designed to score goals rather than just defend, so with that in mind it has a simple plan: find the ball, grab the ball and then score a goal with it. This essentially means it has two states, which we'll get to later. While it doesn't have the ball, it uses the IR sensor to find out what direction the ball is in and moves towards it. If it can't find it at all, it just spins on the spot until it sees it.

The biggest problem we faced was that the IR sensor was unreliable at very short distances, producing very weird values suggesting that the robot should take a sharp right when the ball is right in front of it. To deal with this, the robot ignores values corresponding to very sharp turns and, rather than acting directly on each value it receives, keeps a running average of the last few readings. We also ignore the IR sensor completely once the robot has the ball, which we detect by monitoring the speed of the backspin motor - if it has slowed slightly, we know the ball is likely underneath the robot, so we have it.
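That smoothing can be sketched like this - a minimal illustration, not the actual robot code; the turn threshold and window size here are made up:

```python
from collections import deque

SHARP_TURN = 80   # assumed threshold for "implausibly sharp" readings
WINDOW = 5        # how many recent readings to average over

recent = deque(maxlen=WINDOW)

def steer_value(raw_reading):
    """Return the smoothed heading value, or None if nothing usable yet."""
    if abs(raw_reading) < SHARP_TURN:   # discard implausible sharp turns
        recent.append(raw_reading)
    if not recent:
        return None
    return sum(recent) / len(recent)
```

The deque drops old readings automatically, so a single odd value only nudges the average rather than sending the robot spinning.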

To detect the goal, we use both our rotation and ultrasonic sensor. If we have the ball, are facing the right way and detect an object, we assume it is the goal and attempt to score. For collision detection, we turn left or right randomly to avoid the object, and then get back to our original path.

The Code

Because our robot essentially has a few states it can be in, Kirsty cleverly realised that our code could be structured in that way also. So, our main loop in the code is:

functions = {"moveToGoal": moveToGoal, "retreat": retreat, "shoot": shoot,
             "lookForBall": lookForBall, "realign": realign}
while True:
    state, ro = functions[state](ro)

Where ro is our Robot Object, which contains some key-value pairs of information. Every function returns a state and newly-modified object once done, so we have effortless switching between states.
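A toy version of this dispatch pattern, with stand-in states replacing the real robot behaviours, looks like this:

```python
# Each state function takes the robot object and returns
# (next_state, robot_object), exactly as in the main loop above.
def lookForBall(ro):
    ro["seen"] = True            # pretend the IR sensor found the ball
    return ("realign", ro)

def realign(ro):
    ro["aligned"] = True
    return ("done", ro)

functions = {"lookForBall": lookForBall, "realign": realign}

state, ro = "lookForBall", {}
while state != "done":           # the real loop is `while True`
    state, ro = functions[state](ro)
```

Because every function has the same signature, adding a new behaviour is just writing one function and adding one dictionary entry.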

These states are organised as follows. Our main state is lookForBall, which in a surprising twist looks for the ball. It reads the IR sensor, keeps track of the average and turns in the relevant direction. This returns its current state unless it turns out the robot now has the ball, in which case it moves to realign. Realign rotates the robot until it is facing the right way, then moves on to moveToGoal (or lookForBall if it loses the ball while rotating). In moveToGoal we move in a straight line and try to score (changing to the state shoot), changing to realign if we get knocked in the wrong direction. The state shoot is self-explanatory.

The final state that we've not covered is retreat, which is when it detects the white line surrounding the outside of the pitch. When it does this, it reverses, makes a quarter turn and then goes back to realign. This can be called while the robot is in moveToGoal. Illustrated, this state system looks like this:

State Diagram

Turns out if you can split a program up like this, it looks really nice.

Words With Friends Solver

posted in Python at 04 September 2017

For my A-level Computing project, I worked on a rudimentary Words With Friends solver. When emailed a screenshot from a particular size of phone (and given the letters to make a word from), it not only gives the best move (and may do so from a dictionary of commonly used words or a full dictionary), but also gives hints. Here's an example of the output it gives:

Hint #1: The best move is located in the top-left quadrant, attached to (or creating) the word GI. If you're only looking at commonly used words, the best move is in the bottom-left quadrant, attached to (or creating) the word DO...

How does this work? It is split into three parts: image processing, email processing and the actual solver.

Processing an Image

The first thing the program does when receiving an email is take the screenshot and convert the grid shown in the screenshot to a Python array. This is the most rudimentary bit - in the program we have black and white images for every letter, so we simply compare predefined sections of the image to those and look for the closest fit. These predefined sections have been found by hand - as the aim was for this to only work with one screen size, everything will always be in the same place and the program can take advantage of that. Each tile of a certain letter is exactly the same, so by knowing what one of them for a certain screen size looks like, we know what all of them look like, which allows us to check pixel-by-pixel rather than using more advanced image processing techniques.
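That closest-fit comparison can be sketched with tiny made-up 0/1 templates standing in for the real letter images:

```python
# Each "image" is a list of rows of 0/1 pixels. We pick the letter
# whose template has the fewest mismatching pixels. These 3x3
# templates are stand-ins, not the real letter images.
TEMPLATES = {
    "A": [[0, 1, 0],
          [1, 0, 1],
          [1, 1, 1]],
    "I": [[1, 1, 1],
          [0, 1, 0],
          [1, 1, 1]],
}

def mismatches(section, template):
    return sum(p != q
               for row_s, row_t in zip(section, template)
               for p, q in zip(row_s, row_t))

def closest_letter(section):
    return min(TEMPLATES, key=lambda letter: mismatches(section, TEMPLATES[letter]))
```

Because the tiles are pixel-identical for a given screen size, this exhaustive comparison is all the "image processing" the program needs.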

Solving a Grid

To do this, we will first consider the across words in our grid. This is done by taking each row in turn and determining what tiles can be placed there, if any. If a tile is already present, then it is certain we should leave that there, otherwise we work out what tiles will continue to make a valid word if placed on the grid in that square. This is possible as our dictionary is stored in a trie.

A trie is a tree with letters as edges. Our root is the empty word, and words are constructed by branching out from the root. For example, in our image below, inn can be found by going to i, then in, then inn.

Trie Example (source: Public Domain from Wikipedia)

In my solver, each node also has a flag stating whether it forms a complete word or not. Leaves are automatically complete words; however, we also have words such as "in" (which has "inn" as a child but is still a complete word). This trie structure allows us to work out whether we have a valid part of a word - if a node exists in the trie, complete word or otherwise, it can be completed into one.
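A minimal trie with a complete-word flag on each node might look like this (a sketch, not the solver's actual structure):

```python
# Nodes are dicts: letter keys point to child nodes, and the "#" key
# holds the complete-word flag.
def make_trie(words):
    root = {}
    for word in words:
        node = root
        for letter in word:
            node = node.setdefault(letter, {"#": False})
        node["#"] = True
    return root

def lookup(trie, prefix):
    """Return the node for prefix, or None if no word starts this way."""
    node = trie
    for letter in prefix:
        if letter not in node:
            return None
        node = node[letter]
    return node
```

A non-None result from lookup means the prefix can still be completed; checking its "#" flag tells you whether it is already a word.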

The actual algorithm is broken up into two steps. First we look through every square of the grid and mark it with tiles that could be there based on the tiles currently placed on the grid above and below that square. This is because we might end up creating new vertical words as we make horizontal ones, but these words must be complete and legal so only certain tiles can be placed.

Then to find the words, we go through each line. Start at a square, place one of the possibles in it (either a tile already on the board, otherwise one of the tiles we worked out from the previous step that could be played), then make a recursive call with the remaining tiles and new board state. Keep doing this, and whenever you score a word, add it to your word list. Once all the possibilities have been exhausted, score the words in your word list, and return the one with the highest score.

To do the same with down words, we simply flip the grid along the diagonal and repeat the above.
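That diagonal flip is just a transpose:

```python
# Transposing the grid turns down words into across words, so the
# across-word search can be reused unchanged.
def transpose(grid):
    return [list(column) for column in zip(*grid)]
```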

Hints and the Final Email

We then construct the final email using our two solutions (one for the normal dictionary, one for the most common words). This consists of two hints and the solutions separated out using a large amount of whitespace so that on a phone you have to scroll down. This means that users will only ever see the first hint unless they scroll down for more. These are constructed simply from the data in the solutions given - the first hint looks at the coordinates of the middle of the word to find the quadrant and attached word, and looks at the word itself for the second hint.
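The quadrant part of the first hint can be sketched like this - the board size and coordinate convention here are assumptions, not the actual project code:

```python
# Find the quadrant of a word from the coordinates of its middle
# square, assuming a square board with (0, 0) in the top-left.
def quadrant(x, y, size=15):   # 15x15 is an assumed board size
    vertical = "top" if y < size / 2 else "bottom"
    horizontal = "left" if x < size / 2 else "right"
    return f"{vertical}-{horizontal}"
```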

Finally, this email is sent off.

Markov Chains Music

posted in Python, Music, Generative at 03 September 2017

For my Extended Project Qualification, I decided to look into using Markov Chains to generate music, with surprisingly okay results. For this I use the music21 Python library to import, analyse and export MusicXML. This was finished in November 2015.

Example Output

Markov Chains

A Markov Chain is essentially a probability distribution that introduces state. To give an example, let's consider a naive music generator that plays a random note with some given probability. It'll play what is essentially a random string of notes, like hitting a keyboard with your fist, hitting some good and some bad notes. The important note to take away from that thought experiment is that there won't be a flowing tune at all - each note is independent of the last. If you try to alleviate this problem by massively weighting our selection to the best notes, you'll end up with something like the head of C Jam Blues - which is a perfectly fine tune, but if I wanted a computer to generate music, I'd quite like it to be able to generate more complex music than just a rehash of C Jam Blues every time.

A good way to make music flow is by considering the previous notes in a phrase, as notes are evidently not composed independently of each other. Otherwise composition would be as easy as rolling some dice and putting dots on a page! To introduce this phrasing, we consider what note we are currently playing when writing the next. Rather than having an absolute list of probabilities (e.g. there's a 0.5 chance of playing a C, 0.1 of a D...) we now have a transition matrix, where each row adds up to 1. When picking our next note, we look at the row of our current note for the probabilities we should use.

x   C   E   G
C | 0.1 0.5 0.4
E | 0.2 0.3 0.5
G | 0.3 0.5 0.2

So if our first note was C, there is a 0.1 probability that we choose a C to be our next note.
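Picking the next note is then a weighted random choice from the current note's row. Using the matrix above (with a hypothetical next_note helper):

```python
import random

# The transition matrix above, stored as a dict of rows.
TRANSITIONS = {
    "C": {"C": 0.1, "E": 0.5, "G": 0.4},
    "E": {"C": 0.2, "E": 0.3, "G": 0.5},
    "G": {"C": 0.3, "E": 0.5, "G": 0.2},
}

def next_note(current):
    row = TRANSITIONS[current]
    # weighted choice: each candidate note is picked with the
    # probability given in the current note's row
    return random.choices(list(row), weights=list(row.values()))[0]
```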

We can expand this to look at multiple previous notes in a higher-order Markov Chain, where we consider the previous few notes as well as the current one. An order N Markov Chain is defined as a Markov Chain considering N inputs total. This gives us a row consisting of multiple previous notes, e.g. in this order 2 transition matrix:

x    C   E   G
CC | 0.1 0.5 0.4
CE | 0.2 0.3 0.5
CG | 0.3 0.5 0.2
...| ... ... ...

This allows us to improve the phrasing of our music even more. However, this comes with a caveat which will be explained as we begin to look into how we determine these probabilities.

Training our Markov Chains

To train our Markov Chains, we must observe what real music does and get probabilities similar to above. We do this by taking training melodies, transposing them so that they are all in the same key, and going through each note and recording the frequency of notes directly after it. First, we convert our notes to numbers, where 1 is the note of the key, 2 is the note above that and so on. Then we construct a 12x12 matrix to fill with frequencies, and initialise every value to zero.

Now we go through each training melody. For each note, on the row of the current note, we record what the next note is by increasing the number at that entry by 1. We repeat this for every note in every melody, and then divide each row by the total of it to give probabilities that sum to 1.
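A sketch of that counting-and-normalising step, with melodies given as lists of scale-degree numbers (1 is the key note, as above):

```python
# Count transitions between successive notes, then divide each row by
# its total so the rows become probabilities summing to 1.
def train(melodies, size=12):
    counts = [[0] * size for _ in range(size)]
    for melody in melodies:
        for current, following in zip(melody, melody[1:]):
            counts[current - 1][following - 1] += 1
    matrix = []
    for row in counts:
        total = sum(row)
        matrix.append([c / total for c in row] if total else row)
    return matrix
```

Rows for notes that never appear in the training data stay all-zero; the real program would need to handle landing on one of those.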

We do the same with rhythm, except we look at note lengths rather than pitches. In my program, rather than working on a note-by-note basis, we instead record full beats of rhythm and change between those. This unfortunately detaches the rhythm from the melody, making the music less pleasing.

Actually Generating Music

This is the easiest bit. Start at a note, then use our Markov Chain to generate each note after it. Do the same for rhythm and map the pitches to notes. Then you're done. Just choose a key, turn your numbers into notes from that key, generate a title (my program does this by applying Markov Chains of order 4 to song titles from The Beatles) and output it as MusicXML.

After experimentation, I decided to use an order 2 Markov Chain for the melody generation. The problem with higher-order Markov Chains is that they need lots of input data: long note sequences are less likely to appear in multiple tunes than short ones, so with too little data the program essentially chooses one of the few options available and follows it, producing verbatim repeats of other tunes.

Extras in the program

The program implements "Tintinnabuli", a method which makes a bassline from the melody. More can be read about it here.

L-Systems

posted in Web, P5, Generative at 02 September 2017

GERBIL is an application for making your own pretty L-Systems made in December 2015.

GERBIL Screenshot

It was made by me and Kirsty in p5.js, which is essentially a Javascript version of Processing. It doesn't use jQuery, instead using raw Javascript for DOM manipulation, although in hindsight using jQuery would have been much more sensible (at the time I knew very little about it). Also in hindsight, this could have been done with raw SVG manipulation rather than p5.js, but p5.js is really nice to use and still worked.

An Introduction to L-Systems

An L-System is a way of generating fractals using "replacement rules" - essentially string replacements - to increase the detail of a pattern infinitely. Each rule transforms a single letter into a string, like in the following example:

F: F-F++F-F
+: +
-: -

If we start with the string F, then we produce the following with each iteration (the brackets are there to separate the replacements out for easier reading):

F -> F-F++F-F -> (F-F++F-F)-(F-F++F-F)++(F-F++F-F)-(F-F++F-F) -> ...

Which generates the following pattern, if F is mapped to forwards and +,- to clockwise and anticlockwise turns respectively:

L-Systems Example

We can keep applying this to add further detail. Each symbol is mapped to one of a set of instructions: either move forwards some distance, backwards, rotate, push the current state to the stack, or pop the last state off the stack and move there.
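An iteration is a simultaneous replacement of every symbol at once, which makes it a one-liner:

```python
# Rules map a symbol to its replacement; symbols without a rule map to
# themselves, like + and - in the example above.
RULES = {"F": "F-F++F-F"}

def iterate(state, rules=RULES):
    return "".join(rules.get(symbol, symbol) for symbol in state)
```

Applying iterate repeatedly reproduces the expansion shown above: each pass replaces every F in place, so the string (and the detail of the curve) grows geometrically.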

Reading L-Systems

To read the given L-System, we first need to load the rules given. In our website we allow users to define as many rules as they want. To import these we sort them into their different types of symbol (whether it goes forwards, rotates etc) and store the [character, replacement] pairs.

To perform an iteration (set of replacements), we can just search through those pairs and apply the replacement. To draw our system's current state, we just go through the symbols, check which type of symbol it is (forwards, rotate...) and perform that operation.

The website also provides capability for saving your own L-Systems, which simply creates a set of key-value pairs that can be read and written.

Work at Tech Will Save Us

posted in Arduino, Web, Hardware at 01 September 2017

For most of the last few summers (2013-2016), I have been at Technology Will Save Us (a London-based tech startup that makes kits to teach children about electronics and programming) for summer work. Spending time there has let me work on some very cool stuff on some of their kits and their website, including:

The DIY Gamer

For the DIY Gamer Kit, I was able to work on part of the library (creating a font for printing words to the display, and working on the beginnings of its IR functionality) and also code the game Snake for it. I also worked on an implementation for the game Simon Says, providing code that was easy for a beginner to read and understand. Multiple versions of the code were provided so that people could work through each version, slowly adding features until they had the complete game. I then combined these games into one program that allows you to switch between these two games, Pong and Breakout.

I also prototyped a scoreboard for the Gamer, which had an IR receiver and three seven-segment displays so it could display the highest score it had received for each game.

Website Work

During one of the summers I worked on elements of the website, helping them shift it over from Wordpress to a Mustache templating system based setup. This involved rewriting pages to suit the new format. I also worked on their (previously live, not sure what happened to it) Meet The Team page, which involved writing Javascript to avoid loading the full animated GIFs up front. In the end I had to load the GIFs via Javascript. Previously I'd tried loading each GIF only while it was hovered over, but this required giving users instructions about how to use the page, which is a bad sign for something as simple as a Meet The Team page.

MicroBit Compiler

In the summer of 2016, I worked on a preliminary web-based compiler for the MicroBit kit, to back TWSU's online interface for programming it. This involved researching what compilers were available and discussing what language to translate the online interface into (the online programming environment is block-based, so it would be translated into code before compilation). It also involved writing some Node.js to actually build this.

In the end I built two compilers, one which compiles Arduino code and one which attaches an interpreter to Python code.

Other Work

I did some work attempting to get a desktop application for programming Arduino boards from the web working on Windows, my favourite (read: not favourite) operating system. This involved formatting system paths and trying various options to get the Arduino npm module to even consider working.

I also got to experiment with compilation over audio and while I didn't get very far with it, I learned a lot about how Arduino code is compiled and controlling audio using the Web Audio API.

I also helped out at a few of their workshops helping people program and use some of TWSU's kits.

Only Connect

posted in Love2d, Hardware at 01 September 2017

Some backstory: one of my teachers at sixth form was an Only Connect contestant who thought that running our own Only Connect would be cool. So, I wrote software and created buzzers for our own version! This was made in February 2015, although at the time of writing this post I'm rewriting some of the code to prove I have improved and do know how to use a for loop.

Only Connect Software

The Software

The application is written in LÖVE 2d, a library for making games in Lua, and also the only thing I knew how to make graphical applications in at the time. Whoops. Needless to say, for making GUIs, this is a mistake. But anyway, a few nice things came out of it. This application can run an entire Only Connect show including tiebreakers and can be controlled with just your keyboard rather than fancy special hardware (more on that later). It features nice animations, a way to read sets of questions from file and a host console that tells the host what the scores are and who should go next (which is just a bunch of print statements to console, nothing special there).

Animations

The animations are done by defining linear tweens that can be used throughout the code. A tween is stored as a Lua table containing its start value, end value, length and how much time has elapsed. When updated with updateTween(tween), this internal count of how much time has elapsed is updated. When getting the current value of the tween with val(tween), it just does start + (end-start)*(time/total_time), as you would expect from a linear transition.
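The same arithmetic, sketched in Python for illustration (the original is a Lua table plus a couple of functions):

```python
# A linear tween: start value, end value, total length, elapsed time.
def make_tween(start, end, length):
    return {"start": start, "end": end, "length": length, "time": 0}

def update_tween(tween, dt):
    # advance the tween's internal clock, clamped to its length
    tween["time"] = min(tween["time"] + dt, tween["length"])

def val(tween):
    # start + (end - start) * (time / total_time), as described above
    t = tween["time"] / tween["length"]
    return tween["start"] + (tween["end"] - tween["start"]) * t
```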

These are used throughout the program to make things glide across the screen rather than teleport. It is also used to fade between colours, for example when a hieroglyph is selected, it fades to and from the new colour with three tweens - red, green and blue value.

Reading Questions from File

| round 2
Toy Story Movies~Toy Story;Toy Story 2;Toy Story 3;Toy Story 4 (Coming Soon!)
Rainbow~Red;Orange;Yellow;Green
Composite Numbers~20;21;22;24
*Grades In Some Rhythm Games~C (about 60%);B (about 70%);A (about 80%);S (about 90%+)
Noble Gases~Argon;Krypton;Xenon;Radon
Note Lengths~Crotchet;Minim;Semibreve;Breve
| round 3 #1
Types of Netrunner Card~Operation;Hardware;Program;Resource
Cold~Ice;Snow;Wind;Shoulder
Synonyms for Situation~Problem;Predicament;Situation;Event
Gifts~Present;Offering;Sacrifice;Delivery
| round 3 #2
Programming Languages~Go;C;Python;Java
Action Words~Stop;Wait;Run;Hop
Letters~B;A;D;E
Snakes~Cobra;Adder;Anaconda;Milk

Here is part of an Only Connect question file. In importer.lua, we import this as the questions by putting them in the relevant Lua tables and splitting up the clues. The format is pretty simple: | starts a comment, ~ splits a question and answer and ; separates answers. We also have * and $ prefixes which mark the picture and music rounds respectively, which look for the pictures and music files present in the folder shared with this questions file. The code for converting these is mostly just multiple applications of string.gmatch, so is uninteresting. But the format itself is quite nice.
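A sketch of a parser for this format (simplified - it skips the round markers and doesn't look up the picture or music files):

```python
# "|" starts a comment line, "~" splits the connection from its clues,
# ";" separates clues, and a leading "*" or "$" marks picture and
# music questions respectively.
def parse_questions(text):
    questions = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("|"):
            continue
        kind = "normal"
        if line[0] in "*$":
            kind = "picture" if line[0] == "*" else "music"
            line = line[1:]
        connection, _, clues = line.partition("~")
        questions.append({"kind": kind,
                          "connection": connection,
                          "clues": clues.split(";")})
    return questions
```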

Actually Displaying Things

This isn't very special, and mostly just involves drawing things at certain positions on screen. It does account for changing the size of the screen, however: a scale variable is defined based on the height of the window, and everything is multiplied by that scale. This meant that everything could be tested at one screen size and would automatically work for others. A second scale variable is also defined that works out how much wider it is, so that even if you just change the width and not the height, the elements will move to centre, such as on a particularly wide screen.

The Hardware

Only Connect Hardware

The hardware, despite looking like a mess of wires, is relatively simple. We have some arcade buttons attached to a MakeyMakey, which converts electrical input into keypresses. There are four boxes in total - two with just a single button that acts as a buzzer for each team, one to house the MakeyMakey and one for the host. The host box has three buttons - one to say the team was correct, one to say the team was incorrect and one to move the question on (reveal new clues, move to the next question etc). The MakeyMakey could easily be replaced by an Arduino, but that would require an Arduino capable of simulating keypresses.

Birdsong

posted in Web, Youngrewired, Music, Generative at 01 September 2017

Birdsong is a website that plays back a subset of real-time tweets in a given area, performs basic sentiment analysis on them via an API and plots them on a map while playing a tune depending on the mood of the tweet. It was made by Freddy (freddinator on Github) and myself for the YRS 2014 event.

Birdsong

How does it work?

Structure Diagram

The diagram is pretty self-explanatory. We have a Python harvester collect and write a subset of all tweets in a certain area to a .json file, which is read by PHP. The PHP script also queries the language analysis API (we used this one, but in hindsight we could have used Python's Natural Language Toolkit on our Python server) and returns these tweets to our client-side Javascript. This Javascript uses Mapbox and the WebAudio API to plot our tweets and make pretty tunes for them.

My job was working on the frontend side of things, so I turned our PHP output into pins on a map and music. The pins on a map bit was mostly basic parts of Mapbox's functionality, but the music generation was more interesting so I'll touch on that.

Music Generation

To generate the music, we start in a key and move around the Circle of Fifths randomly. This allows us to easily shift key without causing too much of a jump in the music. We change key every time we encounter a space character, moving in a direction dependent on the previous character.

Before playing notes, we split our tweet up into words. We split the allocated time (each tweet takes a second to play) equally between each word and equally between the letters of each word, so shorter words will have longer tones for each letter.

Once we have a word, we parse it. Rather than randomly choosing like in some procedural generation, everything in this relies on the code of each character of the tweet. To decide where in the scale of our key we jump when playing a note, we pass in the character code and change position based on it (mod 18) - e.g. an "a" might mean going one step up our scale. We do this for each letter, keeping track of where in the scale we are so that it doesn't jump too far throughout the scale.
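A hedged sketch of that character-driven walk through the scale - the mod-18 centring and the scale itself are assumptions, not the actual song.js code:

```python
# Each character's code (mod 18) moves our position in the scale,
# centred so that steps can go down as well as up.
SCALE = ["C", "D", "E", "F", "G", "A", "B"]

def word_to_degrees(word, start=0):
    position, degrees = start, []
    for char in word:
        step = (ord(char) % 18) - 9   # map to a step of roughly -9..8
        position = (position + step) % len(SCALE)
        degrees.append(position)
    return degrees
```

Because every step is relative to the current position, the melody wanders through the scale rather than jumping to unrelated notes, and the same word always produces the same phrase.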

All of this can be seen in the song.js file in the repo linked below.

You Must Get The Atoms

posted in Game, Love2d, Music at 01 September 2017

You Must Get The Atoms is a puzzle game created in August 2013 during Ludum Dare 48 about collecting atoms, combining them to make compounds and using those to get through the levels (although if you play it you'll be able to tell that I don't have a chemistry background so some of the compounds do weird things, like CO conducts electricity? I think I confused C and Cu). The main mechanic involves swapping atoms you've collected around so they react with each other.

Playing it now I can see that the timer (there's a ten second timer for each level, as the theme of the game jam was "10 seconds") was very much bolted on and makes the game stupidly difficult, but it still includes some nice touches including a level editor, tutorial throughout the game and actual reasonable collision detection. I also did the music for it, which seems to get annoying after a while but I'm still proud of it.

You Must Get The Atoms screenshot

Song Of Sparks

posted in Love2d, Music, Game at 01 September 2017

Song Of Sparks is a game made for Mini Ludum Dare #43 in June 2013. In it, you create and move little sparks which float around the screen away from your cursor to create music. Each spark is an oscillator which plays a higher note the higher up it is on the screen. The sparks play in order from left to right, thus allowing you to create a sort-of melody by moving the sparks around. There's a story too. For this game I created the background music and did the audio programming that allowed the sparks to make the right pitches at the right time.

Song of Sparks