Winter on Georgian Bay

Back in late December, I planted a Raspberry Pi camera at a cottage on Georgian Bay, in Northern Ontario, set to take a picture once every two minutes.  I had been planning the shoot for a couple of months prior to the deployment: there were two Raspberry Pis involved, in case one failed at some point during the winter.  One of the Pis was set to reboot once a week, in case the software crashed while the Pi itself was still awake.  I had also written some software for the time-lapse to ensure that pictures were only taken during the daytime, and to try to maintain a balance of well-lit, consistent images over the course of each day.
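
For the curious, here’s a minimal sketch of what that capture loop might have looked like, assuming the standard picamera library.  The fixed daytime window, the interval constant, and the output path are placeholders of mine; the actual software tracked daylight and exposure more carefully than this.

```python
# A minimal sketch of a daytime-only capture loop.  Assumptions:
# the standard picamera library, a crude fixed daylight window,
# and a placeholder output path.
import time
from datetime import datetime
from picamera import PiCamera

CAPTURE_INTERVAL = 120        # seconds between frames: one shot every two minutes
DAY_START, DAY_END = 7, 18    # crude stand-in for real sunrise/sunset tracking

camera = PiCamera()
camera.exposure_mode = 'auto'  # let the camera balance the lighting

while True:
    now = datetime.now()
    if DAY_START <= now.hour < DAY_END:
        camera.capture('/home/pi/frames/' + now.strftime('%Y%m%d-%H%M%S') + '.jpg')
    time.sleep(CAPTURE_INTERVAL)
```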

In spite of all the planning, I had a sense that something would go horribly wrong, and, indeed, when we showed up to the cottage, the windows were completely frosted over.  The cameras had to be placed inside, so we figured we would mainly see the back side of an icy window when we retrieved the cameras.  Or that the camera boards would literally freeze after about a week of sub-zero temperatures in the unheated cottage.  Or that a raccoon would find its way in and gnaw through the shiny Lego cases.  Or something else entirely unplanned for.

So it was a bit of a surprise when it turned out that the shoot went perfectly.  We retrieved the cameras about a week ago, on May 7th, and found over 42,000 photos waiting for us on one of the cameras and somewhat fewer on the other.  Both cameras had survived the winter just fine!

All told, I think the result was really cool!  The video at the top is the ‘highlights’ reel, with all of the best days.  It comes to 13 minutes at 18 frames per second.  It turns out it was a fantastic winter for doing a time-lapse, with lots of snow storms and ice.  There’s even the occasional bit of wildlife, if you watch closely.  I’ll post the full 40-minute time-lapse on YouTube sometime next week.

Continue reading

Roshambo Part III – Representation Theory

In the last two posts, we’ve looked at using machine learning for playing iterated Roshambo.  Specifically, we saw how to use Bayes’ theorem to try to detect and exploit patterns, and then saw how Fourier transforms can give us a concrete measurement of the randomness (and non-randomness) in our opponent’s play.  Today’s post is about how we can use representation theory to improve our chances of finding interesting patterns.

Niels Henrik Abel, for whom ‘Abelian groups’ are named.  These are groups where xy=yx for any x, y.
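
To make the connection to the earlier posts concrete (my summary here, not a quote from the post): the three moves can be identified with the cyclic group Z/3Z, which is abelian, and its irreducible representations are exactly the characters that the Fourier transform of Part II expands against.

```latex
% Irreducible representations (characters) of Z/3Z, and the
% Fourier transform of a function f on the group in that basis.
\[
  \chi_k(n) = e^{2\pi i k n / 3}, \qquad k \in \{0, 1, 2\},
\]
\[
  \hat{f}(k) = \sum_{n=0}^{2} f(n)\,\overline{\chi_k(n)}.
\]
```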

Continue reading

Roshambo Part II – Fourier Analysis

In the last post, we looked at using an algorithm suggested by Bayes’ Theorem to learn patterns in an opponent’s play and exploit them.  The game we’re playing is iterated rock-paper-scissors, with 1000 rounds of play per game.  The opponent’s moves are a string of choices, ‘r’, ‘p’, or ‘s’, and if we can predict what they will play, we’ll be able to beat them.  In trying to discover patterns automatically we’ll gain some general knowledge about detecting patterns in streams of characters, which has interesting applications ranging from biology (imagine ‘GATC’ instead of ‘rps’) to cryptography.
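
As a concrete illustration of that kind of predictor (a hedged sketch of mine; the helper names are hypothetical and the post’s actual algorithm may differ), one simple version estimates the frequency of the opponent’s next move conditioned on their last few moves, then plays whatever beats the most likely continuation:

```python
# A hedged sketch of a frequency-based predictor: estimate
# P(next move | last k moves) from the observed history, then
# play whatever beats the most likely prediction.
from collections import Counter, defaultdict

BEATS = {'r': 'p', 'p': 's', 's': 'r'}  # the move that beats each move

def predict_and_counter(history, k=2):
    """Predict the opponent's next move from k-gram counts and counter it."""
    counts = defaultdict(Counter)
    for i in range(len(history) - k):
        counts[history[i:i + k]][history[i + k]] += 1
    context = history[-k:]
    if counts[context]:
        prediction = counts[context].most_common(1)[0][0]
    else:
        prediction = 'r'  # unseen context: fall back to an arbitrary guess
    return BEATS[prediction]

print(predict_and_counter('rpsrpsrps'))  # cycling opponent: 'r' is next, so play 'p'
```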

Fourier analysis is helpful in a wide variety of domains, ranging from music to image encoding.  A great example suggested by ‘Building Machine Learning Systems with Python’ is classifying pieces of music by genre.  If we’re given the waveform of a piece of music, automatically detecting its genre is difficult.  But applying the Fourier transform breaks the music up into its component frequencies, which turn out to be quite useful in determining whether a song is (say) classical or metal.
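
The same trick carries over to our toy setting.  Here’s a minimal sketch (the numeric encoding and variable names are my own assumptions): encode the moves as numbers and take a discrete Fourier transform, and a repetitive opponent shows up as a sharp spike in the spectrum.

```python
# A minimal sketch: a periodic opponent produces a sharp spike in
# the Fourier spectrum of their (numerically encoded) move string.
import numpy as np

moves = 'rpsrpsrpsrpsrpsrps'                      # a perfectly period-3 opponent
signal = np.array(['rps'.index(m) for m in moves], dtype=float)
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
print(spectrum.round(2))  # dominant entry at the period-3 frequency (index 6 here)
```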

This goat recognizes and apparently enjoys metal. Possibly using a furrier transform… (sorry.)

Continue reading

Thoughts on Machine Learning, Representation Theory, and RoShamBo – Part I

I’ve recently been doing some reading on machine learning with a mind towards applying some of my prior knowledge of representation theory.  The initial realization that representation theory might have some interesting applications in machine learning came from discussions with Chris Olah at the Toronto HackLab a couple of months ago; you can get interesting new insights by exploring new spaces!  Over winter break I’ve been reading Bishop’s ‘Pattern Recognition and Machine Learning’ (slowly), alongside faster reads like ‘Building Machine Learning Systems with Python’.  As I’ve read, I’ve realized that there is plenty of room for introducing group theory into machine learning in interesting ways.  (Note: This is the first of a few posts on this topic.)

There’s a strong tradition of statisticians using group theory, perhaps most famously Persi Diaconis, who used representation theory of the symmetric group to find the mixing time for card shuffling.  His notes ‘Group Representations in Probability and Statistics’ are an excellent place to pick up the background material with a strong eye towards applications.  Over the next few posts I’ll make a case for the use of representation theory in machine learning, emphasizing automatic factor selection and Bayesian methods.

First, an extremely brief overview of what machine learning is about, and an introduction to using a Bayesian approach to play RoShamBo, or rock-paper-scissors.  In the second post, I’ll motivate the representation-theoretic viewpoint by exploring the Fourier transform and how to use it to beat repetitive opponents.  Finally, in the third post I’ll look at how we can use representations to select factors for our Bayesian algorithm by examining likelihood functions as functions on a group.

RPS25 is a generalized form of Rock Paper Scissors. I’ll leave generalizations of this post to the RPS25 case as an exercise to the reader…

Continue reading