One Weird Fourier Trick for Combinatorial Data


Cell phone photo from the scene of the crime

Here are the slides from the talk I gave in Montreal last Friday.

The talk was about using Fourier transforms to get polynomial-time encodings of permutation statistics, as well as a look at the Kondor-Borgwardt approach to graph invariants via the Fourier transform over S_n.  The talk was given at a representation theory conference, and I was making the point that we can get new research ideas by taking trips into the world of applications – in my case, by looking at machine learning problems.  The opening joke was that I asked my computer for the best possible title for the talk, and received the click-bait title as a response.  It was admittedly a pretty funny moment watching the chair of the session trying to decide whether to read the title originally submitted for the talk (‘Compressed Combinatorial Statistics’) or the ridiculousness on the screen (he went with the original).
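For the curious who don’t want to click through the slides, here is the basic object in play.  The Fourier transform of a function f on S_n at a representation ρ is the matrix you get by summing f(σ)ρ(σ) over all permutations σ.  The little sketch below is my own toy code, not anything from the talk: it computes such a coefficient for S_3, using the ordinary (reducible) permutation representation only because its matrices are easy to write down.

```python
# Toy Fourier coefficient over S_3 at the permutation representation.
# My own illustration, not code from the talk.
import itertools
import numpy as np

def perm_matrix(sigma):
    """Permutation matrix of sigma, given as a tuple like (1, 0, 2)."""
    n = len(sigma)
    mat = np.zeros((n, n))
    for i, j in enumerate(sigma):
        mat[j, i] = 1.0
    return mat

def fourier_coefficient(f, n=3):
    """The matrix sum of f(sigma) * rho(sigma) over all sigma in S_n."""
    return sum(f(sigma) * perm_matrix(sigma)
               for sigma in itertools.permutations(range(n)))

# Example statistic: the number of fixed points of each permutation.
fixed_points = lambda sigma: sum(1 for i, j in enumerate(sigma) if i == j)
print(fourier_coefficient(fixed_points))
```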

Roshambo Part III – Representation Theory

In the last two posts, we’ve looked at using machine learning for playing iterated Roshambo.  Specifically, we saw how to use Bayes’ theorem to try to detect and exploit patterns, and then saw how Fourier transforms can give us a concrete measurement of the randomness (and non-randomness) in our opponent’s play.  Today’s post is about how we can use representation theory to improve our chances of finding interesting patterns.
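As a reminder of what that Fourier measurement looked like in miniature, here is a toy version.  This is my own sketch, simplified from what the earlier post actually does: encode r/p/s as the elements 0/1/2 of Z_3 and take the discrete Fourier transform of the empirical distribution of moves.  For perfectly uniform play the two non-trivial coefficients vanish, so their size is one crude measure of non-randomness.

```python
# Toy non-randomness measure (my own sketch): Fourier transform over Z_3 of the
# empirical distribution of an opponent's moves.
import numpy as np

def move_spectrum(moves):
    """Magnitudes of the three Z_3 Fourier coefficients of the move distribution."""
    encode = {'r': 0, 'p': 1, 's': 2}
    counts = np.zeros(3)
    for m in moves:
        counts[encode[m]] += 1
    probs = counts / counts.sum()
    # Coefficient 0 is always 1; coefficients 1 and 2 are 0 for uniform play.
    return np.abs(np.fft.fft(probs))

print(move_spectrum('rrprrsrrp'))   # heavily biased towards rock
print(move_spectrum('rpsrpsrps'))   # uniform counts give spectrum [1, 0, 0]
```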

Niels Henrik Abel, for whom ‘Abelian groups’ are named.  These are groups where xy=yx for any x, y.

Continue reading

Roshambo Part II – Fourier Analysis

In the last post, we looked at using an algorithm suggested by Bayes’ Theorem to learn patterns in an opponent’s play and exploit them.  The game we’re playing is iterated rock-paper-scissors, with 1000 rounds of play per game.  The opponent’s moves are a string of choices, ‘r’, ‘p’, or ‘s’, and if we can predict what they will play next, we’ll be able to beat them.  In trying to discover patterns automatically, we’ll gain some general knowledge about detecting patterns in streams of characters, which has interesting applications ranging from biology (imagine ‘GATC’ instead of ‘rps’) to cryptography.
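To make that idea concrete, here is a stripped-down predictor of the same flavor.  It’s a toy sketch of my own, not the exact algorithm from the last post, and the context length k=2 and the Laplace smoothing are my assumptions: estimate how likely each move is to follow the opponent’s last two moves, then play whatever beats the most likely prediction.

```python
# Toy Bayesian-flavored predictor (my own sketch, not the post's exact algorithm).
from collections import defaultdict

BEATS = {'r': 'p', 'p': 's', 's': 'r'}   # what to play in order to beat each move

def respond(history, k=2):
    """Choose our move, given the opponent's history as a string of r/p/s."""
    # Count how often each move follows each length-k context, with Laplace smoothing.
    counts = defaultdict(lambda: {'r': 1, 'p': 1, 's': 1})
    for i in range(len(history) - k):
        context, nxt = history[i:i + k], history[i + k]
        counts[context][nxt] += 1
    prediction = max(counts[history[-k:]], key=counts[history[-k:]].get)
    return BEATS[prediction]

print(respond('rpsrpsrps'))   # the pattern suggests 'r' comes next, so we play 'p'
```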

Fourier analysis is helpful in a wide variety of domains, ranging from music to image encoding.  A great example suggested by ‘Building Machine Learning Systems with Python’ is classifying pieces of music by genre.  If we’re given the waveform of a piece of music, automatically detecting its genre is difficult.  But applying the Fourier transform breaks the music up into its component frequencies, which turn out to be quite useful in determining whether a song is (say) classical or metal.
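Here is a bare-bones illustration of that step.  It’s my own toy example, not taken from the book, and the sample rate and the fake ‘song’ (two sine waves plus noise) are invented for the demo; the point is just that the FFT recovers the dominant frequencies.

```python
# Toy example (mine, not from the book): recover the dominant frequency of a
# fake 'song' built from two sine waves plus noise.
import numpy as np

rate = 8000                                   # samples per second (assumed)
t = np.arange(0, 1.0, 1.0 / rate)
wave = (np.sin(2 * np.pi * 440 * t)           # a 440 Hz tone...
        + 0.5 * np.sin(2 * np.pi * 660 * t)   # ...a quieter 660 Hz tone...
        + 0.1 * np.random.randn(len(t)))      # ...and some noise

spectrum = np.abs(np.fft.rfft(wave))          # strength of each frequency component
freqs = np.fft.rfftfreq(len(wave), 1.0 / rate)
print(freqs[spectrum.argmax()])               # prints 440.0, the dominant frequency
```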

This goat recognizes and apparently enjoys metal. Possibly using a furrier transform… (sorry.)

Continue reading

Thoughts on Machine Learning, Representation Theory, and RoShamBo – Part I

I’ve recently been doing some reading on machine learning with a mind towards applying some of my prior knowledge of representation theory.  The initial realization that representation theory might have some interesting applications in machine learning came from discussions with Chris Olah at the Toronto HackLab a couple of months ago; you can get interesting new insights by exploring new spaces!  Over winter break I’ve been reading Bishop’s ‘Pattern Recognition and Machine Learning’ (slowly), alongside faster reads like ‘Building Machine Learning Systems with Python’.  As I’ve read, I’ve realized that there is plenty of room for introducing group theory into machine learning in interesting ways.  (Note: This is the first of a few posts on this topic.)

There’s a strong tradition of statisticians using group theory, perhaps most famously Persi Diaconis, who used representation theory of the symmetric group to find the mixing time for card shuffling.  His notes ‘Group Representations in Probability and Statistics’ are an excellent place to pick up the background material with a strong eye towards applications.  Over the next few posts I’ll make a case for the use of representation theory in machine learning, emphasizing automatic factor selection and Bayesian methods.

First, an extremely brief overview of what machine learning is about, and an introduction to using a Bayesian approach to play RoShamBo, or rock-paper-scissors.  In the second post, I’ll motivate the representation-theoretic viewpoint by exploring the Fourier transform and how to use it to beat repetitive opponents.  Finally, in the third post I’ll look at how we can use representations to select factors for our Bayesian algorithm, by examining likelihood functions as functions on a group.

RPS25 is a generalized form of Rock Paper Scissors. I’ll leave generalizations of this post to the RPS25 case as an exercise to the reader…

Continue reading

An Evening at Hacklab.to

A 3D printer at HackLab printing a model of a jet engine turbine.

On Tuesday evening I finally made it to the Toronto Hacklab, after meaning to visit for weeks!

It’s a really interesting space.  There are currently five 3D printers, lots of tools for playing with electronics, and a giant computer-driven laser in the bathroom for etching and cutting plastic or acrylic.  (Actually, I should try to make an acrylic LakeHub logo to send back to Kenya….)  According to Eric, who gave me a tour of the space, about two thirds of the members come through to work on these sorts of projects, and a third are mainly software people who use it as a common work space.

A couple of my favorite toys that I saw on the tour were a giant LED pixel board and a ‘flip-dot board’, both of which had been salvaged from the Toronto Transit Commission, which runs all the buses and subways.  Members of the Hacklab built electronic interfaces to both devices: the LED board is run by an Arduino and can be sent messages to display, and otherwise acts as a clock.  The flip-dot board looks like it’s run by some custom microcontrollers, and is hooked up to a joystick for playing ‘snake.’

Continue reading

Sage for Specht Modules

Apropos of nothing, these were some cute monkeys playing on the water tank this morning. I actually got a very close look at them, but by the time the camera was out, they were scurrying onto the roof.

As the strike drags on, I’ve had some time to actually do some maths.  In particular, I’m preparing to run a weekly seminar on using representation theory for certain statistical problems.  The plan is to work from some old lecture notes by Persi Diaconis entitled ‘Group Representations in Probability and Statistics.’  These deal with, for example, how long one should apply a random shuffling process before the thing being shuffled is well-mixed.  Particular examples include the question of how many times one should shuffle a deck of cards, and how long a random walk on Z_n should run before its distribution is close to uniform.  There are numerous real-world applications of the results, and the analysis uses a lot of first-rate representation theory along the way!
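To give a small taste of the sort of question in the notes, here is a quick sketch of my own, only loosely tied to the seminar material; the choice of n=15 and of the lazy walk (steps of +1, -1, or 0 with equal probability) are my assumptions.  It evolves the walk’s distribution exactly and tracks its total variation distance to the uniform distribution as the number of steps grows.

```python
# Toy mixing-time experiment (my own sketch, only loosely tied to the notes):
# evolve the distribution of a lazy random walk on Z_15 and track its total
# variation distance to the uniform distribution.
import numpy as np

def tv_distance_after(steps, n=15):
    """TV distance to uniform after the given number of steps, starting at 0."""
    dist = np.zeros(n)
    dist[0] = 1.0
    for _ in range(steps):
        # Step by +1, -1, or 0 with equal probability (indices wrap around Z_n).
        dist = (np.roll(dist, 1) + np.roll(dist, -1) + dist) / 3.0
    return 0.5 * np.abs(dist - 1.0 / n).sum()

for steps in (10, 50, 100, 200):
    print(steps, tv_distance_after(steps))
```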

Continue reading