Finding Community

I attended a really nice talk by Arash Amini yesterday about detecting ‘communities’ in sparse graphs.  The basic problem is: in a big graph (like the Facebook graph, or the graph of citations between scientific papers) you have clusters of friends/associates, and you want to pick out those clusters.  Dr. Amini and his collaborators have been working on methods to solve this problem in particularly noisy cases.  The methods presented were developed for cases where you know in advance the number of clusters you are seeking.  It was a fascinating talk, and I came away wondering whether there are good methods to use when you aren’t sure how many clusters there are.

At the left is a randomly generated block-diagonal adjacency matrix (using a stochastic block model). In the center is the same matrix with the rows randomly shuffled; you can’t “see” that it has block-diagonal structure. At the right is the matrix with the blocks recovered using the techniques outlined in this post. Note the noise in the lower-right corner; those entries correspond to individual, isolated vertices that ended up in their own clusters.
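(For the curious: here’s a minimal sketch of how such a matrix can be generated and scrambled with NumPy. The function name, block sizes, and edge probabilities below are my own inventions for illustration, not the code behind the figure.)

```python
import numpy as np

rng = np.random.default_rng(0)

def sbm_adjacency(sizes, p_in=0.5, p_out=0.05):
    """Sample a symmetric adjacency matrix from a simple stochastic block model."""
    n = sum(sizes)
    labels = np.repeat(np.arange(len(sizes)), sizes)   # block label of each vertex
    # Edge probability is p_in within a block, p_out across blocks.
    probs = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    upper = np.triu(rng.random((n, n)) < probs, k=1)   # sample each edge once
    return (upper | upper.T).astype(int)

A = sbm_adjacency([30, 30, 40])
perm = rng.permutation(A.shape[0])
scrambled = A[perm][:, perm]   # permuting rows and columns hides the blocks
```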



RoShamBo Part IV: Implementation

I’ve finally had a bit of time to finish up the code for the Rock-Paper-Scissors bot discussed in the previous posts; it now lives in a GitHub repository here.  Feel free to file issues if you would like it to be a bit more user-friendly.

Win percentage over time, Diaconis vs Switchbot. As time increases, the win percentage settles in on 2/3, which is a natural limit for play against Switchbot.

The bot which uses the Fourier transform on move probabilities to search for profitable patterns is named `diaconis`, after Persi Diaconis. It’s currently working just fine, but is a bit slow computationally. On startup, it generates all possible move patterns that it will investigate during the course of each game, as well as some character tables. After each play, it tests some of these patterns, and tries to determine if their predictive power is better than any of the patterns seen thus far. If so, it begins using that pattern to choose move probabilities.
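The real code is in the repository above, but as a rough illustration of the test-and-switch loop (leaving out the Fourier machinery entirely), here’s a toy bot that scores simple ‘lag’ patterns by their track record and counters the prediction of the best one. Everything here, names included, is invented for the sketch:

```python
from collections import Counter

BEATS = {'r': 'p', 'p': 's', 's': 'r'}   # the move that beats each move

def score_lag(history, lag):
    """Hit rate of predicting each move from the move `lag` turns earlier."""
    table, hits, total = {}, 0, 0
    for i in range(lag, len(history)):
        counts = table.setdefault(history[i - lag], Counter())
        if counts and counts.most_common(1)[0][0] == history[i]:
            hits += 1          # the pattern would have predicted this move
        counts[history[i]] += 1
        total += 1
    return (hits / total if total else 0.0), table

def respond(history, lags=(1, 2, 3)):
    """Use whichever lag pattern has predicted best so far.
    Assumes len(history) >= max(lags)."""
    rate, table, lag = max((score_lag(history, k) + (k,) for k in lags),
                           key=lambda t: t[0])
    counts = table.get(history[-lag])
    predicted = counts.most_common(1)[0][0] if counts else 'r'
    return BEATS[predicted]
```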

This works fine as a proof of concept of the basic ideas. Additional improvements could come from optimizing the code for speed, from keeping a list of good patterns, and from allowing a bit more flexibility in switching between the patterns used for prediction.

Escape From the Box Factory: Better Single Variable Optimization Problems

I’m teaching an intro calculus class this year (specifically, ‘Math for Life and Social Science’), and recently came to the section on optimization.  It’s a really important subject, and yet the optimization problems one finds in calculus books (even good ones) tend to be contrived examples which I refer to as ‘box factory problems’: things along the lines of ‘minimize the surface area of a rectangular box with volume 1000 cm^3’.

Hey, kids! Let’s take a field trip to the box factory!

These are fine for a problem or two: there’s a useful skill in taking a real-sounding problem and translating it into the mathematics you’re learning.  We use the constraints (in this case, on the volume) to reduce the number of dimensions, turn the problem into a one-variable calculus problem, and then solve.  All well and good, but these problems somehow completely miss the impact of optimization on society at large, largely because the optimization problems that occur most commonly in the wild have a slightly different flavour.
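(For the record, the box problem goes like this, assuming a square base for simplicity: with base side x and height h, the volume constraint gives h = 1000/x^2, so the surface area becomes a function of one variable, S(x) = 2x^2 + 4xh = 2x^2 + 4000/x.  Setting S'(x) = 4x − 4000/x^2 = 0 gives x^3 = 1000, so x = h = 10 cm, and the optimal box is a cube.)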

Problem: In Boston, we observe that the monthly rents for three one-bedroom apartments are $1300, $1150, and $950.  Rents for three two-bedroom apartments are $1500, $1700, and $1200.  Assuming that the cost of a 0-bedroom apartment is $500, find the best possible line describing rent as a function of the number of bedrooms.
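One natural reading, and an assumption on my part, is that the $500 figure pins down the intercept of the line, so the sum of squared errors becomes a function of the slope alone: a genuinely single-variable optimization problem. A quick sanity check in NumPy:

```python
import numpy as np

bedrooms = np.array([1, 1, 1, 2, 2, 2])
rent = np.array([1300, 1150, 950, 1500, 1700, 1200])
intercept = 500   # assumed rent of a 0-bedroom apartment

# Minimize E(m) = sum((intercept + m*x - y)^2).  Setting dE/dm = 0 gives:
m = np.sum(bedrooms * (rent - intercept)) / np.sum(bedrooms ** 2)
print(m)   # 513.33..., so rent is roughly 500 + 513 * (number of bedrooms)
```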

Roshambo Part III – Representation Theory

In the last two posts, we’ve looked at using machine learning for playing iterated Roshambo.  Specifically, we saw how to use Bayes’ theorem to try to detect and exploit patterns, and then saw how Fourier transforms can give us a concrete measurement of the randomness (and non-randomness) in our opponent’s play.  Today’s post is about how we can use representation theory to improve our chances of finding interesting patterns.

Niels Henrik Abel, for whom ‘Abelian groups’ are named.  These are groups where xy=yx for any x, y.


Roshambo Part II – Fourier Analysis

In the last post, we looked at using an algorithm suggested by Bayes’ Theorem to learn patterns in an opponent’s play and exploit them.  The game we’re playing is iterated rock-paper-scissors, with 1000 rounds of play per game.  The opponent’s moves are a string of choices, ‘r’, ‘p’, or ‘s’, and if we can predict what they will play, we’ll be able to beat them.  In trying to discover patterns automatically we’ll gain some general knowledge about detecting patterns in streams of characters, which has interesting applications ranging from biology (imagine ‘GATC’ instead of ‘rps’) to cryptography.
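As a quick reminder of the flavour of that approach, here’s a minimal order-1 version: estimate P(next move | previous move) with Laplace-smoothed counts (a uniform Dirichlet prior, if you like) and play the counter to the likeliest move. The post considered richer patterns than this; the code below is just a sketch:

```python
BEATS = {'r': 'p', 'p': 's', 's': 'r'}
MOVES = 'rps'

def predict_next(history, smoothing=1.0):
    """Posterior-mean estimate of P(next move | previous move)."""
    counts = {m: dict.fromkeys(MOVES, smoothing) for m in MOVES}
    for prev, nxt in zip(history, history[1:]):
        counts[prev][nxt] += 1
    dist = counts[history[-1]]               # condition on the latest move
    total = sum(dist.values())
    return {m: c / total for m, c in dist.items()}

probs = predict_next('rpsrpsrpsr')           # opponent cycles r -> p -> s,
our_move = BEATS[max(probs, key=probs.get)]  # so 'p' is likeliest; we play 's'
```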

Fourier analysis is helpful in a wide variety of domains, ranging from music to image encoding.  A great example suggested by ‘Building Machine Learning Systems with Python‘ is classifying pieces of music by genre.  If we’re given a wave-form of a piece of music, automatically detecting its genre is difficult.  But applying the Fourier transform breaks the music up into its component frequencies, which turn out to be quite useful in determining whether a song is (say) classical or metal.
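To make that concrete, here’s a sketch of the sort of feature extraction involved, using nothing but NumPy; the binning scheme and function below are my own illustration, and the book builds this out properly with real audio files:

```python
import numpy as np

def spectrum_features(waveform, sample_rate, n_bins=40):
    """Pool the magnitude spectrum of a mono waveform into coarse bins."""
    mags = np.abs(np.fft.rfft(waveform))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / sample_rate)
    edges = np.linspace(0, freqs[-1], n_bins + 1)
    # Average the magnitudes over evenly spaced frequency bands: a
    # fixed-length feature vector a genre classifier can consume.
    return np.array([mags[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in zip(edges, edges[1:])])

t = np.linspace(0, 1, 44100, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)            # a pure 440 Hz tone
features = spectrum_features(tone, 44100)     # energy lands in a single band
```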

This goat recognizes and apparently enjoys metal. Possibly using a furrier transform… (sorry.)


Thoughts on Machine Learning, Representation Theory, and RoShamBo – Part I

I’ve recently been doing some reading on machine learning with a mind towards applying some of my prior knowledge of representation theory.  The initial realization that representation theory might have some interesting applications in machine learning came from discussions with Chris Olah at the Toronto HackLab a couple months ago; you can get interesting new insights by exploring new spaces!  Over winter break I’ve been reading Bishop’s ‘Pattern Recognition and Machine Learning‘ (slowly), alongside faster reads like ‘Building Machine Learning Systems with Python.‘  As I’ve read, I’ve realized that there is plenty of room for introducing group theory into machine learning in interesting ways.  (Note: This is the first of a few posts on this topic.)

There’s a strong tradition of statisticians using group theory, perhaps most famously Persi Diaconis, who used representation theory of the symmetric group to find the mixing time for card shuffling.  His notes ‘Group Representations in Probability and Statistics‘ are an excellent place to pick up the background material with a strong eye towards applications.  Over the next few posts I’ll make a case for the use of representation theory in machine learning, emphasizing automatic factor selection and Bayesian methods.

First, an extremely brief overview of what machine learning is about, and an introduction to using a Bayesian approach to play RoShamBo, or rock-paper-scissors.  In the second post, I’ll motivate the representation-theoretic viewpoint by exploring the Fourier transform and how to use it to beat repetitive opponents.  Finally, in the third post I’ll look at how we can use representations to select factors for our Bayesian algorithm by examining likelihood functions as functions on a group.

RPS25 is a generalized form of Rock Paper Scissors. I’ll leave generalizations of this post to the RPS25 case as an exercise to the reader…


Know Your Sensor

Recently I’ve been playing with building a regression model for the brightness of images produced with the Raspberry Pi’s camera board. Essentially, I want to quickly figure out – hopefully from a single image – what shutter speed and ISO to choose to get an image of a given brightness.

This is a pretty standard regression problem: we take some data, extract some information from it, and use that information to make a prediction. To get a better handle on the algorithms involved, I wrote my own code to perform the regression, using NumPy for fast linear algebra operations. You always learn something from re-inventing the wheel, after all.
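For reference, the wheel being re-invented looks roughly like this; the data below is made up, standing in for per-image features like ISO and shutter speed:

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares via the normal equations: solve (X^T X) w = X^T y."""
    X = np.column_stack([np.ones(len(X)), X])   # prepend an intercept column
    return np.linalg.solve(X.T @ X, X.T @ y)

# Hypothetical data: one row per image, columns = (ISO, shutter speed in s).
features = np.array([[100, 0.01], [200, 0.01], [400, 0.02], [800, 0.02]])
brightness = np.array([35.0, 52.0, 96.0, 140.0])
w = fit_linear(features, brightness)   # intercept, ISO weight, shutter weight
```

Nonlinear effects, like the ISO response visible in the plot below, can be folded into the same machinery by adding polynomial feature columns before solving.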

Data from images of my back yard. Each point is one image. The y-axis is the brightness of the image, the x-axis is the ISO the image was taken at, and the color indicates the shutter speed the image was taken with.  The shutter speeds and ISOs chosen were evenly spaced (except for a few rogue images).  We can see there’s a kind of ‘sweet spot’ for shutter speed and ISO, with an upper threshold where things get too bright or too dark very quickly.  We also see that the increase in brightness as ISO increases is non-linear.
