In the last two posts, we’ve looked at using machine learning for playing iterated Roshambo. Specifically, we saw how to use Bayes’ theorem to try to detect and exploit patterns, and then saw how Fourier transforms can give us a concrete measurement of the randomness (and non-randomness) in our opponent’s play. Today’s post is about how we can use representation theory to improve our chances of finding interesting patterns.
In the last post, we looked at using an algorithm suggested by Bayes’ Theorem to learn patterns in an opponent’s play and exploit them. The game we’re playing is iterated rock-paper-scissors, with 1000 rounds of play per game. The opponent’s moves are a string of choices, ‘r’, ‘p’, or ‘s’, and if we can predict what they will play, we’ll be able to beat them. In trying to discover patterns automatically we’ll gain some general knowledge about detecting patterns in streams of characters, which has interesting applications ranging from biology (imagine ‘GATC’ instead of ‘rps’) to cryptography.
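As a concrete (and heavily simplified) sketch of the idea, here’s a frequency-counting predictor in Python. It estimates the most likely next move given the last k moves and plays the counter to that prediction; the function names and the fallback move are my own choices, not the actual code from these posts.

```python
from collections import defaultdict

# The move that beats each of the opponent's possible moves.
BEATS = {'r': 'p', 'p': 's', 's': 'r'}

def predict_and_counter(history, k=1):
    """Predict the opponent's next move from k-gram frequencies in
    their history, and return the move that beats the prediction."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(history) - k):
        context = history[i:i + k]
        counts[context][history[i + k]] += 1
    context = history[-k:]
    if not counts[context]:
        return 'r'  # no data for this context: fall back to an arbitrary move
    prediction = max(counts[context], key=counts[context].get)
    return BEATS[prediction]
```

With k = 1 this already beats an opponent who cycles through ‘rps’ forever; larger k picks up longer patterns at the cost of needing more rounds of data.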
Fourier analysis is helpful in a wide variety of domains, ranging from music to image encoding. A great example suggested by ‘Building Machine Learning Systems with Python’ is classifying pieces of music by genre. If we’re given the raw wave-form of a piece of music, automatically detecting its genre is difficult. But applying the Fourier transform breaks the music up into its component frequencies, which turn out to be quite useful in determining whether a song is (say) classical or metal.
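The same trick applies to our move streams. Here’s a small NumPy sketch (the helper name is my own invention) that turns one symbol into an indicator sequence, takes its Fourier transform, and reads off the dominant period:

```python
import numpy as np

def dominant_period(moves):
    """Estimate the strongest repeating period in a move string by
    taking the FFT of an indicator sequence for one symbol."""
    x = np.array([1.0 if m == 'r' else 0.0 for m in moves])
    x -= x.mean()                          # remove the constant (DC) component
    spectrum = np.abs(np.fft.rfft(x))
    k = int(np.argmax(spectrum[1:])) + 1   # skip the zero-frequency bin
    return len(moves) / k                  # period, measured in moves
```

For a perfectly cyclic opponent playing ‘rpsrps…’ this recovers a period of 3; for a truly random opponent the spectrum is roughly flat, with no dominant peak.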
I’ve recently been doing some reading on machine learning with a mind towards applying some of my prior knowledge of representation theory. The initial realization that representation theory might have some interesting applications in machine learning came from discussions with Chris Olah at the Toronto HackLab a couple months ago; you can get interesting new insights by exploring new spaces! Over winter break I’ve been reading Bishop’s ‘Pattern Recognition and Machine Learning‘ (slowly), alongside faster reads like ‘Building Machine Learning Systems with Python.‘ As I’ve read, I’ve realized that there is plenty of room for introducing group theory into machine learning in interesting ways. (Note: This is the first of a few posts on this topic.)
There’s a strong tradition of statisticians using group theory, perhaps most famously Persi Diaconis, who used representation theory of the symmetric group to find the mixing time for card shuffling. His notes ‘Group Representations in Probability and Statistics‘ are an excellent place to pick up the background material with a strong eye towards applications. Over the next few posts I’ll make a case for the use of representation theory in machine learning, emphasizing automatic factor selection and Bayesian methods.
First, I’ll give an extremely brief overview of what machine learning is about, and an introduction to using a Bayesian approach to play RoShamBo, or rock-paper-scissors. In the second post, I’ll motivate the representation-theoretic viewpoint by exploring the Fourier transform and how to use it to beat repetitive opponents. Finally, in the third post I’ll look at how we can use representations to select factors for our Bayesian algorithm by examining likelihood functions as functions on a group.
Recently I’ve been playing with building a regression model for the brightness of images produced with the Raspberry Pi’s camera board. Essentially, I want to quickly figure out – hopefully from a single image – what shutter speed and ISO to choose to get an image of a given brightness.
This is a pretty standard regression problem: we take some data, extract some information from it, and use that information to make a prediction. To get a better handle on the algorithms involved, I wrote my own code to perform the regression, using NumPy for fast linear algebra operations. You always learn something from re-inventing the wheel, after all.
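The core of that kind of code can be sketched in a few lines. This is the textbook ordinary-least-squares fit via the normal equations, not my actual regression code; for badly conditioned data you’d prefer `np.linalg.lstsq` over solving the normal equations directly.

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least squares via the normal equations:
    solve (X^T X) w = X^T y for the weight vector w."""
    X = np.column_stack([np.ones(len(X)), X])  # prepend a bias column
    return np.linalg.solve(X.T @ X, X.T @ y)   # [intercept, slopes...]
```

For example, fitting the points (0, 1), (1, 3), (2, 5) recovers an intercept of 1 and a slope of 2.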
I’ve done quite a bit to improve my Raspberry Pi-based timelapser. I have a pretty good system together now, thanks in part to recent firmware updates to the Pi which allow me to directly control the shutter speed and ISO of the camera sensor instead of relying on auto-exposure settings. The auto-exposure is pretty unreliable from one shot to the next: the camera makes a different decision each time it snaps a picture, which leads to quite a lot of flicker. Previously, I was dealing with this flicker by manipulating the images in post-production, but I’ve now written some code to get the camera to try to maintain a constant image brightness across a long shoot.
The code now consists of a ‘timelapser’ class, which keeps track of its current shutter speed and ISO (SS/ISO henceforth), and the brightness of the last few images taken. It then adjusts SS/ISO to try to bring the image brightness to 100. By keeping track of the last few images, it is a bit less susceptible to being thrown off by one strange image (like, say, if I put my hand over the lens for one shot, producing a black image), or by more ordinary movement within the frame. On the other hand, it takes longer to settle down to the ‘right’ SS/ISO. So it’s currently set up with an initialization step, where it finds a good SS/ISO pretty quickly, and then transitions to actually taking pictures. The result is very little flicker as the timelapse goes on, and a fairly constant level of image brightness when light levels gradually change: like when we watch dawn or dusk. (If you’re interested in playing around with the code, I’ve set up a github repository here.)
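The actual code lives in the repository, but the core feedback idea can be sketched briefly. Everything here is my own simplification (the class name, the assumption that brightness scales linearly with exposure time, and the window size), not the repository code:

```python
from collections import deque

class ExposureController:
    """Toy version of the timelapser's feedback loop: nudge the
    shutter speed so the rolling-average brightness approaches 100."""

    def __init__(self, shutter_us=10000, target=100, window=5):
        self.shutter_us = shutter_us
        self.target = target
        self.recent = deque(maxlen=window)  # brightness of the last few shots

    def update(self, brightness):
        self.recent.append(brightness)
        avg = sum(self.recent) / len(self.recent)
        # Image brightness scales roughly linearly with exposure time,
        # so scale the shutter speed by the ratio target/average.
        self.shutter_us = int(self.shutter_us * self.target / max(avg, 1))
        return self.shutter_us
```

Averaging over a window is what buys the robustness described above: a single black frame only drags the average down a little, instead of quintupling the exposure in one step.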
As an example, this is a video that we shot over about three days on my friends Ketan and Ananya’s balcony. They have a great view over Toronto, from the CN Tower to Honest Ed’s.
Almost immediately after my first foray with the Raspberry Pi timelapse I was contacted by Cookie Roscoe, who coordinates a weekly farmer’s market for The Stop. The Stop is a community food center here in Toronto, providing lots of programming and emergency food access for lower-income people. My partner, Elizabeth, worked with them for about three years, so I was super happy to try doing a time-lapse of their market.
Here are the results! Lots of technical stuff about what went into the video after the break!
My recent DIY electronics project has been putting together a Raspberry Pi-based camera. The Pi foundation sells a camera board which plugs into the Pi; it’s sold as ‘a bit better than the average camera in a mobile phone.’ The Pi’s default Raspbian Linux installation comes with a couple of programs for controlling the camera, which let you take still pictures and videos easily.
In the interest of using the camera for something you can’t usually do with a store-bought digital camera, I wrote a short python script which takes a photo and assigns it a file name based on the date and time taken. It then does some sampling of the picture, and only keeps pictures which aren’t ‘too dark.’ Then cron runs the photo script once every five minutes. In other words, the Pi is set up for long-form time-lapse photography. The resulting pictures are then easy to compile into a movie.
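A script like that only takes a dozen lines. The sketch below is a reconstruction of the idea rather than the script I actually ran: it shells out to raspistill, names the file by timestamp, and uses Pillow to measure mean brightness; the threshold of 20 is made up for illustration.

```python
import os
import subprocess
from datetime import datetime
from PIL import Image, ImageStat

MIN_BRIGHTNESS = 20  # grayscale mean below this counts as 'too dark'

def is_too_dark(path, threshold=MIN_BRIGHTNESS):
    """Mean grayscale value as a crude brightness measure."""
    return ImageStat.Stat(Image.open(path).convert('L')).mean[0] < threshold

def take_photo():
    """Snap a timestamped photo; keep it only if it isn't too dark."""
    filename = datetime.now().strftime('%Y-%m-%d_%H-%M-%S.jpg')
    subprocess.run(['raspistill', '-o', filename], check=True)
    if is_too_dark(filename):
        os.remove(filename)  # discard night-time frames
```

A crontab entry like `*/5 * * * * python3 /home/pi/take_photo.py` then runs it every five minutes.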
On Tuesday evening I finally made it to the Toronto Hacklab, after meaning to make it over for weeks!
It’s a really interesting space. There are currently five 3D printers, lots of tools for playing with electronics, and a giant computer-driven laser in the bathroom for etching and cutting plastic or acrylic. (Actually, I should try to make an acrylic LakeHub logo to send back to Kenya….) According to Eric, who gave me a tour of the space, about two thirds of the members come through to work on these sorts of projects, and a third are mainly software people who use it as a common work space.
A couple of my favorite toys that I saw on the tour were a giant LED pixel board and a ‘flip dot board’, both of which had been salvaged from the Toronto Transit Commission, which runs all the buses and subways. Members of the Hacklab built electronic interfaces to both of the devices: the LED board is run by an Arduino and can be sent messages to display, and otherwise acts as a clock. The flip-dot board looks like it’s run by some custom microcontrollers, and is hooked up to a joystick for playing ‘snake.’
During my early involvement in Kenya, I had the pleasure to chat with Prof. Haynes Miller, an MIT mathematician, about the potential for e-learning in Africa. He related the story of a relative who had gone to work in a small developing country, creating an interesting technological component in a library. After working for a year to get everything set up, the project leader went back to North America. Two years later, the leader went back to visit and found the library in terrible disrepair. Computers busted, the e-library unusable. The lesson here is that a successful project needs to have a good mix of leadership and capacity on the ground to continue after the mzungu founders have departed. A good project needs to have long-term participants involved in the design and deployment, and respond directly to the needs and capacities of the local community. And as one unravels these ideas, the whole notion of the ‘mzungu founders’ is itself quite problematic; our success is really measured by our ability to promote quality leadership, build capacity, and facilitate the vision of people who are committed to live and work in Africa for the long term.
All of which is to say that I think LakeHub ticks a number of the right boxes. The leadership is almost entirely Kenyan, and the project is about harnessing and amplifying the abilities of the tech community in Western Kenya. I’m extremely excited to see it continue and thrive now that I’ve wandered back to North America. The Kenyans who are taking the lead on the project so far are fantastic, and have the skills and vision to help take it forward. These are people like James Odede, Google’s student ambassador to Maseno University and head of the Maseno ICT guild, and Simeon Obwego, who works for Innovations in Poverty Action, and is a truly fantastic self-starter. Evan Green-Lowe, also from IPA, has also been a great contributor, and is largely responsible for the current push to get a full-time employee.
But we’ve run into the limits of volunteer organizing: there’s a lot of vital-but-time-consuming work that needs to get done. Like scouting for potential spaces, building partnerships with local businesses and community organizations, and, yeah, locking in funding sources. The plan is to search the budding Kenyan tech community for someone with a great combination of drive and vision to help build LakeHub up to meet its fantastic potential.
I’ve never been a hang-out-at-the-mall kind of person. The prices are higher than things I can find in the internets or strange alleys, the food courts unappealing, and the general level of “shiny” doesn’t really match my self-concept.
And yet, during my year in Kenya, I spent a lot of time at various malls. When staying in Nairobi, I would find myself in a mall every other day, it seemed. And it makes sense in retrospect: the specialist shops and on-line stores where I usually get my obscure electronics aren’t available in Kenya, and food safety seems a bit more reliable in the malls. The biggest factor, though, is the coffee shops. I spend a lot of time in coffee shops wherever I travel, unwinding thoughts and sketching proofs and sipping on a simple black Americano. And most of the coffee shops in Kenya are in malls.
The mall I spent the most time at in Nairobi was The Junction, and most of that time was spent at Art Caffe. It’s a space with a great slice of the mix of modern Nairobi. Yeah, there are a lot of jackass travellers like myself plugged into their laptops, but just as many people stopping through for a business lunch, mommas chatting about their kids, young people hanging out with friends, and men from unidentifiable Eastern European countries discussing possibly shady business in an unidentifiable language. It’s a great cross-section.
It’s also a bit of an escape.