I recently stumbled on the Orca sequencer, developed by Hundred Rabbits. It bills itself as an ‘esoteric programming language’ for writing music. It is purely a sequencer, and has good integration for MIDI and other output types which I haven’t explored… There are some pretty good tutorials out there already, but I wanted to illustrate a few ‘idioms’ I settled into during a weekend of experimenting with the language. It’s a great language for creating simple home-brew ‘trackers’ for controlling synthesizers.
The thing I’ve always really wanted more of in sequencers is the ability to inject ‘interesting randomness,’ such as varying a note’s gain/velocity randomly, or choosing notes from a given scale at random. SunVox (my other favorite tracker of the 2010s) supports random gain, but not much else AFAICT.
In this post, I’ll give a bit of intro/overview of Orca instructions, and document some recipes I came up with for making a slowly mutating bass/melody pattern. Here’s a video of the Orca program running; it’s controlling my (terribly underutilized) OP-Z.
Suppose we want to send a message to a friend using as few bits as possible… And let’s also suppose that this message consists of unit-normally-distributed real numbers. Now, it turns out that it’s really really expensive to send arbitrary precision real numbers: You could find yourself sending some not-previously-known-to-man transcendental number, and just have to keep listing off digits forever. So, in reality, we allow some amount of noise tolerance in the encoding.
This is a good problem to sink some effort into: Lots of data is roughly normally distributed, and if you’re sending some machine-learned representation, you can introduce some regularization to ensure that the data is close to unit-normal. (This is, for example, the whole idea behind variational autoencoders.)
I’ll start with a quick outline of a standard solution, and then we’ll go into a half-baked idea involving Fibonacci codes and space filling curves.
The basic idea comes in two parts:
First, Fibonacci coding lets you encode an arbitrarily large integer, without having to send the length of the integer beforehand. (There are a bunch of similar schemes you can use here, but the Fibs are fun and interesting.)
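In code, a minimal Fibonacci encoder might look like this (my own quick sketch for illustration; the names are mine):

```python
def fib_encode(n: int) -> str:
    """Fibonacci (Zeckendorf) code for a positive integer n.

    Every positive integer is a sum of non-consecutive Fibonacci
    numbers; writing those bits least-significant-first and appending
    a final '1' yields a codeword ending in '11', which lets the
    decoder find the end without a length prefix.
    """
    assert n >= 1
    # Build the Fibonacci numbers 1, 2, 3, 5, 8, ... up to n.
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    bits = []
    remainder = n
    for f in reversed(fibs):
        if f <= remainder:
            bits.append('1')
            remainder -= f
        elif bits:
            # Only emit zeros once the leading '1' has been placed.
            bits.append('0')
    bits.reverse()              # least-significant Fibonacci first
    return ''.join(bits) + '1'  # terminating '11' marks the end
```

The terminating ‘11’ is what makes the code self-delimiting: no Zeckendorf representation contains two consecutive ones, so the first ‘11’ the decoder sees marks the end of the number.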
Second, map the integers into the reals in a way that covers the normal distribution in a ‘progressive’ way, so that we can use small numbers if we don’t need high precision, and only use bigger numbers as we need more precision.
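As a toy version of such a ‘progressive’ mapping (an illustrative stand-in, not necessarily the exact scheme developed in the post), we can enumerate the dyadic quantiles of the normal distribution breadth-first, so small integers give coarse coverage and larger integers refine it:

```python
from statistics import NormalDist

def progressive_point(k: int) -> float:
    """Map the positive integer k to a real so that the sequence
    progressively covers the unit normal: k=1 hits the median,
    k=2..3 the quartiles, k=4..7 the octiles, and so on.
    (Illustrative sketch, not the exact scheme from the post.)
    """
    level = k.bit_length()                  # k=1 -> 1; k=2,3 -> 2; ...
    denom = 2 ** level                      # 2, 4, 8, ...
    numer = 2 * (k - 2 ** (level - 1)) + 1  # odd numerators 1, 3, 5, ...
    return NormalDist().inv_cdf(numer / denom)
```

Small integers land on coarse quantiles near the middle of the distribution, and bigger integers refine the coverage, which is exactly the ‘use bigger numbers only when we need more precision’ property.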
At the end we’ll run a few empirical tests and see how well the scheme works, and point to where there’s room for improvement.
Suppose you’re building a widget that performs some simple action, which ends in either success or failure. You decide it needs to succeed 75% of the time before you’re willing to release it. You run ten tests, and see that it succeeds exactly eight times. So you ask yourself, is that really good enough? Do you believe the test results? If you ran the test one more time, and it failed, you would have only a 72.7% success rate, after all.
So when do you have enough data, and how do you make the decision that your success rate is ‘good enough?’ In this post, we’ll look at how the Beta distribution helps us answer this question. First, we’ll get some intuition for the Beta distribution, and then discuss why it’s the right distribution for the problem.
The Basic Problem
Consider the widget’s tests as independent boolean variables, governed by some hidden parameter μ, so that the test succeeds with probability μ. Our job, then, is to estimate this parameter: We want a model for P(μ = x | s, f), the probability distribution of μ, conditioned on our observations of success and failure. This is a continuous probability distribution, with μ a number between zero and one. (This general setup, by the by, is called ‘parameter estimation’ in the stats literature, as we’re trying to estimate the parameters of a well-known distribution.)
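With a uniform prior, the posterior for μ after s successes and f failures turns out to be Beta(s+1, f+1), and for integer parameters the Beta CDF reduces to a binomial sum. So we can check the widget example with nothing but the standard library (a quick sketch; the variable names are mine):

```python
from math import comb

def beta_cdf(x: float, a: int, b: int) -> float:
    """P(Beta(a, b) <= x) for integer a, b >= 1, using the standard
    identity relating the incomplete Beta function to binomial tails."""
    n = a + b - 1
    return sum(comb(n, j) * x**j * (1 - x)**(n - j) for j in range(a, n + 1))

# 8 successes and 2 failures with a uniform Beta(1, 1) prior give a
# Beta(9, 3) posterior. How likely is it that the true success rate
# actually beats 75%?
p_good_enough = 1 - beta_cdf(0.75, 9, 3)   # roughly 0.54
```

So even after going eight for ten, it’s barely better than a coin flip that the widget really clears the 75% bar: a pretty direct illustration of why ten tests aren’t enough data.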
I recently had the pleasure of reading James Scott’s “Seeing Like a State,” which examines a certain strain of failure in large centrally-organized projects. These failures come down to the kinds of knowledge available to administrators and governments: aggregates and statistics, as opposed to the kinds of direct experience available to the people living ‘on the ground,’ in situations where the centralized knowledge either fails to or has no chance to describe a complex reality. The book classifies these two different kinds of knowledge as techne (general knowledge) and metis (local knowledge). In my reading, the techne – in both strengths and shortcomings – bears similarity to the knowledge we obtain from traditional algorithms, while metis knowledge is just starting to become available via statistical learning algorithms.
In this (kinda long) post, I will outline some of the major points of Scott’s arguments, and look at how they relate to modern machine learning. In particular, the divides Scott observes between the knowledge of administrators and the knowledge of communities suggest an array of topics for research. Beyond simply looking at the difference between the ways that humans and machines process data, we observe areas where traditional, centralized data analysis has systematically failed. And from these failures, we glean suggestions of where we need to improve machine learning systems to be able to solve the underlying problems.
I’ve just finished reading ‘Turing’s Cathedral,’ an absolutely fascinating book on early computing. The book focuses almost entirely on John von Neumann’s ‘Electronic Computer Project,’ carried out at the Institute for Advanced Study in the 1940s and ’50s. In fact, Turing makes only occasional and brief appearances in the book: it’s almost entirely a portrait of the IAS project, and the wide range of personalities involved.
For me, the book emphasized the importance of overcoming (or circumventing) boundaries in the pursuit of scientific progress. Von Neumann in particular became obsessed with applications (particularly after Gödel’s theorem put an end to the Hilbert programme), and served as a bridge between pure and applied mathematics. Meanwhile, construction of the physical computer brought in a variety of brilliant engineers. It’s clear that the departmental politics at the IAS were still quite strong – the pure mathematicians didn’t have much regard for the engineers, and the computer project ground to a halt quite senselessly after von Neumann left the IAS. Dyson argues that Princeton missed an opportunity to be a world center for computing theory and practice as a result.
I’ve been playing around with Cities: Skylines recently, the super-popular SimCity knock-off. Dealing with traffic is a core theme of the game (as it should be). Traffic tends to accumulate at intersections, and it’s well known that one-way streets have higher traffic flow. The logical conclusion, then, is to try to build a city with a single extremely long one-way street… Unfortunately we have to compromise on this perfect vision, because people want to get back home after work and so on.
A Quick Intro to Space Filling Curves
Meanwhile, a space-filling curve is a mathematical invention of the 19th century, and one of the earlier examples of a fractal. The basic idea is to define a path that passes through every point of a square, while also being continuous. This is accomplished by defining a sequence of increasingly twisty paths (H1, H2, H3, …) in such a way that H∞ is well-defined and continuous. Of course, we don’t want an infinitely twisty road, but the model of the space filling curve will still be useful to us.
There are a few important ideas in the space filling curve. The first is a notion that by getting certain properties right in the sequence of curves H1, H2, H3, …, we’ll be able to carry those properties on to the limit curve H∞.
The second main idea is how to get continuity. Thinking of the curve as a function where you’re at the start at time 0, and you always get to the end at time 1, we want an H∞ where small changes in time produce small changes in position. The tricky part here is that the path itself gets longer and longer as we try to fill the square, potentially making continuity hard to satisfy… When the length of the path doubles, you’re moving twice as fast.
In fact, because of continuity, you can also “go backwards:” Given a point in the square, you can approximate what time you would have passed through the point on the limit curve H∞, with arbitrary precision. This gives a direct proof that the curve actually covers the whole square.
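For the finite approximations this backwards map is easy to compute. Here’s a sketch of the standard bit-twiddling algorithm for the Hilbert curve (the classic example, which shows up again below), mapping a grid cell (x, y) to its distance d along the order-n curve; dividing d by the number of cells, 4**order, approximates the time at which the curve visits that cell:

```python
def hilbert_xy2d(order: int, x: int, y: int) -> int:
    """Distance along the order-n Hilbert curve of grid cell (x, y),
    where the grid is 2**order cells on a side. Standard iterative
    rotate-and-accumulate algorithm."""
    d = 0
    s = 2 ** (order - 1)
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/reflect so the sub-square is in standard position.
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d
```

The time estimate d / 4**order sharpens as the order grows, which is the ‘arbitrary precision’ claim above in concrete form.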
Here’s an example of a space filling curve which is not continuous. Define Bk as the curve you get from following these instructions:
Start in the lower-left corner.
Go to the top of the square, and then move right by 1/k.
Move to the bottom of the square, and move right by 1/k.
Repeat steps 2 and 3 until you get to the right side of the square.
The problem here is that a very small change in time might take us all the way from the top of the square to the bottom of the square. We need to be twistier to make sure that we don’t jump around in the square. The Moore curve, illustrated above, does this nicely: small changes in time (color) don’t move you from one side of the square to the other.
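We can see the failure numerically with a quick sketch of the back-and-forth curve above (simplified: the short horizontal hops are collapsed away). Each Bk is continuous on its own, but a vanishingly small step in time moves you a fixed vertical distance as k grows, so the limit can’t be continuous:

```python
def boustrophedon(t: float, k: int):
    """Point at time t in [0, 1] on the k-strip up-and-down curve,
    with the short horizontal hops collapsed for simplicity."""
    s = min(int(t * k), k - 1)               # which vertical strip we're in
    frac = t * k - s                         # progress within that strip
    y = frac if s % 2 == 0 else 1.0 - frac   # alternate going up / down
    return (s / k, y)
```

Compare times t = 0 and t = 1/(2k): the time difference shrinks to zero as k grows, but the two points always sit a vertical distance 1/2 apart. That blow-up in the modulus of continuity is exactly what the Hilbert and Moore constructions are engineered to avoid.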
Actual Simulated Cities
What happens if we try to use space filling curves to build a city in Cities: Skylines?
My first attempt at building ‘Hilbertville’ was to make large blocks, with a single, winding one-way road for access, using the design of a (second-order) Hilbert Curve. In addition to the roads, though, I placed a number of pedestrian walkways, which allow people on foot to get in and out of these neighborhoods directly. I like to think that this strongly encourages pedestrian transit, though it’s hard to tell what people’s actual overall commuting choices are from the in-game statistics.
Skylines only allows buildings directly facing a road; corners tend to lead to empty space. You can see a large empty square in the middle of the two blocks pictured above. There are also two smaller rectangles and two small empty squares inside each of these two blocks. Making the top ‘loop’ a little bit longer removed most of the internal empty space. This internal space is bad from the game perspective; ideally we would still be able to put a park in the empty spaces to give people a bit of breathing room, but even parks require road access.
Intersections with the main connecting roads end up as ‘sinks’ for all of the traffic congestion. So we should try to reduce the number of such intersections… The Moore curve is a slight variation on the Hilbert curve which puts the ‘start’ and ‘finish’ of the path next to one another. If we merge the start and finish into a wide two-way road, we get this:
We still get the wasted square between neighborhoods, but somewhat reduce the amount of empty interior space. Potentially, we could develop a slightly different pattern and alternate between blocks to eliminate the lost space between blocks. Also, because the entrance and exit to the block coincide, we get to halve the number of intersections with the main road, which is a big win for traffic congestion.
Here’s a view of the full city; it’s still not super big, with a population of about 25k. We still get pretty heavy congestion on the main ring road, though the congestion is much less bad than in some earlier cities I built. In particular, the industrial areas (with lots of truck traffic) do much better with these long and twisty one-way streets.
The empty space is actually caused by all of the turns in the road; fewer corners means fewer wasted patches of land. The easiest way to deal with this is to just use a ‘back-and-forth’ one-way road, without all of the fancy twists.
The other major issue with this style of road design is access to services. Fire trucks in particular have a long way to go to get to the end of a block; the ‘fire danger’ indicators seem to think this is a bad idea. I’m not sure if it’s actually a problem, though, as the amount of traffic within a block is next to none, allowing pretty quick emergency response in spite of the distance.
Overall, I would say it’s a mixed success. There’s not a strong reason to favor the twisty space-filling curves over simpler back-and-forth one-way streets, and in either case the access for fire and trash trucks seems to be an issue. The twistiness of the space-filling curve is mainly used for getting the right amount of locality to ensure continuity in the limit curve; this doesn’t serve a clear purpose in the design of cities, though, and the many turns end up creating difficult-to-access corner spaces. On the bright side, though, traffic is reduced and pedestrian transit is strongly encouraged by the design of the city.
I recently read Edward Tufte’s ‘The Visual Display of Quantitative Information,’ a classic book on visualizing statistical data. It reads a little bit like the ‘Elements of Style’ for data visualization: Instead of ‘omit needless words,’ we have ‘maximize data-ink.’ Indeed, the primary goal of the book is to establish some basic design principles, and then show that those principles, creatively applied, can lead to genuinely new modes of representing data.
One of my favorite graphics in the book was a scatter plot adapted from a physics paper, mapping four dimensions in a single graphic. It’s pretty typical to deal with data with much more than three dimensions; I was struck by the relative simplicity with which this scatter plot was able to illustrate four dimensional data.
I hacked out a bit of python code to generate similar images; here’s a 4D scatter plot of the Iris dataset:
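My original script isn’t reproduced here, but a minimal sketch of the idea – position for two features, marker size and color for the other two – looks something like this (assuming matplotlib and scikit-learn are installed; the output filename is arbitrary):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

# Map the four Iris features onto position, marker size, and color.
iris = load_iris()
x, y, size, color = iris.data.T  # sepal len/wid, petal len/wid

fig, ax = plt.subplots(figsize=(6, 4))
scatter = ax.scatter(x, y, s=30 * size, c=color, cmap="viridis", alpha=0.6)
ax.set_xlabel("sepal length (cm)")
ax.set_ylabel("sepal width (cm)")
fig.colorbar(scatter, label="petal width (cm)")
fig.savefig("iris_4d_scatter.png", dpi=150)
```

Size and color carry the extra two dimensions without cluttering the plot, which is very much in the spirit of Tufte’s data-ink principle.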
I met up with some mathematician friends in Toronto yesterday, who were interested in how one goes about getting started on machine learning and data science and such. There are piles of great resources out there, of course, but it’s probably worthwhile to write a bit about how I got started, and place some resources that might be of more interest to people coming from a similar background. So here goes.
First off, it’s important to understand that machine learning is a gigantic field, with contributions coming from computer science, statistics, and occasionally even mathematics… But on the bright side, most of the algorithms really aren’t that complicated, and indeed they can’t be if they’re going to run at scale. Overall though, you’ll need to learn some coding, algorithms, and theory.
Oh, and you need to do side-projects. Get your hands dirty with a problem quickly, because it’s the fastest way to actually learn.
Recently I’ve seen a couple nice ‘visual’ explanations of principal component analysis (PCA). The basic idea of PCA is to choose a set of coordinates for describing your data where the coordinate axes point in the directions of maximum variance, dropping coordinates where there isn’t as much variance. So if your data is arranged in a roughly oval shape, the first principal component will lie along the oval’s long axis.
My goal with this post is to look a bit at the derivation of PCA, with an eye towards building intuition for what the mathematics is doing.
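As a preview of where the derivation lands, here’s a minimal numpy sketch: center the data, eigendecompose the covariance matrix, and keep the directions with the largest eigenvalues (illustrative code, not a substitute for a library implementation):

```python
import numpy as np

def pca(X, n_components):
    """PCA via eigendecomposition of the sample covariance matrix.
    Returns the top principal directions and the projected data."""
    Xc = X - X.mean(axis=0)                  # center the data
    cov = Xc.T @ Xc / (len(X) - 1)           # sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh: ascending eigenvalues
    order = np.argsort(eigvals)[::-1]        # sort by variance, descending
    components = eigvecs[:, order[:n_components]]
    return components, Xc @ components

# An oval-shaped cloud: the first component should lie along the long axis.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) * [3.0, 0.5]   # long axis along x
components, projected = pca(X, 1)
```

Running this on the stretched Gaussian cloud, the single retained component points (up to sign) along the x-axis, matching the ‘long axis of the oval’ intuition above.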
This was a bit different from other Kaggle competitions. Typically, a Kaggle competition will provide a large set of data and ask you to optimize some particular number (say, turning anonymized personal data into a prediction of yearly medical costs). The dataset here intrigued me because it’s about learning from and reconstructing graphs, which is a very different kind of problem. In this post, I’ll discuss my approach and insights on the problem.