Allocating Scarce Resources: TTRPG Edition

My favorite games convention has been going through some growing pains over the last couple of weeks. It’s a tabletop roleplaying game convention, which presents an interesting resource allocation problem: You have some big number of participants N, and a collection of games and events, each of which can take some number of participants ranging from 3 to 50.

Little Red Riding Hood filling out a ballot in a drab school gymnasium turned into a polling place. Highly detailed, cinematic lighting, in the style of Norman Rockwell and Takashi Murakami. (OpenJourney v4)
Little Red Riding Hood fills out a ranked-choice ballot.

To date, the convention has used a First-Come-First-Served (FCFS) strategy for allocating players to games. Signups open at a specific time, and everyone feverishly hits refresh on the website, trying to get into a few very high-popularity games before branching out to other interesting-looking games once the super-popular ones have filled.

The FCFS strategy has a couple of big problems: Most visibly, the servers have a tendency to melt down when everyone shows up in the first minute after signups open, clicking around looking for their favorite games which haven’t filled up in the first milliseconds. Less visibly, folks who can’t be available at the moment signups open may miss out on many of their preferred games.

So FCFS gives us both a technical problem (a fixed deadline causing computer meltdowns) and a fairness problem. The same problem occurs pretty much anytime someone tries to solve a resource-allocation problem online with FCFS. (Notably, it’s impossible to get a campsite in California without either writing a bot or paying a third party who wrote a bot.)

A Better World is Possible

To fix both problems, we need to a) give people a way to express their preferences asynchronously, and b) design an allocation algorithm to assign people to games given their preferences.

Here’s a proposal for how to do it.

First, we allow people to submit their ranked preferences anytime before a fixed deadline. This gives everyone plenty of time to peruse the available games, move things around in their preference list, etc.

The allocation algorithm: Once the deadline passes, we look at all of the ranked preferences and pair people with games. One way to do this is to randomly order everyone, then go through the list and sign each person up for their top-ranked game that isn’t already full. Then go through the list again, assigning people to their top remaining choice that isn’t already full and doesn’t conflict with anything they’re already signed up for. Repeat as many times as you like. This allocation algorithm is pretty trivial to write, and should run in at most a few seconds.
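Here’s a minimal sketch of that loop in Python. The data structures and names are just illustrative (and the time-conflict check from the description is omitted for brevity), not anything the convention’s site actually uses:

```python
import random

def allocate(preferences, capacities, rounds=3):
    """Round-robin allocation from ranked preferences.

    preferences: dict mapping person -> list of game ids, best first.
    capacities: dict mapping game id -> number of open seats.
    Returns a dict mapping person -> list of games they were assigned.
    """
    order = list(preferences)
    random.shuffle(order)              # the random ordering of participants

    seats_left = dict(capacities)
    assigned = {person: [] for person in order}

    for _ in range(rounds):
        for person in order:
            # Highest-ranked game that still has room and isn't already theirs.
            for game in preferences[person]:
                if seats_left.get(game, 0) > 0 and game not in assigned[person]:
                    assigned[person].append(game)
                    seats_left[game] -= 1
                    break
    return assigned
```

The whole thing is a couple of nested loops over a few thousand preferences, which is why the “at most a few seconds” estimate is comfortable.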

After the initial allocation is run, one can give people some time to adjust their preferences and then run the allocation again. And finally, open FCFS signups to help fill any remaining slots with little friction.

This is fair, in the sense that everyone has the same chance to be at the top of the random list, and furthermore everyone gets their top-available choice before anyone gets assigned to a second game.

This also removes most of the pressure on the server: people submit their preferences at random times before the deadline. You will still get some ‘crunch’ near the deadline, but likely nothing like what is seen with FCFS. And if the deadline needs to be extended, it’s no problem – at worst, people have a bit more time to submit their preferences before the allocation algorithm is run. The allocation works the same whenever you get around to running it.

I think the hardest part of this proposal is likely the UX for collecting people’s ranked preferences and letting them reorder them. I imagine each game description page having a ‘heart’ button. Then each user can go to a page where they order their ‘heart’ed games, probably showing the game title, date and time, game host, and perhaps a small user-editable notes field (if we’re feeling fancy).

A Small Amount of Further Analysis

The proposed method mixes a lottery with ranked-choice voting.

Lotteries are a well-loved method for fairly distributing scarce resources: let everyone who is interested sign up, and then distribute the resources randomly. Weighted lotteries have been proposed for situations where uniform randomness isn’t desired. One could use weightings to give certain preferred groups a higher chance of getting an early allocation (first-time attendees? registered game masters?), though I expect it’s more trouble than it’s worth.
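If we did want a weighted lottery, one standard trick (the Efraimidis-Spirakis method; the weights below are entirely made up) turns per-person weights into a random ordering:

```python
import random

def weighted_order(weights, seed=None):
    """Random ordering where higher weight means a better expected position.

    Each person draws key = u ** (1 / weight) with u uniform in (0, 1);
    sorting by key, largest first, gives a weighted random permutation.
    """
    rng = random.Random(seed)
    keys = {person: rng.random() ** (1.0 / w) for person, w in weights.items()}
    return sorted(keys, key=keys.get, reverse=True)

# Hypothetical weights: first-time attendees get double weight.
print(weighted_order({"alice": 2.0, "bob": 1.0, "carol": 1.0}, seed=7))
```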

Ranked-choice voting progressively eliminates candidates, using each person’s current top preference as their vote in each round. (When it comes to elections, I have a strong preference for approval voting, due to issues with randomness in the elimination order leading to strange outcomes in the usual ranked-choice scheme, and thus encouraging convoluted strategic voting. However, I don’t think this is a problem in the games convention context.)
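For comparison, here’s a bare-bones instant-runoff sketch (generic Python, not tied to the convention system), just to make the elimination loop concrete:

```python
def instant_runoff(ballots):
    """Instant-runoff winner from ranked ballots (lists of candidates, best first).

    Each round, every ballot counts for its highest-ranked surviving candidate;
    the last-place candidate is eliminated until someone has a majority.
    """
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        tallies = {c: 0 for c in candidates}
        for ballot in ballots:
            for choice in ballot:
                if choice in candidates:
                    tallies[choice] += 1
                    break
        leader = max(tallies, key=tallies.get)
        if 2 * tallies[leader] > sum(tallies.values()) or len(candidates) == 1:
            return leader
        candidates.discard(min(tallies, key=tallies.get))  # ties broken arbitrarily

print(instant_runoff([["a", "b"], ["b", "a"], ["c", "b"], ["b"], ["a", "c"]]))
```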

Finally, it’s possible to make the system deterministic, replicable, and auditable by using a seeded random number generator to produce the random ordering of the participants. (Some care may be needed to avoid gaming the system if the seed is publicly known before the allocation algorithm is run.)
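As a sketch of what that might look like with Python’s random module (the participant names and seed handling here are purely illustrative):

```python
import random

def audit_order(participants, seed):
    """Reproducible participant ordering: same seed + same list -> same order."""
    order = sorted(participants)      # start from a canonical ordering
    random.Random(seed).shuffle(order)
    return order

# Anyone re-running this with the published seed and participant list
# can verify the ordering that was used for the allocation.
print(audit_order(["alice", "bob", "carol", "dave"], seed=20230615))
```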

Seeing Like a Statistical Learning Algorithm


I recently had the pleasure of reading James Scott’s “Seeing Like a State,” which examines a certain strain of failure in large, centrally organized projects.  These failures come down to the kinds of knowledge available to administrators and governments: aggregates and statistics, as opposed to the direct experience available to the people living ‘on the ground,’ in situations where the centralized knowledge either fails to describe, or has no chance of describing, a complex reality.  The book classifies these two different kinds of knowledge as techne (general knowledge) and metis (local knowledge).  In my reading, techne – in both its strengths and shortcomings – bears similarity to the knowledge we obtain from traditional algorithms, while metis is just starting to become available via statistical learning algorithms.

In this (kinda long) post, I will outline some of the major points of Scott’s arguments, and look at how they relate to modern machine learning.  In particular, the divides Scott observes between the knowledge of administrators and the knowledge of communities suggest an array of topics for research.  Beyond simply looking at the difference between the ways that humans and machines process data, we observe areas where traditional, centralized data analysis has systematically failed. And from these failures, we glean suggestions of where we need to improve machine learning systems to be able to solve the underlying problems.

Continue reading

Space Filling Curves in Simulated Cities

I’ve been playing around with Cities: Skylines recently, the super-popular SimCity knock-off.  Dealing with traffic is a core theme of the game (as it should be).  Traffic tends to accumulate at intersections, and it’s well known that one-way streets have higher traffic flow.  The logical conclusion, then, is to try to build a city with a single extremely long one-way street…  Unfortunately we have to compromise on this perfect vision, because people want to get back home after work and so on.

Hilbertville: The space-filling city.

A Quick Intro to Space Filling Curves

Meanwhile, a space-filling curve is a mathematical invention of the 19th century, and one of the earlier examples of a fractal.  The basic idea is to define a path that passes through every point of a square while also being continuous.  This is accomplished by defining a sequence of increasingly twisty paths (H1, H2, H3, …) in such a way that H∞ is well-defined and continuous.  Of course, we don’t want an infinitely twisty road, but the model of the space-filling curve will still be useful to us.

There are a few important ideas in the space-filling curve.  The first is the notion that by getting certain properties right in the sequence of curves H1, H2, H3, …, we’ll be able to carry those properties on to the limit curve H∞.

The Moore curve is an example of a (continuous!) space-filling curve.  The color changes keep track of time as you move along the path; notice that in the very twisty sixth path, a small change in time still keeps you in the same (blocky) neighborhood.


The second main idea is how to get continuity.  Thinking of the curve as a function where you’re at the start at time 0, and you always get to the end at time 1, we want an H∞ where small changes in time produce small changes in position.  The tricky part here is that the path itself gets longer and longer as we try to fill the square, potentially making continuity hard to satisfy…  When the length of the path doubles, you’re moving twice as fast.

In fact, because of continuity, you can also “go backwards:” given a point in the square, you can approximate the time at which you would have passed through that point on the limit curve H∞, with arbitrary precision.  This gives a direct proof that the curve actually covers the whole square.
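To make the finite approximations concrete, here’s a short sketch of the standard bit-manipulation recipe for walking the n-th Hilbert approximation (nothing specific to this post; it’s the usual distance-to-cell conversion):

```python
def hilbert_d2xy(order, d):
    """Map distance d along the Hilbert approximation H_order to a cell
    (x, y) on a (2**order) x (2**order) grid."""
    n = 2 ** order
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                    # rotate/flip the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Walk H_2: 16 cells on a 4x4 grid; nearby times land in nearby cells.
print([hilbert_d2xy(2, d) for d in range(16)])
```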

Here’s an example of a space filling curve which is not continuous. Define Bk as the curve you get from following these instructions:

  1. Start in the lower-left corner.
  2. Go to the top of the square, and then move right by 1/k.
  3. Move to the bottom of the square, and move right by 1/k.
  4. Repeat steps 2 and 3 until you get to the right side of the square.

The problem here is that a very small change in time might take us all the way from the top of the square to the bottom.  We need to be twistier to make sure that we don’t jump around in the square.  The Moore curve, illustrated above, does this nicely: small changes in time (color) don’t move you from one side of the square to the other.

Actual Simulated Cities

What happens if we try to use space filling curves to build a city in Cities: Skylines?

My first attempt at building ‘Hilbertville’ was to make large blocks, with a single, winding one-way road for access, using the design of a (second-order) Hilbert Curve.  In addition to the roads, though, I placed a number of pedestrian walkways, which allow people on foot to get in and out of these neighborhoods directly.  I like to think that this strongly encourages pedestrian transit, though it’s hard to tell what people’s actual overall commuting choices are from the in-game statistics.

Hilbertville, two blocks. The lower view highlights where the roads are, color-coded by the amount of congestion along each segment of roadway. There’s a bit of congestion at the exit of each block, as cars wait at the intersections.

Skylines only allows building directly facing a road; corners tend to lead to empty space.  You can see a large empty square in the middle of the two blocks pictured above, plus two smaller rectangles and two small empty squares inside each of the blocks.  Making the top ‘loop’ a little bit longer removed most of the internal empty space.  This internal space is bad from the game perspective; ideally we would still be able to put a park in the empty spaces to let people use the extra space, but even parks require road access.

Intersections with the main connecting roads end up as ‘sinks’ for all of the traffic congestion.  So we should try to reduce the number of such intersections…  The Moore curve is a slight variation on the Hilbert curve which puts the ‘start’ and ‘finish’ of the path next to one another.  If we merge the start and finish into a wide two-way road, we get this:

A space-filling road based on the Moore curve.

We still get the wasted square between neighborhoods, but somewhat reduce the amount of empty interior space.  Potentially, we could develop a slightly different pattern and alternate between blocks to eliminate the lost space between blocks.  Also, because the entrance and exit to the block coincide, we get to halve the number of intersections with the main road, which is a big win for traffic congestion.

Here’s a view of the full city; it’s still not super big, with a population of about 25k.  We still get pretty heavy congestion on the main ring road, though the congestion is much less bad than in some earlier cities I built.  In particular, the industrial areas (with lots of truck traffic) do much better with these long and twisty one-way streets.

Full view of Hilbertville, population 25k. There are fourteen blocks here; note the two Moore curve blocks at the bottom, and the slightly modified Hilbert curve block in the lower left, which reduces empty space at the expense of a slightly longer road.

The empty space is actually caused by all of the turns in the road; fewer corners means fewer wasted patches of land.  The easiest way to deal with this is to just use a ‘back-and-forth’ one-way road, without all of the fancy twists.

The other major issue with this style of road design is access to services.  Fire trucks in particular have a long way to go to get to the end of a block; the ‘fire danger’ indicators seem to think this is a bad idea.  I’m not sure if it’s actually a problem, though, as the amount of traffic within a block is next to none, allowing pretty quick emergency response in spite of the distance.

Overall, I would say it’s a mixed success.  There’s not a strong reason to favor the twisty space-filling curves over simpler back-and-forth one-way streets, and in either case the access for fire and trash trucks seems to be an issue.  The twistiness of the space-filling curve is mainly used for getting the right amount of locality to ensure continuity in the limit curve; this doesn’t serve a clear purpose in the design of cities, though, and the many turns end up creating difficult-to-access corner spaces.  On the bright side, though, traffic is reduced and pedestrian transit is strongly encouraged by the design of the city.

Principal Component Analysis via Similarity

PCA illustration from Wikipedia.

Recently I’ve seen a couple of nice ‘visual’ explanations of principal component analysis (PCA).  The basic idea of PCA is to choose a set of coordinates for describing your data where the coordinate axes point in the directions of maximum variance, dropping coordinates where there isn’t as much variance.  So if your data is arranged in a roughly oval shape, the first principal component will lie along the oval’s long axis.
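As a quick numerical illustration (plain numpy, not the derivation the post goes into), the principal directions fall out of the SVD of the centered data matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
# Roughly oval-shaped data: long axis along the (1, 1) direction.
pts = rng.normal(size=(500, 2)) * [3.0, 0.5]
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = pts @ R.T

Xc = X - X.mean(axis=0)                  # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
print("principal directions (rows):\n", Vt)
print("variance along each direction:", S**2 / (len(X) - 1))
```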

My goal with this post is to look a bit at the derivation of PCA, with an eye towards building intuition for what the mathematics is doing.

Continue reading

Roshambo Part III – Representation Theory

In the last two posts, we’ve looked at using machine learning for playing iterated Roshambo.  Specifically, we saw how to use Bayes’ theorem to try to detect and exploit patterns, and then saw how Fourier transforms can give us a concrete measurement of the randomness (and non-randomness) in our opponent’s play.  Today’s post is about how we can use representation theory to improve our chances of finding interesting patterns.

Niels Henrik Abel, for whom ‘Abelian groups’ are named.  These are groups where xy=yx for any x, y.

Continue reading

Roshambo Part II – Fourier Analysis

In the last post, we looked at using an algorithm suggested by Bayes’ Theorem to learn patterns in an opponent’s play and exploit them.  The game we’re playing is iterated rock-paper-scissors, with 1000 rounds of play per game.  The opponent’s moves are a string of choices, ‘r’, ‘p’, or ‘s’, and if we can predict what they will play, we’ll be able to beat them.  In trying to discover patterns automatically we’ll gain some general knowledge about detecting patterns in streams of characters, which has interesting applications ranging from biology (imagine ‘GATC’ instead of ‘rps’) to cryptography.
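As a flavor of what pattern-exploitation can look like (this is just a minimal frequency-counting sketch, not necessarily the Bayesian scheme from that post):

```python
from collections import Counter, defaultdict

BEATS = {"r": "p", "p": "s", "s": "r"}    # what beats each move

def predict_next(history, k=2):
    """Guess the opponent's next move from their history string.

    Counts what followed each length-k context so far and predicts the most
    common follower of the current context, falling back to the overall
    most common move when the context hasn't been seen.
    """
    followers = defaultdict(Counter)
    for i in range(len(history) - k):
        followers[history[i:i + k]][history[i + k]] += 1
    context = history[-k:]
    if followers[context]:
        return followers[context].most_common(1)[0][0]
    return Counter(history).most_common(1)[0][0] if history else "r"

history = "rpsrpsrps"                     # a toy opponent with an obvious cycle
guess = predict_next(history)
print("predicted:", guess, "so we play:", BEATS[guess])
```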

Fourier analysis is helpful in a wide variety of domains, ranging from music to image encoding.  A great example suggested by ‘Building Machine Learning Systems with Python’ is classifying pieces of music by genre.  If we’re given the waveform of a piece of music, automatically detecting its genre is difficult.  But applying the Fourier transform breaks the music up into its component frequencies, which turn out to be quite useful in determining whether a song is (say) classical or metal.
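Here’s a toy illustration of that idea with numpy (a synthetic two-tone ‘waveform’ rather than real music):

```python
import numpy as np

sample_rate = 8000                        # samples per second, toy signal
t = np.arange(0, 1.0, 1.0 / sample_rate)
wave = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

spectrum = np.abs(np.fft.rfft(wave))
freqs = np.fft.rfftfreq(len(wave), d=1.0 / sample_rate)
# The two largest peaks sit at the component frequencies, 440 Hz and 880 Hz.
print(freqs[spectrum.argsort()[-2:]])
```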

This goat recognizes and apparently enjoys metal. Possibly using a furrier transform… (sorry.)

Continue reading

Mombasa Algebraic Geometry Workshop

Balázs Szendrői works with participants at the Mombasa algebraic geometry workshop.

This week I’ve been giving lectures at an algebraic geometry workshop in Mombasa.  I know what some of you are thinking: ‘But Tom, you’re nothing like an algebraic geometer!’  And that’s true.  But often the best way to learn something is by putting yourself in a position where you have to know it, like standing in front of fifty people expecting a clear explanation.  In this case, I’ve learned some basics of Gröbner bases (mostly from an excellent book by Cox, Little, and O’Shea) and have been augmenting the lectures and exercise sessions with Sage.  I’ve written up some notes on the talks, and I’ll probably convert them into a web-based format with Sage cells and stuff sometime next week…

The workshop has participants attending from Kenya, Tanzania, Uganda, Rwanda, and Zambia.  The participants have been super motivated; we’re just finishing up the third and final week of the workshop, and the participants have been staying up late working on final projects.  Attendance has stayed high throughout the workshop, to a degree you wouldn’t expect in (say) North America.  The chance for exposure to math going on at the international level is a big draw, since it’s still so rare for international mathematicians to come through East Africa.  I imagine it’s like if you only got to eat ice cream once every year or three: you’re not going to let anything go to waste.

Continue reading

Bahir Dar Maths Camp

Group shot from the Bahir Dar math camp. The waterfall in the background is one of the major sources of the Nile!

I’ve spent this last week helping with the first-ever Ethiopian math camp, hosted by the math department at Bahir Dar University. As with the Maseno math camp, we focused on giving activity-based sessions, teaching interesting math topics outside of the standard curriculum. The intention is to boost student interest in maths and to expose some teachers to different ways of thinking about mathematics. The big difference between the Maseno and BDU camps is that the students in Ethiopia are mainly Amharic speakers, with maybe a couple years of learning English under their belt. This makes it essential to build up and utilize the local staff to a degree that we aren’t forced to in Maseno. Luckily, the local staff is bright, imaginative, and ready to try new things. On the whole, it was a fantastic first attempt.

We gave thirty-ish sessions, divided into five topic areas: geometry, scientific research, card tricks, history of numbers, and ‘rules.’ I was a few days late arriving at the planning week, due to some medical exams I needed to get done in Addis, and so was mainly designing the card trick sessions. I also did a lot with the geometry group and gave a session each on cryptography and complex numbers.

Continue reading

Two Weeks in Paris

A quick game of Go with Yan X Zhang at the Sage-Days in Orsay. I lost badly!

Back for a day in Nairobi after visiting Paris for FPSAC 2013 and Sage-Days 49.  On the whole, it was a really productive visit; I met a number of my primary goals.  On the mathematics front, Kenya has been extremely isolating: one of the big goals for the conference, then, was to find some new things to work on and figure out what’s been happening in the algebraic combinatorics world in the last year.  It was exciting to actually work on math with people: when I arrived in Maseno, it turned out that no new graduate students had come into pure maths in some time, which meant there was no real outlet for doing math with other people.  So it’s been kind of a lonely year: I did a lot of work on education, and did some interesting community building around computer science with LakeHub, but often felt like my big area of expertise really wasn’t terribly helpful in Kenya.  The institutions weren’t really ready to make use of what I was bringing, since there wasn’t time or space for people to do research.  I obviously found lots of great stuff to work on anyway, but it felt a bit funny that I was so unable to engage people on the maths.

Continue reading