Previously on project Betfair, we implemented a simple market making strategy and backtested it against some horse races, with disastrous results. Today, things get better.
Though most of this has been done on horse racing markets, I was also experimenting on-and-off with greyhound racing. I'll be looking at those in depth later, but for now, some quick plots.
First, most action in greyhound markets happens way closer to the beginning of the race. Half of all the bets are matched less than a minute before the race starts!
In addition, the back-lay spreads are much, much wider than in horse markets. Even 2 minutes before start there can be 5 ticks between the best back and lay quote.
So how does the naive version of CAFBot do on these markets?
Not that well. Looking at the odds chart for an example market, it seems like there are a lot of sporadic jumps where, sort of like in the example horse market from the previous post, the price goes "through" us (i.e. we're the last in the queue at a given price level to get our bet matched, after which that price level flips to the opposite side of the book).
Like I had mentioned before, there are plenty of papers on market making. Although they're written with real-world (equity, futures etc.) markets in mind, a lot of what they investigate still applies to this problem.
High-frequency market-making with inventory constraints and directional bets by Fodra, Labadie (2012) has a decent introduction to what a market maker does and what sort of controls it can have over its behaviour (together with some alarmingly large differential equations). It's based on an earlier paper: High-frequency trading in a limit order book by Avellaneda, Stoikov (2008) which had some very good presentation slides that I got some of the following ideas from. Finally, Exploring Market Making Strategy for High Frequency Trading: An Agent-Based Approach by Xiong, Yamada, and Terano has a nice overview of several market-making strategies.
According to those papers, there are two risks a market maker faces. The first is inventory risk, which is what we had an issue with last time. If one side of a market maker's quotes gets executed more often than the other, it accumulates an inventory of shares (in the case of equity markets) and is no longer market-neutral (its profit is now dependent on price moves in the market).
There are two main ways to control our market maker: by specifying at what odds (or whether) it makes the market, and by specifying the number of one-pound contracts it offers on each side. For example, if its exposure is getting too positive (it has backed a runner too much), it can either decrease the size of its offers or move its back offer to higher odds, thus demanding a higher premium for increasing its exposure even more. Managing inventory risk is essentially a trade-off: if we keep it to a minimum, we might miss out on earning the spread when there's not much price movement going on, whereas if we don't control it at all, we risk exposing ourselves to the market too much (and having to liquidate our position at the end).
My first attempt at managing inventory risk was fairly simple: give the bot a maximum inventory it can hold and size its quotes so that the amount it offers scales linearly from the full amount when it has no exposure down to zero when it's at the maximum. For example, if the maximum amount of contracts it can offer is 10 and the maximum inventory it wants is 30:
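In code, the sizing rule could look something like this (a minimal sketch; the function and names are mine, mirroring the numbers above):

def quote_sizes(inventory, max_offer=10, max_inventory=30):
    # scale down the side that would increase our exposure further;
    # the side that reduces it stays at full size
    frac = 1.0 - min(abs(inventory), max_inventory) / float(max_inventory)
    if inventory > 0:    # long: offer fewer contracts on the back side
        return max_offer * frac, max_offer
    elif inventory < 0:  # short: offer fewer contracts on the lay side
        return max_offer, max_offer * frac
    return max_offer, max_offer

print(quote_sizes(15))  # at +15 out of 30, the back offer halves: (5.0, 10)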
So, did it work?
Sort of. This is the PnL and position chart for the same market as in the previous post. It looks like the exposure management works: it's almost always bounded between -15 and +15 contracts. The profit has improved vastly (we now lose about £0.18 instead of £1.70 from the previous version), but this could be simply due to the fact that our average exposure is now smaller (on average -5 contracts) than that of the previous version (on average -65 contracts).
Running it on the full dataset we get this sad picture.
While the average profit increased, its standard deviation didn't decrease as much: the Sharpe ratio of this strategy is -0.91. To compare, the previous incarnation of CAFBot had a Sharpe ratio of -1.52 / 3.55 = -0.43.
This was a small feature that I had there from the beginning, actually. The naive CAFBot from the previous post only ever has offers at two odds: the best back and the best lay. But consider this:
What happened there at the 2.00 back level was that we had to cancel a bet and then replace it shortly afterwards when the prices recovered, losing our position in the match queue. The addition I had was to maintain bets at several price levels: the best back, 1 tick lower than the best back, 2 ticks lower than the best back (and likewise for the lay side) etc. That meant that, at the cost of a larger capital commitment, the bot would maintain its position in the queue if the prices moved away for a bit. This doesn't require any more betting actions for a 1-tick move, either: instead of cancelling a bet at 2.00 and replacing it at 2.02, the bot would, say, cancel a bet at 1.99 and replace it at 2.02.
The papers also mention where to make the market as a way to control the inventory (some of them define it as a mid ± spread). Most of them use a solution to those differential equations to come up with a function defining the bid and ask at which to make a market, but I decided not to go with that, since Betfair markets have varying granularities and spreads are usually already tight enough (in essence, anything more than a few ticks away from the best bid wouldn't have a chance of getting matched anyway).
Instead, I used an idea from Avellaneda and Stoikov's presentation slides where their agent would move the quote by 1 tick on one side after it had accumulated a certain inventory. So, for example:
For my limit in this test, I used 30 contracts: with 10 contracts offered, that meant the bot would move the quote after its offer got matched 3 times on one side.
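A sketch of that rule, with a toy tick ladder (the real Betfair ladder has varying increments, so the tick helpers here are simplifications):

def tick_up(odds):     # toy ladder with a fixed 0.01 increment
    return round(odds + 0.01, 2)

def tick_down(odds):
    return round(odds - 0.01, 2)

def make_quotes(best_back, best_lay, inventory, limit=30):
    # once the inventory hits the limit on one side, move that side's
    # quote one tick away from the touch: we'll only add to the position
    # at a better price, while the unloading side stays at the best quote
    back_quote, lay_quote = best_back, best_lay
    if inventory >= limit:      # too long: demand higher odds to back more
        back_quote = tick_up(back_quote)
    elif inventory <= -limit:   # too short: demand lower odds to lay more
        lay_quote = tick_down(lay_quote)
    return back_quote, lay_quote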
I also used the market-making-at-several-odds feature so as not to cancel bets unless the price moved too far away from them, placing bets at 3 levels on both sides of the book.
So, how did it do on our test market?
Oooh. We finally started making money, kind of. Closer to the race beginning, as matched volumes picked up, the MTM shot up dramatically, albeit dropping as dramatically in the end. It still ended trading with about £0.60 of profit, which is pretty cool.
In addition, the average exposure was -6.6 contracts: in a market where the odds were trending down (they went from about 3.9 to 3.4, 10 ticks), the bot's exposure was against it (since the contract price, the inverse of the odds, went up) and yet it still made money. Hence the bot did manage to profit from the spread.
Finally, placing our bets on several levels helped too: on this plot, the bold green and red lines are price levels at which the bot had an offer. You can see it's often the case that a green/red line is not interrupted when the price moves slightly away from it, meaning the bot doesn't cancel a bet and lose its position in the queue. In fact, on the same market, but with only 1 bet on both sides, the bot would have made only £0.20.
Let's test it on the whole dataset now.
Not that good, actually. The Sharpe ratio is better than the previous version, but we lost more money on average. At least making a market at 3 price levels helped (below are the results for placing just 1 bet on each side).
So what happened? Let's take a look at the worst market we traded.
It's... choppy. The price is jumping all over the place and there are very few times when it doesn't move (which is where this strategy really shines). There are, on the other hand, many points where the price goes "through" the bot's bet, resulting in it having an opposite exposure to the way the price went.
Looking closer, we can see how wild this market is: in the span of 90 seconds the odds crashed from 1.80 down 10 ticks to 1.70 and then back up again. In addition, in the first 30 seconds the best back/lay oscillated between 2 levels and the bot would diligently follow it, getting executed at 1.79 and then having its position closed at the same or worse price.
So all in all, while inventory control does help the bot not place too many directional bets, it still doesn't help it make money.
The second risk a market maker faces is adverse selection, the risk from information asymmetry. Someone who does cross the spread gets the benefit of choosing when their order gets executed, whereas a market maker simply posts orders and doesn't have any control over when or whether they will get hit. This creates a bias in favour of the person trading against the market maker: what if some of them are crossing the spread because they know the market is going to move, as in, they're informed traders?
Let's say there's been some sudden bad news from Apple during trading hours and people start selling Apple shares. A naive market maker that doesn't know what's happening will continue offering to buy them even though the market clearing price will very soon move down. In the case of Betfair markets, perhaps there can be a macro-level trend (even if the trend is a self-fulfilling prophecy) that the market maker doesn't take into account but slower traders do, trading in a way that perpetuates the trend.
So is there a way to protect ourselves against that? What if the bot could somehow anticipate imminent price movements that might hurt it if it has an exposure to the market and move its quotes or liquidate its position? Perhaps it can.
Next time on project Betfair, we'll delve deeper into market microstructure and find out that at a very micro level, markets sometimes aren't as unpredictable as we thought.
As usual, posts in this series will be available at https://kimonote.com/@mildbyte:betfair or on this RSS feed. Alternatively, follow me on twitter.com/mildbyte.
Interested in this blogging platform? It's called Kimonote, it focuses on minimalism, ease of navigation and control over what content a user follows. Try the demo here and/or follow it on Twitter as well at twitter.com/kimonote!
Previously on project Betfair, we started collecting dumps of Betfair order book data from the Stream API and learned how to reconstruct series of events from those with the aim of using them to backtest a market making strategy.
Today, things get messy.
As a market maker, our main source of profit should be our orders getting matched, and not us having a directional view (as in betting on which way the odds will move). Hence it's worth investigating how volumes of matched bets vary as the beginning of the race approaches.
I took the total bet volumes matched on each race through time, starting from 3 hours before the race, and normalized them by dividing by the final total volume just before the scheduled race start time (essentially getting the cumulative fraction of the total matched volume). There are 24 races in this dataset: I now have quite a few more, but their recording starts much closer to the kick-off, and you'll soon see why.
As you can see, bet matching in pre-race horse trading follows a power law. About half of the total bet volume is matched 10 minutes before the race begins, and 80% of all betting happens within 50 minutes from the race start. Note this doesn't include in-play betting where I guess this gets even more dramatic.
Hence there's not much point doing market making earlier than about 15 minutes before the race. Or could it be that the reason nobody is trading earlier is that the costs are too high?
I plotted the average back-lay spreads through time, and it doesn't seem so. The spreads here are in Betfair ticks: the smallest one is 1 (corresponding to, say, best available back odds of 1.72 and best lay odds of 1.73, or best back at 2.50 and best lay at 2.52). Even 3 hours before the race begins, the spreads, on average, are as tight as they can be.
One useful thing about the way the Betfair Stream API is designed is that, with the tools given to us, we can maintain the state of the order book in our program, as well as that of our own orders. It's essentially like having the data from API-NG available to us at all times, with notifications when it gets updated.
I decided to split the market-making bot into two parts.
The core would be given the current state of the order book and output the amount of one-pound contracts that it wanted to be offering at both sides of the book, together with the odds. The core's operation would be independent of what's happening to our outstanding orders: if we already have a back bet in the book and the core of the strategy wants to be offering the same amount, then no actions would be produced.
Reconciling the core's wishes with the actual state of our orders would be the execution harness' job. Given the desired offers we wish to maintain on both sides of the book and the state of our orders, it would produce a sequence of cancellations and order placements and submit them to Betfair.
Since all of these inputs (the state of the order book and all of our orders) are actually given to us by the Betfair Stream API, this has a neat bonus of behaving in a consistent way if the bot, say, crashes: we can just start it up again and the API will helpfully give us back all our outstanding orders and positions (not that it's a good idea to blindly trust it). Polling the strategy to find out what actions it wants to perform can be done at any point in time: on a schedule or, say, whenever a new event arrives from the Betfair stream.
So the whole strategy would look something like this:
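Roughly, in pseudo-Python (core, reconcile and submit here are stand-ins for the pieces described above, not the bot's actual code):

def poll(book, our_orders):
    # the core only looks at the order book and says what it wants to offer
    desired = core(book)  # e.g. {'back': (odds, contracts), 'lay': (odds, contracts)}
    # the harness diffs that against our outstanding orders and produces
    # the cancellations/placements needed to get there
    for action in reconcile(desired, our_orders):
        submit(action)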
There are some operational issues with this (such as Betfair not usually allowing bet placements below £2), but this is a good general platform to build upon and definitely good enough to start simulating.
With an order book simulator already built, augmenting it to inject strategy events is fairly easy. Let's say the simulator operates by maintaining a timestamped event queue (where the events are either placing a bet or cancelling one). We add another event type at given intervals of time (say, 100ms): when the backtester sees it, it gives the current order book and the state of the strategy's orders to the strategy, gets its actions from it and then inserts them back into the queue.
To simulate the delay between us and the exchange, the events are timestamped slightly into the future (say, 75ms) so the simulator might not get to apply player orders to the book immediately and may have to handle other orders before that.
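Here's roughly what that main loop could look like (a sketch with made-up names; the queue is a heap of timestamped events):

import heapq, itertools

POLL_EVERY, LATENCY = 100, 75   # milliseconds, as above
seq = itertools.count()         # tie-breaker so equal timestamps never compare events

def run(queue, strategy, book):
    while queue:
        ts, _, event = heapq.heappop(queue)
        if event == 'POLL':
            for action in strategy.poll(book):
                # the strategy's actions only reach the book LATENCY ms later
                heapq.heappush(queue, (ts + LATENCY, next(seq), action))
            heapq.heappush(queue, (ts + POLL_EVERY, next(seq), 'POLL'))
        else:
            book.apply(event)   # a historical placement/cancellation or one of ours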
In addition, I got the backtester to record various diagnostics about the strategy every time it polled it, such as how many contracts (actually matched bets) it owned, when its orders got matched and whether the matches were passive (initiated by someone else) or aggressive (initiated by the strategy) etc.
There will also be a DAFBot at some point. This was a very early version. It's fairly different to what I actually ran to make the following plots, but the general idea is the same. I also moved the whole codebase to using integer penny amounts instead of floats soon after this commit.
It was time to find something to plug into the backtester. I decided to first go with the most naive approach: the core would look at the current best back and lay offers and then offer 10 one-pound contracts (recall that this results in a bet of £10 / odds) on both sides.
Since the execution harness compares the desired offers with the current outstanding bets, that would mean that when the strategy was started, it would first place those two bets and then do nothing until at least one of them got fully or partially matched, at which point it would top the bet back up to maintain the desired amount. If the market were to move (the best back/lay offers changed), the strategy would cancel the orders and replace them at the new best bid/offer.
Finally, 15 seconds before the race start, the strategy would aggressively unload its accumulated exposure (say, if more of its back bets were matched than the lay bets) by crossing the spread (and not waiting to get matched passively).
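The whole core then fits in a few lines (a sketch; the field and function names are mine):

def naive_core(book, position, seconds_to_start):
    if seconds_to_start <= 15:
        # liquidate by crossing the spread: lay off a long position,
        # back off a short one, at the best currently available quotes
        if position > 0:
            return {'lay': (book.best_lay, position)}
        elif position < 0:
            return {'back': (book.best_back, -position)}
        return {}
    # otherwise, quote 10 one-pound contracts at the current best offers
    return {'back': (book.best_back, 10), 'lay': (book.best_lay, 10)}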
In terms of markets, I would simulate it from 1 hour before the beginning of the race, running it only on the favourite horse (the one with the lowest odds at that point).
With that in mind, I managed (after a few days of debugging) to simulate it on a few races. So, how did it do?
Here's a result from one race:
Well, this is not looking great.
To explain that plot, the red line is the strategy inventory, or position: how many one-pound contracts it's currently holding. For example, with a position of -170, the strategy will lose £170 if the runner we're trading wins (and £0 if it loses, but in any case we get to keep the money we got from selling the contract). The blue line is the mark-to-market of the strategy: how much money we would make/lose if we were to liquidate our position immediately by crossing the spread, like we do in the last 15 seconds before the race begins.
This is obviously just one race, but surely with an HFT strategy we would expect more consistent profit even within one race? For example, Greg Laughlin of the WSJ did a small investigation into the trading of Virtu, an HFT firm, based on their abandoned IPO filings, and showed that with 51% of their trades being profitable and about 3 million trades a day, their chance of having a losing day was negligible.
But we're not making that many trades here and so perhaps this could be a statistical fluke. It's a good thing I now have all the other race data, collected during further experiments, to check this against.
Nope. Running this against 104 races (albeit with 15 minutes of trading, since the latter datasets I collected were shorter), we can see that the bot has systematic issues and loses, on average, £1.52 per race.
So what happened in the first plot? It looks like we had a fairly chunky short exposure (betting against) throughout the trading. At its worst we were short 175 contracts, which is 17 times more than what we were offering to trade. So there was an imbalance in terms of which side of our offering was getting executed.
As you can see from the odds plot, the price of the contract (inverse of the odds) trended upwards throughout the final hour before the race, with a few range-bound moves closer to the start. That explains us losing money and also kind of explains us selling too many contracts: in a trending market, there would be more matching on one side of the book, hence a naive strategy that doesn't account for that would quickly get out of balance. Or at least that's what it looks like. There are many ways to read this.
This is the previous plot, but zoomed in towards the part where we lost money the fastest. The orange line shows the best available lay odds and the blue one is the back odds (both of which are where our simulated bot also puts its bets).
The green line in the middle is the volume-weighted average price, or the microprice. It's calculated as follows: $$ \text{VWAP} = \frac{\text{BBPrice} \times \text{BLVolume} + \text{BLPrice} \times \text{BBVolume}}{\text{BBVolume} + \text{BLVolume}} $$
Essentially, if the volume of available bets on one side is bigger, then that side kind of pushes the microprice towards the opposite side. It's a good way to visualize order book imbalance and, interestingly, in that plot, sometimes a move in the actual best back/lay odds can be anticipated by the microprice moving towards one side or the other. We'll look into that later.
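Transcribed into code (function and argument names are mine):

def microprice(bb_price, bb_volume, bl_price, bl_volume):
    # the heavier side pushes the microprice away from itself: lots of
    # back volume moves it up towards the lay quote
    return (bb_price * bl_volume + bl_price * bb_volume) / float(bb_volume + bl_volume)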
The crosses are the bot's orders getting executed: red means the back offer got executed (thus the bot has secured a lay bet and is now more short the market) and green means the lay offer got executed. The dots are the same, but for other people's orders.
What's strange here is that our orders often get hit shortly before there's a price move against us: for example, in the beginning, just before the best back moves from 3.80 down to 3.75 (still a 1-tick move), you can see several executions of the bot's back orders, with the bot quickly replacing them between executions. In essence, it's sort of trying to "hold the line" against a market move, but ultimately fails and ends up having bet against the market while the odds move through it. Well, at least we know it's a market move in hindsight: perhaps more people could have joined the bot on its side and maintained the price.
Essentially, the bot does well during times when there's a lot of matching going on on both sides of the book and the price isn't moving. This happens, for example, for a few minutes around 12:30:
But even in this case, this doesn't look too inspiring: the initial 1-pound jump in the PnL at 12:29 is because of the odds moving up and us having a short exposure. The actual market-making part between 12:29 and 12:33 only makes about £0.10.
So in the end, looks like it's back to the drawing board.
Not all was lost, obviously. I had a backtesting engine, a half-working strategy (in that it didn't immediately start bleeding money) and I had an idea of what was going wrong and some possible ways to fix it. And obviously no real money had suffered yet in the making of this.
Next time on project Betfair, we'll start reading some actual papers on market making and tinkering with the bot in attempts to make it lose less money. Perhaps I'll also talk about what was happening to the Scala version (which would later supposedly become the production version of this thing). Stay tuned!
Probably the hardest thing about writing this will be piecing together what I was doing and thinking at the time. Keeping a development diary was a great idea, it's just a shame I started it 1/3 of the way into the project and the rest is in various commit messages.
I would never have been able to get away with these commit messages at my previous job.
In any case, I started with how everything starts: gathering data.
Good news: Betfair has some APIs. Bad news: it costs £299 to access the API. Better news: I had already gotten an API key back in 2014 when it was free and so my access was maintained.
The first Betfair API I used is called API-NG. It's a REST API that allows one to do almost everything the Web client does, including getting a (live or delayed) state of the runner order book (available and recently matched odds), getting information about given markets and runners, placing actual bets or getting the account balance.
Since naming things is one of the hard problems in computer science, I decided to pick the Daedric pantheon from The Elder Scrolls to name components of my new trading system, which would be written in Python for now.
This was pretty much where I paused the design process: downstream of Hircine we would have some sort of a component that would determine whether to place a bet on a given market, but I didn't want to make a decision on what would happen later, since I wanted my further design decisions to be driven by looking at the data and seeing what would and wouldn't work.
I had decided to concentrate on pre-race greyhound and horse markets. Pre-race markets somehow still have dramatic odds movements (even though no new information that could affect the race outcome is emerging), but the odds don't move by more than 1-2 ticks at a time (unlike in football, where a goal can immediately halve the implied probability for some markets, like the Correct Score market). In addition, in-play markets have a bet delay of 1-10 seconds depending on the sport, whereas for pre-race markets a submitted bet appears in the order book immediately.
In terms of numbers, there are about 30 horse races in the UK alone on any given day (and those usually are the most liquid, as in, with a lot of bets being available and matched, with action starting to pick up about 30 minutes before the race begins) and about 90 greyhound races (those are less liquid than horses with most of the trading happening in the last 10 minutes before the race begins).
Now, I could either beat other people by making smarter decisions — or I could beat them by making dumb decisions quicker. I quite liked the latter idea: most high-frequency trading strategies are actually fairly well documented, I imagined that "high frequency" on Betfair is much, much slower than high frequency in the real world, and making software faster is better-defined than making a trading model make more money.
And, as a small bonus, trading at higher granularities meant that I didn't need as much data. If my strategy worked by trying to predict, say, which horse would win based on its race history or its characteristics, I would need to have collected thousands of race results before being able to confidently backtest anything I came up with. In a higher frequency world, data collection would be easier: there's no reason why the microstructure of one horse racing market would be different from another one.
The simplest one of those strategies would be market making. Remember me lamenting in the previous post that buying a contract and immediately selling it back results in a ~0.5% loss because of the bid-offer spread? As a market maker, we have a chance of collecting that spread instead: what we do is maintain both a buy (back) and a sell (lay) order in hopes that both of them will get hit, resulting in us pocketing a small profit and not having any exposure, assuming we sized our bets correctly. It's a strategy that's easy to prototype but has surprising depth to it: what if only one side of our offer keeps getting matched and we accumulate an exposure? What odds do we quote for the two bets? What sizes?
With that in mind, I realised that the way I was collecting data was kind of wrong. The idea of the strategy output being the desired position at a given time is easily backtest-able: just multiply the position at each point by the price movement at the next tick and take the cumulative sum. The price in this case is probably the mid (the average of the back and the lay odds) and for our cost model we could assume that we cross the spread at every trade (meaning aggressively take the available odds instead of placing a passive order and waiting to get matched).
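Assuming positions and mids are NumPy arrays sampled at the same instants, the whole backtest is a couple of lines:

import numpy as np

# position held over each interval times the mid price move over that interval
pnl = np.cumsum(positions[:-1] * np.diff(mids))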
Sadly, this doesn't fly with a market-making strategy: since we intend to be trading passively most of the time, we have to have a way to model when (or whether: while we're waiting, the price could move underneath us) our passive orders get matched.
Since we have to delve into the details of market microstructure, it's worth starting by describing how a Betfair order book actually works. Thankfully, it's very similar to the way a real exchange order book works.
When a back (lay) order gets submitted to the book, it can either get matched, partially matched, or not matched at all. Let's say we want to match a new back (lay) order N against the book:
It's easy to see that to get our passive order matched, we want to either offer a price that's better than the best price in the book (highest odds if we're offering a back (laying) or lowest odds if we're offering a lay (backing)) or be the earliest person at the current best price level. One might think of matching at a given price level as a queue: if we place a back at 1.70 (thus adding to the available to lay amount) and there is already £200 available at that level, we have to wait until that money gets matched before we can be matched — unless while we are waiting, the best available to lay odds become 1.69 and so we'll have to cancel our unmatched bet and move it lower.
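A single price level can thus be modelled as a FIFO queue; here's an illustrative sketch (my model, not Betfair's actual implementation):

from collections import deque

class PriceLevel:
    # FIFO queue of unmatched volume at a single odds level
    def __init__(self):
        self.orders = deque()   # [order_id, unmatched_size] in arrival order

    def add(self, order_id, size):
        self.orders.append([order_id, size])

    def match(self, size):
        # an incoming aggressive order eats the queue from the front
        fills = []
        while size > 0 and self.orders:
            order = self.orders[0]
            take = min(size, order[1])
            fills.append((order[0], take))
            order[1] -= take
            size -= take
            if order[1] == 0:
                self.orders.popleft()
        return fills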
So how do we use this knowledge to backtest a market-making strategy? The neatest way of doing this that I found was taking the order book snapshots that I collected and reconstructing a sequence of order book events from them: actions that take the order book from one state to the next. Those can be either back/lay order placements (possibly triggering a match) or cancellations. If these events were to be plugged back into the order book simulator, we would end up with order book states that mirror exactly what we started with.
However, if we were to also insert some of our own actions in the middle of this stream, we would be able to model the matching of passive orders: for example, if the best available lay at a given point in time was 1.70 and we placed an order at the better odds (for others) at 1.69, all orders in the event stream that would have matched at 1.70 would now be matching against our new order, until it got exhausted. Similarly, if a new price level was about to be opened at 1.69 and we were the first order at that level, all orders would be matching against us as well.
So I set out to convert my collected snapshots into a series of events. In theory, doing this is easy: every snapshot has the odds and volumes of available back and lay bets, as well as the total amount traded (matched) at each odds. So, to begin, we take the differences between these 3 vectors (available to back, available to lay, total traded) in consecutive snapshots. If there's no traded difference, all other differences are pure placements and cancellations without matches. So if there's more available to back/lay at a given level, we add a PLACE_BACK/PLACE_LAY instruction to the event stream — and if there's less, we add a CANCEL_BACK/CANCEL_LAY instruction for the given amount. Since we don't know which exact order is getting cancelled (just the volume), the order book simulator just picks a random one to take the volume away from.
Things get slightly more tricky if there's been a trade at a given odds level. If there's only been one order book action between the snapshots, it's fairly easy to infer what happened: there will be a difference in either the outstanding back or lay amounts at those odds, meaning there was an aggressive back/lay that immediately got matched against the order. Sometimes there can be several odds levels for which there is a traded amount change — that means an order was so large that it matched against orders at several odds. After the traded amount differences are reconciled, we need to adjust the available to back/lay differences to reflect that and then proceed as in the previous paragraph to reconcile unmatched placements/cancellations.
There was a problem with the data I collected, however: API-NG limits the number of requests to about 5 per second. Worse still, even though I was making only about 3 requests per second, the API would sometimes throttle me and not return an HTTP response until a few seconds later. This meant that consecutive order book snapshots had more than one event between them, and in some cases there were way too many possible valid sets of events that could have brought the order book from one state to the other.
Thankfully, a slightly more modern way of getting data was available.
Unlike its brother, API-NG, the Exchange Stream API is a push API: one opens a TCP connection to Betfair and sends a message asking to subscribe to certain markets, at which point Betfair sends back updates whenever an order placement/cancellation/match occurs. Like its brother, the Stream API uses JSON to serialise data as opposed to a binary protocol. Messages are separated with a newline character and can be either an image (the full order book state) or an update (a delta saying which levels in the order book have changed). The Stream API can also be used to subscribe to the state of user's own orders in order to be notified when they have been matched/cancelled.
This was pretty cool, because it meant that my future market making bot would be able to react to market events basically immediately, as opposed to 200ms later, and wouldn't be liable to get throttled.
Here's an example of a so-called market change message:
{"op":"mcm","id":2,"clk":"ALc3AK0/AI04","pt":1502698655391,"mc":[
{"id":"1.133280054","rc":[
{"atb":[[5.7,5.7]],"atl":[[5.7,0]],"trd":[[5.7,45.42]],"id":12942916}]}]}
This one says that in the market 1.133280054, we can't bet against (lay) the runner 12942916 at odds 5.7 any more, but instead can bet up to £5.70 on it. In addition, the total bets matched at odds 5.7 now amount to £45.42 (actually it's half as much, because Betfair counts both sides of the bet together). In essence, this means that there was an aggressive lay at 5.7 that got partially matched against all of the available lay, with £5.70 of it remaining in the book as available to back, basically moving the market price.
As for the other parts of the message, pt is the timestamp and clk is a unique identifier that the client can send to Betfair at subscription time to get the stream of changes starting from that point (and not from the point the client connects: this is used to restart consumption from the same place if the client crashes).
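For illustration, a minimal subscriber could look like this (endpoint and message formats as per Betfair's Stream API documentation, from memory, so treat the details as approximate; APP_KEY, SESSION_TOKEN and handle_market_change are placeholders, and error handling is omitted):

import json, socket, ssl

HOST = "stream-api.betfair.com"

ctx = ssl.create_default_context()
sock = ctx.wrap_socket(socket.create_connection((HOST, 443)), server_hostname=HOST)

def send(op):
    sock.sendall((json.dumps(op) + "\r\n").encode())

send({"op": "authentication", "appKey": APP_KEY, "session": SESSION_TOKEN})
send({"op": "marketSubscription", "id": 2,
      "marketFilter": {"marketIds": ["1.133280054"]}})

for line in sock.makefile():
    msg = json.loads(line)
    if msg.get("op") == "mcm":              # a market change message, as above
        for market in msg.get("mc", []):
            handle_market_change(market)    # e.g. update our book state with it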
Another program had to be added to my Daedric pantheon. Sanguine, "whose sphere is hedonistic revelry, debauchery, and passionate indulgences of darker natures", would connect to the Stream API, subscribe to the order book stream for a given market and dump everything it received into a text file. It was similar to Hermaeus, except I had given up on creating a proper schema at this point to store the data into PostgreSQL (and even in the case of Hermaeus it was just the metadata that had a schema, the actual order book data used the json type). Still, I was able to schedule instances of it to be executed in order to record some more stream data dumps.
And indeed, the data now was much more granular and was actually possible to reconcile (except for a few occasions where some bets would be voided and the total amount traded at given odds would decrease).
Here's some code that, given the current amounts available to back, lay and traded at given odds, as well as a new order book snapshot, applies the changes to our book and returns the events that have occurred.
def process_runner_msg(runner_msg, atb, atl, trds):
    # runner_msg: the 'rc' entry for one runner in a market change message.
    # atb/atl/trds: our current state -- pennies available to back, available
    # to lay and traded at each odds (assumed to default to 0, e.g. defaultdict(int)).
    events = []
    # represent everything in pennies
    back_diff = {p: int(v * 100) - atb[p] for p, v in runner_msg.get('atb', [])}
    lay_diff = {p: int(v * 100) - atl[p] for p, v in runner_msg.get('atl', [])}
    trd_diff = {p: int(v * 100) - trds[p] for p, v in runner_msg.get('trd', [])}

    for p, v in trd_diff.iteritems():
        if not v:
            continue
        if v < 0:
            print "ignoring negative trade price %f volume %d" % (p, v)
            continue
        # prefer the side whose outstanding volume decreased: that's the side
        # the aggressive order consumed
        if p in back_diff and back_diff[p] < 0:
            back_diff[p] += v / 2  # betfair counts the trade twice (on both sides of the book)
            atb[p] -= v / 2
            trds[p] += v
            events.append(('PLACE_LAY', p, v / 2))
        elif p in lay_diff and lay_diff[p] < 0:
            lay_diff[p] += v / 2
            atl[p] -= v / 2
            trds[p] += v
            events.append(('PLACE_BACK', p, v / 2))
        # otherwise attribute the trade to whichever side the message mentions
        elif p in back_diff:
            back_diff[p] += v / 2
            atb[p] -= v / 2
            trds[p] += v
            events.append(('PLACE_LAY', p, v / 2))
        elif p in lay_diff:
            lay_diff[p] += v / 2
            atl[p] -= v / 2
            trds[p] += v
            events.append(('PLACE_BACK', p, v / 2))
        else:
            print "can't reconcile a trade of %d at %.2f" % (v, p)

    # these were aggressive placements -- need to make sure we're sorting backs (place_lay)
    # by price descending (to simulate us knocking book levels out) and lays vice versa
    events = sorted([e for e in events if e[0] == 'PLACE_LAY'], key=lambda e: e[1], reverse=True) + \
             sorted([e for e in events if e[0] == 'PLACE_BACK'], key=lambda e: e[1])

    # the remaining differences are passive placements and cancellations
    for p, v in back_diff.iteritems():
        if v > 0:
            events.append(('PLACE_BACK', p, v))
        elif v < 0:
            events.append(('CANCEL_BACK', p, -v))
        atb[p] += v
    for p, v in lay_diff.iteritems():
        if v > 0:
            events.append(('PLACE_LAY', p, v))
        elif v < 0:
            events.append(('CANCEL_LAY', p, -v))
        atl[p] += v
    return events
Parallel to this, I was also slowly starting to implement an Exchange Stream API client that would subscribe to and handle market change messages in Scala. This contraption, named Azura (whose sphere is "the period of transition and change"), would form the base of the actual live market making bot and/or any other trading strategies that would use the Stream API.
Next time on project Betfair, we'll start implementing and backtesting our market making bot, as well as possibly visualising all the data we collected.
Like I had said in a previous post, I tried my hand at automated trading on Betfair for a couple of months. While I didn't make much money from it, I still think there are a lot of cool things I could write about. Here's a rough outline of what you will be able to read about in this series when I'm done:
Let's get into it, shall we?
Betfair is a betting exchange, similar to other betting exchanges (BetDAQ, Smarkets etc). Unlike a classical bookmaker (say PaddyPower or Ladbrokes), Betfair allows one to bet against an outcome (a lay bet) as well as for it. In addition, all bets on an exchange are offered by other participants in the market, not Betfair itself.
This allows people to make money not only by betting on something (essentially having a better judgment of the likelihood of a given event than other participants) but also by trading (locking in a profit by backing and laying at different odds as they move through time).
The exchange consists of events (such as, say, a given match between Arsenal and Huddersfield). Events consist of markets (such as the Match Odds market or the Exact Score market), which consist of runners (for a Match Odds market, the runners are Huddersfield, Arsenal and Draw; for the Exact Score market, they're all the scores from 0-0 to 3-3, plus Any Unquoted).
Arsenal vs Huddersfield event view
Runners are probably named like that because most gambling used to be on horses. In essence, runners are almost always an exact set cover of the outcomes in a given market (they're disjoint and account for all outcomes). Almost, because Betfair has "Dead Heat" rules regarding what happens to bets when, say, two horses cross the finish line at the same time.
Inside a runner, we have an order book: the odds one can take to back or lay a given runner, as well as the amount of money one can put on it. Since when backing, we always want bigger odds (if I put £1 on something at 2.0, I get back £2 if it wins and if the odds are 3.0, I get back £3 while risking the same amount of money), back offers with higher odds take priority and get matched first.
Note that a back offer appears in the order book if and only if there's someone offering to take the other side (i.e. lay the runner). When laying, lower odds are obviously better (by laying £1, I get £1 no matter what, but, if the runner wins, I will have to pay out more at higher odds).
Order book for Arsenal winning the match
This is similar to a real exchange order book where, slightly counter-intuitively, the odds are prices and the money we're betting on something are volumes (in fact, that's how the Betfair API internally refers to order book lines).
Unlike a real-world exchange, however, the odds have varying granularities and aren't in one-cent increments. One can bet at odds of 1.01, 1.02, ..., 2.02, 2.04, ..., 3.05, 3.10, ... This is to reflect that at higher odds, risks/rewards are higher for the same bet size and so a move from odds of 1.01 to 2.0 is similar to a move from 500.0 to 1000.0. More information here.
When betting, we're looking at odds and payoffs. This is different from trading a financial instrument, where we think about it in terms of our position. For example, if I'm long an Apple share, I can sell it, thus having 0 shares and my net worth now being independent of Apple's performance, or sell 2 shares, becoming short and benefiting from Apple shares dropping in price.
It would be nice to produce something similar for betting: that way, when we're trading, we can operate with familiar terms (I'm long 1 contract on this market, I sell 1, I'm now neutral: no matter whether the horse/football team loses or wins, the money I have at the end doesn't change).
Let's think of Betfair bets in terms of one-pound contracts (or an OPC): when a given event has ended, such contract can have a value of either £1 (if say the horse we're betting on has won) or £0.
A normal bet can be either a back (betting for) or a lay (betting against). For a back of \( b \) at odds \( o \), the payoff is \( b * o \) if the runner wins and 0 if the runner loses. The actual contract costs us \( b \).
For a lay of \( b \) at odds \( o \), the payoff is opposite: \( -b * o \) if the runner wins and 0 if the runner loses — but we get \( b \) from selling the contract in any case.
How do we transform between the normal odds world and the OPC world? Let's say we back an outcome by putting \(b \) pounds on it at odds of \( o\). This is the same as buying \(b * o\) OPCs at the price \(\frac{1}{o}\) each. Hence:
Laying a runner for \( b \) at \( o \) is the same as selling \( b * o \) OPCs at the price of \(\frac{1}{o} \) each: we immediately get \(\frac{1}{o} * b * o = b \) pounds and:
Note that the back outcomes and the lay outcomes are inverses of each other: one's gain is another's loss, that is, betting is a zero-sum game.
What does this price mean? It's actually the implied probability of a given runner winning. Consider the expected payoff of backing a runner: it's \( (1 - p) * 0 + p * (b * o) - b = p * b * o - b \). On average, we would expect to make zero money from this (otherwise we could easily make money by backing a given runner all the time). Hence \( p = \frac{1}{o} \).
So essentially we can use the price of an OPC and the implied probability interchangeably and then:
It's possible to offset a back bet with a lay bet to, say, lock in a profit or a loss (also known as hedging or greening-up: no matter who wins, we get or lose the same amount of money). Using the OPC framework, however, this becomes much more intuitive.
Let's say we're trading a runner where the odds moved from \( o_1 \) down to \( o_2 \). We have placed a back bet of \( b_1 \) at \( o_1 \). Hence we have bought \( b_1 * o_1 \) OPCs for \( b_1 \).
When the odds become \( o_2 \), we want to sell these OPCs: we get \( \frac{b_1 * o_1}{o_2} \) pounds for them (in Betfair world, we have placed a lay bet of \( \frac{b_1 * o_1}{o_2} \) at the available odds of \( o_2 \)). Now we have \(\frac{b_1 * o_1}{o_2} - b_1 \) pounds and 0 OPCs, so we are market neutral in our framework.
Does this actually work? If the runner wins:
If the runner loses:
So the payoff is the same in both cases.
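Here's the same calculation as a quick sketch (function name mine):

def green_up(back_stake, back_odds, lay_odds):
    # buy back_stake * back_odds OPCs, then sell them all at 1/lay_odds each
    contracts = back_stake * back_odds
    lay_stake = contracts / lay_odds     # the lay bet that flattens us
    return lay_stake, lay_stake - back_stake

print(green_up(10, 3.0, 2.5))  # back £10 at 3.0, odds move to 2.5: lay £12, £2 locked in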
On Betfair, one can often bet on a conjunction of disjoint events (events in the probability-theory sense, not in the Betfair sense) as well as on each of those events separately. For example, Arsenal winning a given football match is the same as Arsenal winning 1-0, or Arsenal winning 2-0, or Arsenal winning 2-1, and so on. Technically, the available bets in that case only go up to 3 goals on each side, but scores beyond that are fairly rare (and in the proposed strategy we end up better off with an unquoted outcome anyway).
Assuming these two events (Arsenal winning at all and Arsenal winning with either of the available scores) are the same, their implied probabilities and hence prices should also be the same. So it's possible that we can sometimes lock in a small risk-free profit by backing the total outcome and laying each one of its constituents.
How do we calculate whether this arbitrage exists?
Let's say an event \(E\) is a conjunction of disjoint events \(E_1 ... E_k\). If we can bet on these events with implied probabilities (prices) \(p, p_1, ... p_k\) we can buy 1 OPC on \(E\) for \(p\) and then immediately turn around and sell \(k\) OPCs on \(E_i\) for \(p_i\) each (getting \(p_1 + p_2 + ... + p_k\) in total).
Since these events are disjoint, if \(E\) happens, only one of \(E_1, ..., E_k\) happens. Then the contract on \(E\) becomes worth £1, the contract on one and only one \(E_i\) is worth -£1 and the rest are worth £0, hence we're only left with the money from the original trade (the contracts offset each other).
If \(E\) doesn't happen, all OPCs expire worthless. Hence we're market-neutral.
The money we get from buying a contract and immediately selling its constituents is \(-p + p_1 + p_2 + ... + p_k\). So if \(p\) is greater (or less) than the constituents, there's an arbitrage.
In fact, in this case, if the first team wins with an unquoted score (i.e. more than 3 goals), then none of the \(E_i\) happens and we pocket a profit of \(p_1 + p_2 + ... + p_k\).
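Checking for the arbitrage is then a one-liner (the odds in the usage example are made up):

def arb_profit_per_contract(total_odds, constituent_odds):
    # implied probabilities are inverse odds; a positive result means backing
    # the total outcome and laying the constituents locks in a profit
    return sum(1.0 / o for o in constituent_odds) - 1.0 / total_odds

# e.g. the match winner at 2.0 against the individual winning scores
print(arb_profit_per_contract(2.0, [8.0, 9.0, 11.0, 15.0, 21.0, 34.0]))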
In practice, there are a few real-world issues that make this unrealistic, such as bid-offer spreads. If we back something and then immediately lay it, we lose money, since there's a difference between the best available back odds and the best available lay odds. Even with the smallest spreads (for example, backing at 1.71 and laying at 1.72), the cost is about 0.5%. Worse still, Betfair charges a 5% commission on the winnings in a given market: since in our case we're trading 2 markets (Match Odds and Exact Score) with only one of them winning, we have to have an expected profit of 5% just to break even.
Thanks for reaching the end of this math-heavy bit! Next up, we'll start collecting data from the Betfair Stream API and things might begin moving faster.
Posts in this series will be available at https://kimonote.com/@mildbyte:betfair or on this RSS feed. Alternatively, follow me on twitter.com/mildbyte.
Interested in this blogging platform? It's called Kimonote, it focuses on minimalism (the only JavaScript is MathJax on this and another page), ease of navigation and control over what content a user follows. Try the demo here and/or follow it on Twitter as well at twitter.com/kimonote!
It's been more than 3 months since my last post. Have I finally sunk into the depths of depression and debauchery since leaving my job? Do I start my day by waking up at 11am and having a mimosa? Do I end it with a can of Pringles at 3am, going through a box set of The Wire?
As if.
I tried my hand at automated trading on Betfair, which is a sports betting exchange. Automated trading on Betfair is similar to automated trading in real life, except the stakes are lower, the taxes are none and the APIs are more pleasant.
In the process of getting this whole system to work, I managed to do many interesting things:
In fact, even if it didn't make money, I think there's a lot of interesting stuff and pretty pictures to look at and write about. Perhaps one day I even will.
I read some Bitcoin and Ethereum whitepapers. It's a pretty cool idea.
You may have noticed that it looks like the blog isn't statically generated by Hakyll any more. That's because it isn't.
One day when I was taking a break from my Betfair adventures, I decided to reorganise all the stuff in my private diary (that I use DokuWiki for) and realised that it would be cool to:
At least that's what I think my thought process was now, back then it was more like "what if I learned Django?".
So I made Kimonote, which I guess is a note-organizer-slash-blogging-platform that focuses on minimalism and ease of navigation. Wanna see my previous post tagged 'programming'? It's there, on the left, in the sidebar. Previous post on the blog? It's also there. News from Paddington? Here. News from Paddington, but in a format that allows you to consume the whole history without clicking "next"? Sure. Project Morrowind and my posts on Raspberry Pi from before I went to university? No problem.
It's Javascript-free as well and uses basscss for styling. This means this whole page takes up 20KB, about 100 times less than a small Medium blog post.
Interested? Head on to the landing page: there's a sign-up link there that lets you... no, not sign up, but leave your email so I can see if it's worth making it more than a place where my blog lives.
In the middle of 2015 I moved to London for my first job and wrote a few Facebook posts about my new life. As I found some of my friends quoting them at random, I decided to continue documenting my slow descent into madness, culminating in about 10k words worth of rambling. I finally (as of mid-2017) cleaned them up and made them available to a wider audience.
Best viewed like this.
...By about 10am I had made the decision to throw the badminton rackets away. It's amazing how much extra stuff you become willing to let go of when your mission is to pack the contents of a large room into a suitcase that's much smaller than the room. Pure physics, really.
Before that, after a couple of failed rounds of Sokoban, I also had to say goodbye to a sleeping bag that in its rolled-up shape took up half of the suitcase. It only feels like yesterday I picked you up from Argos, I told it as I dropped it off into the clothes donation bin. Leicester won the Premier League on that day. David Cameron was Prime Minister and Hillary Clinton was making her way through the Democratic primaries. I'll miss you, old buddy.
It didn't respond. Probably trying to choke back its tears.
The cocktail shaker used to dispose of random bottles of alcohol in the fridge in a civilised way also will have to stay here. So will all the random cutlery and the massive shelf (yes, we brought a shelf here. No, I'm not taking it with me). So will the ice trays.
Remember that time I forgot to defrost the freezer and you ended up stuck inside a massive block of ice, I ask them. Isn't it ironic? Instead of having ice inside the ice trays, I ended up with ice trays inside of ice. It took me a whole weekend to get you out. Was it worth it?
As I shut the fridge door, I think I hear them say "Retribution". I open the door again and stare at them. Probably nothing.
I take one last glance around the room to make sure I disposed of all incriminating evidence. Minutes later, key handed over, I'm on a train.
THE END
Chatbots are basically a clunky commandline interface to things that sometimes really need a custom UI.
But what if I do want to make something that uses some of the features (a GPS receiver, a camera, a microphone) that a modern phone has? I'm too lazy to write device-specific code and dig around in the intrinsics of Android/iOS APIs.
So after doing some research on modern messenger apps (I kind of fell behind on what was going on after the WhatsApp acquisition, and it turns out billions more have popped up since then) I stumbled upon the Telegram bot API. And it's actually pretty simple. In a nutshell, you create a bot (by messaging another bot), which gives you a token. The token is the only thing your bot needs to communicate with the Telegram servers (so no coding up handshakes or managing session keys): it forms part of the REST endpoint that you throw queries at. The connection is over SSL, which takes care of your ISP or a kid with a WiFi dongle and Wireshark grabbing hold of your token. Bot chats aren't end-to-end encrypted though, so Telegram is still able to read whatever you talk to the bot about.
With that in mind, receiving messages is easy: just shoot a GET request at https://api.telegram.org/bot(TOKEN)/getUpdates (reference) and it will come back with a JSON-serialised list of events (message sent, message edited etc.) that happened to your bot. Each one has a unique sequence number which you can use to seek around in the update log (say, to get updates only starting from the last one you processed). Updates related to messages have in them a chat ID identifying your conversation with a given user -- and you include that chat ID in your POST requests to https://api.telegram.org/bot(TOKEN)/sendMessage (reference) in order to send messages back to that user.
You can also send around various other things besides text messages, like locations (latitude-longitude pairs), photos (your bot gets some links to various-sized thumbnails of the photo), contacts etc.
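Putting it together, a minimal bot is just a polling loop around those two endpoints (a sketch: the token is a placeholder and error handling is omitted):

import requests

TOKEN = "123456:replace-me"   # what @BotFather gives you
API = "https://api.telegram.org/bot" + TOKEN + "/"

def get_updates(offset=None):
    # long-poll for new events; pass the last update_id + 1 to skip processed ones
    r = requests.get(API + "getUpdates", params={"offset": offset, "timeout": 30})
    return r.json()["result"]

def send_message(chat_id, text):
    requests.post(API + "sendMessage", data={"chat_id": chat_id, "text": text})

offset = None
while True:
    for update in get_updates(offset):
        offset = update["update_id"] + 1
        msg = update.get("message", {})
        if "location" in msg:   # the latitude-longitude pair mentioned above
            send_message(msg["chat"]["id"], "Getting warmer?")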
So I managed to write Indiana, a treasure hunt bot that comes up with a random location inside Hyde Park (well, inside a rectangle whose 4 corners lie within Hyde Park, and yes, that means it can sometimes put the treasure in the water or in some restricted areas, and I take no responsibility for you ending up there) and, when sent a location, replies back with a rough estimate of how far the treasure is. Sort of like Pokemon Go without having to lug around an extra power pack. Note that you can also send the bot a manual location -- it can't distinguish between that and a physical location read from GPS (thankfully).
So that you don't think that the 1-year delay in posting part 6 was due to me manually drawing the population heatmaps in Paint, I finally split the code I used to produce all the plots into a set of modules and uploaded them to GitHub. You'll need the usual scientific Python stack (NumPy, SciPy, matplotlib as well as PIL) and a C++ compiler. Since I wasn't sure if it's a good idea to post the game data dump that I produced, you'll have to make it yourself: you'll need the original Morrowind.esm data file and Enchanted Editor (instructions on how to produce the dump are in the README).
With all that in mind, I've run the code end-to-end and it spat out a similar set of images to what I have on the blog, which makes me incredibly happy.
Now it's time to get back to Cookie Clicker!
I told you I'd be back in a year's time.
With Aryon safe back in his tower and with all inhabitants of the island maximising the efficiency of their travel, it was time to approach a new challenge and create some more pretty pictures. The next question was simple: where the hell are all the people and what do they do?
Let's try and use our cool matrix that converts in-game coordinates to coordinates on a map to its full extent and create some sort of a population heatmap. This isn't difficult to do since we already have all the pieces of the puzzle: we know where all the NPCs are located and what their occupation, race and gender are. The only problem is dealing with NPCs that are in the interior: remember how interiors are completely separate mini-worlds? This means that we can't simply infer someone's location in the exterior by taking the coordinates of the two doors and adding up an offset of the NPC from the door, since interiors often are bigger on the inside than what they look like from the outside. Since we'd only be looking at a world-scale overview, I decided not to bother with precision: the actual exterior location of an NPC is simply the location of the closest exterior door they can get to (by number of cells they have to traverse to get outside).
Armed with these tools, I went through all the NPCs in the world, getting their exterior location, and converted that location into coordinates on the map. I had a map-sized matrix where I accumulated those coordinates: the number at each pixel was the number of NPCs whose exterior coordinates fell within that square. This meant that I'd get disproportionately large amounts of people piling up at the doors of densely-populated interiors, which wasn't optimal as it was difficult to see on the image (after all, it's just one pixel) and wasn't representing the in-game reality well: after all, we are interested in the population in a given city/region and people don't really stand in one spot either, instead roaming around.
Hence I applied a Gaussian blur to my matrix so that instead of 10 people assigned to one pixel we'd be looking at something like 2.2 people on that pixel, 1.1 people one pixel away, 0.5 people 2 pixels away etc. If this feels like chopping people into parts and throwing those body parts around so they form a nice hill, it's because it kind of is.
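The accumulate-and-blur step is only a few lines with SciPy (a sketch: map_height, map_width, npcs and the two coordinate helpers stand in for the machinery described above):

import numpy as np
from scipy.ndimage import gaussian_filter

heat = np.zeros((map_height, map_width))
for npc in npcs:
    x, y = to_map_coords(exterior_location(npc))  # closest-exterior-door trick
    heat[int(y), int(x)] += 1
# chop everyone up and spread the parts over nearby pixels
heat = gaussian_filter(heat, sigma=25)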
With that out of the way, I normalised the matrix so that all values were between 0 and 1, applied one of the numerous colormaps that matplotlib has (I quite liked the one called blues) and blended it with the original map. I also toyed around with applying a transfer function to the inputs before pushing them into the colormap since I didn't like the way it looked by default -- I chose a logistic function:
\[ f(t) = \frac{1}{1 + e^{-k(t-c)}} \]
I didn't really have a methodology here: varying $k$ changes the steepness of the curve (how quickly things go from the left side of the colormap to the right, getting brighter) and varying $c$ changes where it's centred, so I tinkered with both for each picture until it looked good.
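Put together, the colouring step might look something like this (the blend weight and helper structure are my assumptions, not the exact code I used):

import numpy as np
import matplotlib.cm as cm
from PIL import Image

def overlay_heatmap(density, base_map, k=8, c=0.2, alpha=0.6):
    # Normalise to [0, 1], push through the logistic transfer function
    # and blend the colormapped result with the base map image.
    t = density / density.max()
    t = 1.0 / (1.0 + np.exp(-k * (t - c)))
    rgba = (cm.get_cmap('Blues')(t) * 255).astype(np.uint8)
    heat = Image.fromarray(rgba).convert('RGB')
    return Image.blend(base_map.convert('RGB'), heat, alpha)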
With that in mind, let's see what we ended up with!
draw_npcs(filter_sigma=25, sigmoid_k=8, sigmoid_c=0.2, output='map_population.png')
We get dark blobs in large population centres: bottom to top, Vivec (and Ebonheart next to it), then Balmora (southwestern part of the island), Sadrith Mora (far east), Ald'ruhn (north of Balmora) and Gnisis (northwest of Ald'ruhn). There are also some minor places highlighted here and there -- these are either smaller settlements or larger dungeons/strongholds/shrines.
What else can we do with it? How about mapping out all the Dark Elves? Easy: instead of all the NPCs, we only mark the ones we care about:
draw_npcs(filter_sigma=25, mark_npcs=[n for n in npcs if n.race == 'Dark Elf'], sigmoid_k=8, sigmoid_c=0.2, output='map_population_darkelf.png')
Yes, it looks just like the population heatmap. How about seeing where they are overrepresented or underrepresented? We can divide one overlay by the other, essentially getting the fraction of Dark Elves in the population:
draw_npcs(relative=True, filter_sigma=50, mark_npcs=[n for n in npcs if n.race == 'Dark Elf'], sigmoid_k=4, sigmoid_c=0.5, output='map_population_darkelf_relative.png')
I did have to play around with the parameters for this one (increasing the blur radius and moving the centre of the sigmoid to 0.5), but we can sort of see how the Dark Elves (natives of Morrowind) are less represented in the southwestern part of the island (which is more cosmopolitan and welcoming towards foreigners) and more represented in the eastern territories, as well as around the Ashlander camps (which consist almost completely of them).
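The division itself needs a little care around pixels where nobody lives; a sketch of the relative overlay, assuming both layers have already been blurred:

import numpy as np

def relative_density(subset_density, total_density, eps=1e-9):
    # Fraction of the subset among the total population per pixel;
    # pixels with essentially nobody in them are zeroed out.
    frac = subset_density / np.maximum(total_density, eps)
    frac[total_density < eps] = 0.0
    return frac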
What else can we do? Morrowind has slavery! Let's find out where all the slaves are concentrated:
draw_npcs(relative=True, filter_sigma=25, mark_npcs=[n for n in npcs if n.class_name == 'Slave'], sigmoid_k=8, sigmoid_c=0.2, output='map_population_slave_relative.png')
No blobs around big cities and towns -- which makes sense, since this is a relative fraction. Instead, what's highlighted for us are the dungeons and plantations around the world where slaves are held, such as the Abebaal Egg Mine, Dren Plantation, some slave markets, Rotheran and Hlormaren. Interestingly, for the latter the blob (west of Balmora, by the sea) is west of the actual stronghold -- that's because the slaves are held in its sewers, whose exit comes out around there.
Of course we would never use this tool for our own selfish purposes:
draw_npcs(relative=True, filter_sigma=50, mark_npcs=[n for n in npcs if n.is_female], sigmoid_k=12, sigmoid_c=0.7, output='map_population_female_relative.png')
There are very few places on the island where females are overrepresented (note that I set the centre of the sigmoid at 70%) -- the only one of them that's a town is Tel Mora in the northeast. That's because the councillor of that town "does not enjoy the presence of men" and all the residents of that town are indeed women. Another place is Odirniran in the southeast, a Telvanni stronghold under attack by House Hlaalu. Northwest of that we have Assu with two sorceresses, and north of that is Tel Uvirith -- a stronghold that gets built for the player as part of the Telvanni questline. It's disabled at the start of the game (and is invisible), but the scraper obviously didn't care about that.
Next year on project Morrowind, I promise I'll actually get around to cleaning up the source code that was used to make all this and releasing it. Promise.
A couple of days of this site being up and I've already made some friends.
I used a simple lighttpd log analyzer called Visitors (that apparently was written by antirez of Redis fame!) to see who was actually looking at my website, and why.
And of course, besides a couple of visits from people who got here from Facebook and a dozen visits from myself to bump up the numbers, there was someone with a Japanese IP probing for various Web control panels at all the URLs people usually put them on.
How did this happen? My guess is there's a bot somewhere that sends out HTTP GET requests to everything in a given IP address range (note they didn't use a domain name) -- if a host responds with a nice page, it can then try poking one of the hundreds of vulnerabilities that, say, phpMyAdmin has.
So this is one of the benefits of this blog being just a static set of HTML files generated from some Markdown.
This is sort of similar to when one runs sshd with the default settings on a publicly-routable machine and then gets really surprised when their authentication logs show hundreds of attempts per day to ssh in as root. This is why people recommend changing the SSH port from the default one -- not because it boosts security, but because it reduces the number of nuisance entries in the log.
There aren't many of these and I'm enjoying the company for now, but I'm considering redirecting requests to these URLs to something naughty. And would I have been able to see and enjoy all of this if I'd just popped my blog on GitHub Pages or something?
After a few days of tinkering with Hakyll, migrating my pages from WordPress and making sure everything looks okay -- as well as dealing with stuff like compiling Haskell on a box with 512MB of RAM and buying the cheapest domain name I could find -- this is my new home: a statically-generated blog. All new posts (ha-ha, as if you expected any) will go here, though some images will still be hosted on WordPress. Let's see how it goes.
And here's an equivalent announcement on WordPress that links back to this blog.
Look what I found in my drafts folder. Welcome back to project Morrowind.
The nice visualization of where Aryon could be was very close now. I went with the stupidest approach: go through all the pixels on the map, convert each one into a point in the game world and find how long it would take Aryon to get there. This uses the method I mentioned previously: go through all the points in the graph we know the shortest travel time to, and pick the one minimising the total travel time (the shortest time to get to that point, plus the time to walk from it to the destination).
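In code, the per-pixel query is just a minimum over the known graph points. A minimal sketch, assuming dist maps each exterior graph point to its shortest travel time from Aryon (the walking speed constant is derived in the previous post):

import math

WALK_UNITS_PER_HOUR = 12000  # 100 units/real-second, with game time running 30x faster

def travel_time(dist, destination):
    # Fastest route: teleport/ride to some known graph point, then walk
    # the rest in a straight line (Aryon can levitate over any terrain).
    return min(
        time_to_point + math.dist(point, destination) / WALK_UNITS_PER_HOUR
        for point, time_to_point in dist.items()
    )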
Except I forgot this was Python, and for each point on the map I was going to go through about 2400 possible routes through exterior points. And there were 1650x1900 = about 3 million points. Sure, I could be smart about it and use various optimisations (like coalescing exterior points that are close to each other and treating them as one, exploiting the triangle inequality (as mentioned in the previous post), looking at 2x2 blocks on the map instead of individual pixels, or using all 4 cores of my CPU instead of one). Or I could farm it out to a C++ program.
So I dumped to a file the list of known exterior points with the times of the shortest routes to them, as well as the in-game coordinates of the 3-ish million map points I was interested in. The program would take those and spit out, for each sought coordinate, the shortest time it would take Aryon to get there from his tower. In fact, it took 40 lines of C++ and ran in about 10 seconds. It's pretty amazing how fast you can be when you speak to the bare metal.
I then used matplotlib's contour plot to visualize the heatmap I got. I didn't manage to get it to actually overlay on the map in the map's original resolution, but the wizards were still extremely impressed and said that I should speak to them whenever I was interested in seed funding for my startup.
So this actually makes sense. There's a 2h circle around Aryon's home (the northeast portion of the island), from where he could either walk or use Divine Intervention to teleport to Wolverine Hall (on an island east of Vvardenfell). Wolverine Hall has a Mages' Guild, which means he could instantaneously get to four other major towns (the blob along the west edge of the island). So there are quite a few places he could reach in 2 hours!
After that, he would have to take the Silt Strider or a boat, which would slow him down. In 4 hours he would barely be able to reach Gnisis (northwest corner of the island) or Maar Gan (the little arc at the top of the 4h contour around the main population centres). He, of course, could walk from his original location for 4 hours but he wouldn't get very far.
In 6 hours he could be anywhere on the island and in 8 he would be able to reach the northern edges of Dagon Fel, a small island north of Vvardenfell. Finally, in about 11 hours he could very possibly be having breakfast with Big Head in the most desolate corner of Morrowind. Perhaps he had some business there?
The wizards said the last time they ever saw Aryon was at about 2am, so he'd been gone for almost 10 hours by that point. Luckily, as we were trying to figure out whether he would deliberately take the most efficient route to get as far away from his tower as possible, we heard a loud noise from a nearby wardrobe, and a sleeping but still alive Aryon fell out of it.
In the end, he loved my contour plot as well and hung it up on his wall. Some people say the tower steward still uses it to track down people who go missing in action during Aryon's wild parties.
Next year on project Morrowind, we'll talk about my assignment with Vvardenfell Office for National Statistics to make sense of the island's demographics.
Welcome back to project Morrowind, in which we use technology to oppress people for our own political gains.
A couple of hungover Telvanni wizards came by my house this Saturday morning. They had gone to Master Aryon's tower the night before for a round of drinks, which quickly escalated into several rounds of drinks. Long story short, Aryon managed to wander off somewhere and hasn't been seen since. Even worse, a Council meeting was supposed to take place next Monday, and Aryon not attending it would be disastrous.
The wizards wondered if I could map out the locations Aryon might possibly be in so they would be able to better concentrate their agents' efforts across various cities in Vvardenfell and recover him before the meeting.
Imagining all kinds of blog posts I could write about this, I agreed.
I first had to alter the weights of the edges in the travel graph, since in actual game time, travel by silt strider or boat isn't instantaneous. But it's easy to calculate from the distance: the travel speed is a game setting that defaults to 16000 units per game hour. For example, the distance between Seyda Neen and Balmora is about 55000 units, so if at the beginning of the game you decided to spend money on public transport instead of walking, you would get to Balmora and finish your first quest in less than 3.5 game hours.
Determining the walking time between locations also required some digging. The minimum walking speed in the game is 100 game units per real-world second, and game time by default flows 30 times faster than real time. So walking 16000 units would take about 16000 / 100 * 30 / 3600 = 1h20m of game time. As you can see, this is not much slower than taking the silt strider, and if you ever saw one you would realise why.
Obviously, if our travel NPC has "Guild Guide" in his class name, traveling with him doesn't take any time - because magic.
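A sketch of how those edge weights might be computed (the provider attributes are hypothetical):

import math

TRANSPORT_UNITS_PER_HOUR = 16000        # the game setting for boats and silt striders
WALK_UNITS_PER_HOUR = 100 * 3600 / 30   # 100 units/real-second at 30x game time = 12000

def transport_time(src, dst, provider):
    # Travel time in game hours along a service edge of the graph.
    if provider.is_guild_guide:
        return 0.0                      # teleportation takes no time, because magic
    return math.dist(src, dst) / TRANSPORT_UNITS_PER_HOUR

def walk_time(src, dst):
    # Time to walk (or levitate) between two points in a straight line.
    return math.dist(src, dst) / WALK_UNITS_PER_HOUR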
Having rebuilt the graph and re-run Dijkstra on it, we can easily determine how long it would take Aryon to reach any point in the game world, assuming he uses the fastest route: go through all the points in the graph we know the shortest travel time to, and find the one for which the total travel time (the shortest time to travel to that point, plus the time to walk from it to the destination) is the smallest.
There is an optimisation I haven't done: we actually only care about points in the graph that can be reached by some means other than plain walking. Consider this: if the shortest path to a point is formed by first teleporting to some point A, then walking to point B and finally walking on to point C (all in straight lines), why not walk from A directly to C? (We're assuming here that Aryon can levitate and move between points as the crow flies, so any 3 exterior points obey the triangle inequality.)
But of course just giving the Telvanni wizards a list of in-game coordinates would be a faux pas. They required a map, and a map I would provide. An affine map, of all things.
The problem here is that we want a way to convert a pair of pixel coordinates on the game map into coordinates in the game world. Luckily, this transformation has an important property: a line between any two points on the game map is also a line in the actual world. Such transformations are called affine: they can be composed out of primitive operations like translation, rotation, scaling and reflection.
The good news is, they can be represented by a matrix product.
$$ \begin{pmatrix}x_{GAME} \\ y_{GAME} \\ 1 \end{pmatrix} = M \begin{pmatrix}x_{MAP} \\ y_{MAP} \\ 1\end{pmatrix} $$
So if we have a pair of map coordinates and this 3x3 matrix M, we'll be able to calculate the actual in-game coordinates, and vice versa. The third component of the vector being 1 is an ugly hack that allows us to encode translations (movement), since otherwise the vector (0, 0) on the map would map (he-he) to the vector (0, 0) in the game. More on Wikipedia.
How do we find such a matrix? Well, we can use it to transform several vectors at the same time:
$$ \begin{pmatrix}x_{GAME, 1} & x_{GAME, 2} & x_{GAME, 3} \\ y_{GAME, 1} & y_{GAME, 2} & y_{GAME, 3} \\ 1 & 1 & 1 \end{pmatrix} = M \begin{pmatrix}x_{MAP, 1} & x_{MAP, 2} & x_{MAP, 3} \\ y_{MAP, 1} & y_{MAP, 2} & y_{MAP, 3} \\ 1 & 1 & 1 \end{pmatrix} $$
And (by inverting the matrix on the right and multiplying the whole equation by it) this can be rewritten to
$$ M = \begin{pmatrix}x_{GAME, 1} & x_{GAME, 2} & x_{GAME, 3} \\ y_{GAME, 1} & y_{GAME, 2} & y_{GAME, 3} \\ 1 & 1 & 1 \end{pmatrix} \begin{pmatrix}x_{MAP, 1} & x_{MAP, 2} & x_{MAP, 3} \\ y_{MAP, 1} & y_{MAP, 2} & y_{MAP, 3} \\ 1 & 1 & 1 \end{pmatrix}^{-1} $$
Essentially, if we get 3 sets of coordinates in the game world and on the map, we can use those to recover our mapping. These 3 points also can't be on the same line because then the determinant of the matrix of map coordinates is zero and it doesn't have an inverse.
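With NumPy, recovering M is a one-liner. A minimal sketch (the reference points below are placeholders, not the ones I actually used):

import numpy as np

# Three non-collinear reference points: map pixel coords and game world coords.
map_pts  = [(726, 509), (1147, 555), (320, 1260)]       # placeholders
game_pts = [(-2000.0, 150000.0), (85730.77, 117960.3), (-77000.0, -60000.0)]

# Stack into 3x3 matrices with a row of ones for the homogeneous coordinate.
P = np.vstack([np.array(map_pts, dtype=float).T, np.ones(3)])
G = np.vstack([np.array(game_pts, dtype=float).T, np.ones(3)])

M = G @ np.linalg.inv(P)   # since G = M P, we get M = G P^(-1)

def map_to_game(x, y):
    # Append the 1, multiply, drop the homogeneous component.
    return (M @ np.array([x, y, 1.0]))[:2]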
So I picked the game coordinates of 3 locations that were fairly well spread (to minimize the error) and tried to pinpoint the corresponding pixel coordinates on the map.
In the end this is the matrix I found:
$$ M = \begin{pmatrix}185.38 & -0.43327 & -126720 \\ 1.2986 & -183.72 & 218470 \\ 0 & 0 & 1 \end{pmatrix} $$
To test it out, I plotted the three reference points I used to calculate it (in red), as well as Aryon's initial location (in blue): the exterior door to his house is located at game coordinates (85730.77, 117960.3, 5081.284), which the matrix mapped to (1147.33, 555.21).
I can see your house from here! (the actual map comes from http://thegamersjournal.com/rpg/pc/morrowind/maps/map_rendered_m.jpg)
This edition of project Morrowind was overdue by about two months, so I sadly have to stop here. But next time I'll definitely tell you how we managed to track Aryon and save the Telvanni council from collapse.
Today on project Morrowind, we take decades of research into rendering 3D scene descriptions to beautiful photorealistic worlds and throw it away.
I finally give up on any nontrivial formatting in WordPress and hope it can't mangle text in pictures.
There are a few catches to parsing cells in Morrowind, the first being how to uniquely name one. It's easy with interiors, since each interior has a NAME field, like "Uncle Sweetshare's Workshop" (and that's not a joke). However, there are roughly three types of exteriors. The first is cities and notable landmarks: like the example in the picture, these have a RGNN (region name), a NAME and the coordinates of where the massive exterior square cell is located. However, there are many Vivec cells (since Vivec is really big), so we'll use the grid coordinates as well to identify one.
Secondly, wilderness cells in named regions, like other parts of the Ascadian Isles Region, will be named using just the region name and their exterior coordinates.
Finally, there are exterior cells with neither a cell name nor a region name, just coordinates -- those are named "Wilderness [x, y]" in the TES Construction Set, so let's use that as well.
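A sketch of a naming function covering the three cases (the cell structure is my own simplification):

def cell_name(cell):
    # Unique, human-readable name for a cell, mirroring the three cases.
    if cell.is_interior:
        return cell.name                              # e.g. "Uncle Sweetshare's Workshop"
    if cell.name:                                     # city or notable landmark
        return "%s [%d,%d]" % (cell.name, cell.x, cell.y)
    if cell.region:                                   # wilderness in a named region
        return "%s [%d,%d]" % (cell.region, cell.x, cell.y)
    return "Wilderness [%d,%d]" % (cell.x, cell.y)    # nameless wilderness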
Each one of these cantons is a city by itself and they are all joined by bridges. Also, it's on the water. Who wouldn't want to live here? (from http://www.uesp.net/wiki/File:MW_Map_Vivec.jpg)
The next step is parsing out the contents of each cell, which are basically references: the ID of an object, plus data about the given instance of it (for example, its position, its number of hit points (for an NPC) or its possible destinations (for doors or NPCs offering travel services)).
Oh, also, references can sometimes be deleted -- but instead of being removed from the data file, they are just marked as deleted. This is probably because actually wiping them from the file would mean rewriting the whole file (since all the pointers in it would have to be recalculated) -- a joke now, but something that would have taken way too many resources back in 2002.
One thing that should be noted is that the actual object definitions can appear before or after they are referenced, so we have to parse the file in two passes: first recording just the reference IDs as strings, then linking those to actual Python objects.
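Schematically, the two-pass linking might look like this (parse_records stands in for the actual low-level parser):

def load_world(path):
    # Definitions can appear after the cells that reference them,
    # so collect everything first and resolve the links afterwards.
    records = list(parse_records(path))

    # Pass 1: index every object definition by its string ID.
    objects = {r.object_id: r for r in records if r.object_id}

    # Pass 2: resolve the string IDs inside each cell into Python objects.
    for cell in (r for r in records if r.type == 'CELL'):
        cell.references = [objects[ref] for ref in cell.reference_ids]

    return records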
Whew, we're done!
In [1]: mages
Out[1]: Vivec, Guild of Mages
In [2]: mages.is_interior
Out[2]: True
In [3]: mages.destinations
Out[3]: [(Vivec, Foreign Quarter Plaza, (-826.792800, 357.833600, 309.695400))]
I haven't included the locations that NPCs in the cell can take the player to (like the teleportation services) in the cell destinations' list - it only lists where the doors in the cell lead to.
The full version is at https://mildbyte.files.wordpress.com/2016/03/graph-2016-2.png, but beware - it's about 10MB large and might break your browser's assumptions about how large PNGs can get.
But even with just this information, we can create cool-looking graphs. For example, I produced the picture above with GraphViz: the nodes are cells, joined with an edge if there's a door between them. The large clump in the middle is Vivec. There are some smaller clusters dotted around, which are slightly smaller cities (like Balmora, Caldera or Ald'ruhn). There are also some hub-and-spoke formations, the hub being a named exterior cell and the cells joined to it being the interiors accessible through it -- these are smaller settlements.
Yet this is not what we came here for. We want to know how to get from point A to point B while exploiting everything this world has to offer us -- not just the doors. So let's talk about how we will define the actual travel graph.
Clearly, there's an infinite number of points in the game, but we don't need to look at them all. We only need to consider our start point, our end point and all potential points of interest our travel can go through. So we can easily define the nodes in our graph:
For every object offering travel "services" (NPCs/doors), the object's location and the travel destination.
The location of every Divine/Almsivi Intervention marker.
That's it. A description of our route would then be something along the lines of "From the starting point, go to this door (point 1), go through it to a different cell (point 2), walk to the person offering travel services (point 3), travel to a different city (point 4), walk to your destination (point 5)". So let's see how the nodes in the graph can be joined.
A travel "service" provider's location (a door or an actual teleporter/silt strider driver NPC) is joined to its destination with an edge of length 0.
If two nodes are in the same cell (or both are in the exterior world), they're joined with an edge of length proportional to the distance between them (so we ignore, say, mountains in the exterior world or impassable obstacles in the interior).
Every single node is joined to the nearest Temple/Imperial Fort to it (using the straight as-the-crow-flies Euclidean distance for exteriors or the distance from the nearest exterior cell for the interiors).
With this method, I ended up with a travel graph that had 6424 vertices and 16065 teleport-only edges -- that includes doors, transport services and Intervention spells, but not direct within-cell travel, since in that case it's very easy to find the distance between any two points on the fly.
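A condensed sketch of the graph construction under those rules (the node and provider structures are simplifications of mine):

from collections import defaultdict

# node -> list of (neighbour, edge length); within-cell walking edges
# are left out and computed on the fly from the Euclidean distance.
graph = defaultdict(list)

def add_edge(a, b, length=0.0):
    graph[a].append((b, length))

# Travel service providers and doors: zero-length edge to the destination.
for provider in travel_providers:
    add_edge(provider.location, provider.destination)

# Intervention spells: every node can jump to its nearest Temple/Fort marker.
for node in all_nodes:
    add_edge(node, nearest_intervention_marker(node))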
One interesting thing about shortest-paths algorithms is that finding the shortest path between two nodes (single-pair shortest-path) is as computationally expensive (has the same asymptotic complexity) as finding the shortest path from a fixed node to everywhere in the graph (single-source shortest-path). Intuitively, this is because our ideal path in a single-pair problem could include any point in the graph and so we are calculating the shortest path to that point from our source anyway.
Dijkstra's algorithm works pretty well for these kinds of things, finding the shortest paths from a single source to everywhere in O(|V|²) (where |V| is the number of nodes in the graph). This can be improved by storing unexamined vertices in a Fibonacci heap, which can decrease a vertex's tentative distance in O(1) amortised time, giving a total complexity of O(|E| + |V|log|V|). I didn't think just 6000 vertices would make the search take too much time, so I didn't implement one, but perhaps I will later.
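For reference, a sketch of the textbook version with Python's built-in binary heap (O(|E| log |V|), plenty for a graph this size), using the adjacency lists from above:

import heapq

def dijkstra(graph, source):
    # Returns the fastest time from source to every node, plus each
    # node's predecessor on its fastest route (for reconstruction).
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float('inf')):
            continue  # a stale entry; we've already found a better route
        for neighbour, length in graph[node]:
            nd = d + length
            if nd < dist.get(neighbour, float('inf')):
                dist[neighbour] = nd
                prev[neighbour] = node
                heapq.heappush(heap, (nd, neighbour))
    return dist, prev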
I used Aryon as a guinea pig for this experiment: he becomes your main questgiver in the latter stages of the House Telvanni questline and happens to live in a fairly isolated tower in the middle of nowhere, with almost no travel services. So while you can use Mark/Recall to get to him, his quests can send you across the game world to places that are nontrivial to reach quickly.
After unleashing Dijkstra upon this graph (which admittedly took 10 minutes, slightly too long), we get two mappings: first, for each point, the weight of the cheapest (here, fastest) route from Aryon to that point; second, for each point, the previous point on that fastest route. Hence we can easily reconstruct the optimal route to any point of interest by following those links.
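Reconstructing a route is just walking the predecessor links backwards; a sketch of what the chain_prev used below might look like:

def chain_prev(prev, target):
    # Follow predecessor links from the target back to the source,
    # then flip the chain so it reads start-to-finish.
    route = [target]
    while route[-1] in prev:
        route.append(prev[route[-1]])
    return list(reversed(route))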
For example, how do we get from Aryon to Hlormaren, a Dunmer stronghold on the other edge of the island? Like this:
In [35]: target
Out[35]: (Hlormaren, Dome, (384.000000, -408.000000, 384.000000))
In [36]: route = chain_prev(prev, target)
In [37]: route
Out[37]:
[(Tel Vos, Aryon's Chambers, (3905.517000, 2935.360000, 15752.000000)),
(Wolverine Hall, [18,3], (148881.700000, 28453.790000, 1495.193000)),
(Wolverine Hall, [18,3], (148880.000000, 28360.000000, 1464.000000)),
(Sadrith Mora, Wolverine Hall: Imperial Shrine, (-64.000000, -96.000000, 0.000000)),
(Sadrith Mora, Wolverine Hall: Imperial Shrine, (-320.000000, -224.000000, 32.000000)),
(Sadrith Mora, Wolverine Hall, (2560.000000, 4064.000000, 14240.000000)),
(Sadrith Mora, Wolverine Hall, (2560.000000, 3968.000000, 14528.000000)),
(Sadrith Mora, Wolverine Hall: Mage's Guild, (448.000000, 192.000000, 160.000000)),
(Sadrith Mora, Wolverine Hall: Mage's Guild, (-70.134480, 434.521700, 65.990490)),
(Balmora, Guild of Mages, (-755.896600, -1002.733000, -644.627900)),
(Balmora, [-3,-2], (-22130.610000, -8582.789000, 889.572800)),
(Hlormaren, [-6,-1], (-43200.000000, -3448.000000, 3072.000000)),
(Hlormaren, Dome, (320.000000, -256.000000, 402.000000)),
(Hlormaren, Dome, (384.000000, -408.000000, 384.000000))]
There's a disadvantage here: we don't actually see the method of travel between nodes, so this travel plan takes some game knowledge to decipher. Basically, we want to use a Divine Intervention spell to go to the Wolverine Hall fort, then enter the Imperial Shrine, unceremoniously walk through it into the fort's interior, enter the Mage's (sic) Guild, get ourselves teleported to Balmora and then walk/fly from there to Hlormaren.
How about getting to Sarys Ancestral Tomb, which is located on a remote island on the southwest corner of the map? Easy.
[(Tel Vos, Aryon's Chambers, (3905.517000, 2935.360000, 15752.000000)),
(Wolverine Hall, [18,3], (148881.700000, 28453.790000, 1495.193000)),
(Wolverine Hall, [18,3], (148880.000000, 28360.000000, 1464.000000)),
(Sadrith Mora, Wolverine Hall: Imperial Shrine, (-64.000000, -96.000000, 0.000000)),
(Sadrith Mora, Wolverine Hall: Imperial Shrine, (-320.000000, -224.000000, 32.000000)),
(Sadrith Mora, Wolverine Hall, (2560.000000, 4064.000000, 14240.000000)),
(Sadrith Mora, Wolverine Hall, (2560.000000, 3968.000000, 14528.000000)),
(Sadrith Mora, Wolverine Hall: Mage's Guild, (448.000000, 192.000000, 160.000000)),
(Sadrith Mora, Wolverine Hall: Mage's Guild, (-70.134480, 434.521700, 65.990490)),
(Vivec, Guild of Mages, (3.520470, 1391.325000, -385.853300)),
(Ebonheart, [1,-13], (8703.056000, -100602.000000, 1383.638000)),
(Bitter Coast Region, [-5,-9], (-37659.390000, -69956.550000, 322.489000)),
(Sarys Ancestral Tomb, (7028.375000, 4415.659000, 15001.790000))]
We want to again go to the Sadrith Mora Guild and get teleported, this time to Vivec. Then we cast Divine Intervention one more time and end up in Ebonheart, which is a swim away from the island on which the tomb is located.
Next time on project Morrowind, we'll try to make the planner's advice slightly more readable by plotting it on the game map. And maybe plot other things on the map. There might even be some source code!