It's been two years since project Morrowind (which apparently now has been made an official speedrun category). During that time, I've been working on another exciting project and it's time to finally announce it.
Today, we are delighted to launch Splitgraph, a tool to build, extend, query and share datasets that works on top of PostgreSQL and integrates seamlessly with anything that uses PostgreSQL. It brings the best parts of Git and Docker, tools well-known and loved by developers, to data science and data engineering, and allows users to build and manipulate datasets directly on their database using familiar commands and paradigms.
Splitgraph launches with first-class support for multiple data analytics tools and access to over 40000 open government datasets on the Socrata platform. Analyze coronavirus data with Jupyter and scikit-learn, plot nearby marijuana dispensaries with Metabase and PostGIS or just explore Chicago open data with DBeaver and do so from the comfort of a battle-tested RDBMS with a mature feature set and a rich ecosystem of integrations.
Feel free to check out our introductory blog post or the frequently asked questions section!
Let's consider the life of an increasingly prominent type of worker in the industry: the data scientist/engineer. Data is the new oil and this person is the new oil rig worker, plumber, manager, owner and operator. This is partially based on my own professional experience and partially on over-exaggerated horror stories from the Internet, just like a good analogy should be.
Came to work to a small crisis: a dashboard that our marketing team uses to help them direct their door hinge sales efforts (yes, we sell door hinges. It's a very niche but very lucrative business) is outputting weird numbers. Obviously, they're not happy. I had better things to do today, but oh well. I look at the data that populates the dashboard and start going up the chain through a few dozen random ETL jobs and processes that people before me wrote.
By lunchtime, I trace this issue down to a fault in one of our data vendors (that we buy timber price data from: apparently it's a great predictor of door hinge sales): overnight, they decided to change their conventions and publish values of 999999 where they used to push out a NULL. I raise a support ticket and wait for it to be answered. In the meantime, I enlist a few other colleagues and we manage to repair the damage, rerunning parts of the pipeline and patching data where needed.
Support ticket still unanswered (well, they acknowledged it but said they are dealing with a sudden influx of support tickets, I wonder why) but at least we have a temporary fix.
In the meantime, I start work on a project that I wanted to do yesterday. I read a paper recently that showed that another predictor of door hinge sales is council planning permissions. The author had scraped some data from a few council websites and made the dataset available on his Web page as a CSV dump. Great! I download it and, well, it's pretty much what I expected it to be: no explanation of what each column means and what its ranges are. But I've seen worse. I fire up my trusty Pandas toolchain and get to work.
By the evening, there's nothing left of the old dataset: I did some data patching and interpolation, removed some columns and combined some other ones. I also combined the data with our own historical data for door hinge sales in given postcodes. In conjunction with this data, the planning permission dataset indeed gives an amazing prediction accuracy. I send the results to my boss and go home happy.
This is the happiest I'll be this week.
The timber sales data vendor has answered our support ticket. In fact, our query made them inspect the data closer at which point they realised they had some historical errors in the data which they decided to rectify. The problem was that they couldn't send us just the rows that were changed and instead linked us to an SQL dump of the whole dataset.
I spend the rest of the day downloading it (turns out, there's a lot of timber around) and then hand-crafting SQL queries to backfill the data into our store as well as all the downstream components.
In the meantime, marketing together with my boss has reviewed my results and is really excited about council planning permission data. They would like to put it into production as soon as possible.
I send some e-mails to the author of the paper to find out how they generated the data and if they would be interested in sharing their software, whilst also trying to figure out how to plumb it into our pipeline so that the projections can make it into their daily reports.
Boss is also unhappy about our current timber data vendor and is wondering if I could try out a dataset provided by another vendor. Easier said than done, as now I have to somehow reproduce the current pipeline on my machine, swap the new dataset in and rerun all our historical predictions to see how they would have fared.
The council planning permission data project is probably not happening. Firstly, it's because the per-postcode sales data that I used in my research is in a completely different database engine that we can't directly import into our prediction pipeline. But in worse news, the author of the paper doesn't really remember how he produced the data and whether his scraping software still works.
After a whole day of searching, I did manage to find a data vendor that seems to be doing something similar, though with no pricing information, no nothing. I drop them an e-mail and go home.
Come to work to learn about an overnight production issue that the operators managed to mitigate but now I have to actually fix. Oh well. I get the tag of the container we had running in production and do a docker pull. I fire it up locally and use a debugger (and the container's Dockerfile) to locate the issue: it's a bug in an open-source library that we're using. I do a quick scan of GitHub issues to see if it's been reported before. Nope. I raise an issue and also submit a pull request that I think should fix it.
In the meantime, the tests I run locally for that library pass with my fix so I change the Dockerfile to build the image from my patched fork. I do a git push on the Dockerfile, our CI system builds it and pushes the image out to staging. We redirect some real-world traffic to staging and it works. We do a rolling upgrade of the prod service. It works.
I spend the rest of the day reading Reddit.
GitHub issue still unanswered, but we didn't have any problems overnight anyway.
I have some more exciting things to do: caching. Some guys from Hooli have open-sourced a pretty cool load-balancing and caching proxy that they wrote in Go, and it's a perfect fit for an internal service of ours that has always had performance issues.
They provide a Docker container for their proxy, so I quickly get the docker-compose.yml file for our service, add the image to the stack and fiddle around with its configuration (exposed via environment variables) to wire it up to the service workers. I run the whole stack up locally and rerun the integration tests to hit the proxy instead. They pass, so I push the whole thing out to staging. We redirect some requests to hit the staging service in order to compare the performance and correctness.
I spend the rest of the day reading Reddit.
The GitHub issue has been answered and my PR has been accepted. The developer also found a couple of other bugs that my fix exposed, which have also been fixed now. I change our service to build against the latest tag, build on CI, tests pass.
I look at the dashboards to see how my version of the service did overnight: turns out, the caching proxy reduced the request latency by about a half. We agree to push it to prod.
I spend the rest of the week reading Reddit.
There are a lot of tools, workflows and frameworks in software engineering that have made developers' lives easier and that, paradoxically, haven't been applied to the problem of data processing.
In software, you do a git pull and bring the source code up to date by having a series of diffs delivered to you. This ability to treat new versions as patches on top of old versions has opened up more opportunities like rebasing, pull requests and branching, as well as inspecting history and merge conflict resolution.
None of this exists in the world of data. Updating a local copy of a dataset involves downloading the whole thing again, which is crazy. Proposing patches to datasets, having them applied and merging several branches is unheard of, and yet these are common workflows in data science: why can't I maintain a fork of a vendor's data with my own fixes on top and then do an occasional git pull --rebase to keep my fork up to date?
In software, we have learned to use unique identifiers to refer to various artifacts, be it Git commit hashes, Docker image hashes or library version numbers. When someone says "there's a bug in version 3.14 (commit 6ff3e105) of this library", we know exactly which codebase they refer to and how we can get and inspect it.
This doesn't happen with data pipelines: most of the time we hear "the data we downloaded last night was faulty but we overwrote chunks of it and it's propagated downstream so I've no idea what it looks like now". It would be cool to be able to refer to datasets as single, self-contained images and for any ETL job to be just a function between images: if it's given the same input image, then it will produce the same output image.
To expand on that, Docker has made this "image" abstraction even more robust by packaging all of the dependencies of a service together with that service. This means that this container can be run from anywhere: on a developer's machine, on a CI server, or in production. By giving the developers tools that make replicating the production experience easier, we have decreased the distance between development and production.
I used to work in quant trading and one insight I got from that is that getting a cool dataset and finding out that it can predict the returns on some asset is only half of the job. The other half, less talked about and much more tedious, is productionizing your findings: setting up batch jobs to import this dataset and clean it, making sure the operators are familiar with the import process (and can override it if it goes wrong), writing monitoring tools. There's the ongoing overhead of supporting it.
And despite that, there is still a large distance between research and production in data science. Preparing data for research involves cleaning it, importing it into, say, Pandas, figuring out what every column means, potentially hand-crafting some patches. This is very similar to old-school service setup: do a sudo apt-get install of the service, spend time setting up its configuration files, spend time installing other libraries and by the end of the day not remember exactly what you did or how to reproduce it.
Docker made this easier by isolating every service and mandating that all of its dependencies (be it Linux packages, configuration files or any other binaries) are specified explicitly in a Dockerfile. It's a painful process to begin with but it results in something very useful: everyone now knows exactly how an image is made and its configuration can be experimented on. One can swap out a couple of apt-get statements in a Dockerfile to install, say, an experimental version of libc and get another version of the same service that they can compare against the current one.
In an ideal world, that's what would happen with data: I would write a Dockerfile that grabs some data from a few upstream repositories, runs an SQL JOIN on tables and produces a new image. Even better, I should be able to have this new image kept up to date and rebase itself on any new changes in the upstream. I should be able to rerun this image against other sources and then feed it into a locally-run version of my pipeline to compare, say, the prediction performance of the different source datasets.
We are slowly coming to a set of standards on how to distribute software, which has reduced onboarding friction and allowed people to quickly prototype their ideas. One can do a docker pull, add an extra service to their software stack and run everything locally to see how it behaves within minutes. One can search for some software on GitHub and git clone it, knowing that it probably has fairly reproducible build instructions. Most operating systems now have package managers which provide an index of software that can be installed on that system as well as allow the administrator to keep those packages up to date.
There is a ton of open data out there, with a lot of potential hidden value, and most of it is unindexed: there's no way to find out what it is, where it is, who maintains it, how often it's updated and what each column means. In addition, all of it is in various ad hoc formats, from CSVs to SQL dumps, from HDF5 files to unscrapeable PDF documents. For each one of these datasets, an importer has to be written. This adds friction to onboarding new datasets and slows down innovation.
One thing that Git and Docker are popular for is that they're unopinionated: they don't care about what is actually being versioned or run inside of the container. If Git only worked with a certain folder structure or required one to execute system calls to perform checkouts or commits, it would never have taken off. That is, git doesn't care whether what it's versioning is written in Go, Java, Rust, Python or is just a text file.
Similarly with Docker, if it only worked on artifacts produced by a certain programming language or required every program to be rewritten to use Docker, that would slow down adoption a lot if not outright kill the tool.
Both of these tools build on an abstraction that has been around for a while and that other tools use: the filesystem. Git enhances tools that use the filesystem (such as the IDE or the compiler) by adding versioning to the source code. Docker enhances the applications that use the filesystem (that is, all of them) by isolating them and presenting each one with its own version of reality.
Such an abstraction also exists in the world of data: it's SQL. A lot of software is built on top of it and a lot of people, including non-technical ones, understand SQL and can write it. And yet most tools around want users to learn their own custom query language.
All these anecdotes and comparisons show that there are a lot of practices that data scientists can borrow from software engineering. They can be combined into three core concepts:
Data should be versioned and kept up to date by having changes delivered to you, like a git pull, not by downloading a new data dump and rerunning one's import scripts. Datasets should be self-contained, reproducible images, with data transformations acting as functions between those images. And all of this should build on the abstraction that data tools already speak: SQL.

Over the past two years, Miles Richardson and I have been building something in line with this philosophy.
Splitgraph is a data management tool and a sharing platform inspired by Docker and Git. It is currently based on PostgreSQL and allows users to create, share and extend SQL schema images. In particular:
Users can run familiar commands like commit, checkout, push and pull to produce and inspect new commits to a database. Whilst an image is checked out into a schema, it's no different from a set of ordinary PostgreSQL tables: any other tool that speaks SQL can interact with it, with changes captured and then packaged up into new images.

We have already done a couple of talks about Splitgraph: a short one at a local Docker meetup in Cambridge (slides) talking about parallels between Docker and Splitgraph and a longer one at a quantitative hedge fund AHL (slides) discussing the philosophy and the implementation of Splitgraph in depth. A lot has changed since then but they're still a good introduction to our philosophy.
Interested? Head on to our quickstart guide or check out the frequently asked questions section to learn more!
Last time, I showed a way to generate a decent route through the quest graph and came up with a rough character progression that can be used to quickly complete all faction questlines in Morrowind.
Today, I'll analyse Mark/Recall and other miscellaneous transport modes, deal with an interesting Temple quest, showcase the final route and finally publish the code for the route planner.
Here's a map of the route for those of you who like migraines:
The points of interest here are joined with colour-coded lines. White is walking or flying, red is Mages Guild teleports, blue is Almsivi/Divine Intervention spells, yellow is travel by boat/silt strider and green is Recalls.
Mark/Recall are a pair of spells in Morrowind that allow the player to teleport around the game world. Casting Mark remembers a given place and Recall teleports the player to the last position the Mark was cast. Only one Mark can be active at a given time: casting it again removes the previous Mark.
Imagine casting Mark at the beginning of a dungeon (unlike Skyrim, Morrowind dungeons don't have a quick shortcut back to the start of the dungeon from its end) and teleporting there, or placing a Mark next to an NPC providing transport services. This could shave a considerable amount of time from the route.
There are several questions here. Firstly, given a route, what's the most efficient arrangement of Mark and Recall casts for it? Secondly, can we change the optimiser to take the Mark/Recall spells into account? The best route through the quest graph might no longer be the best once Mark/Recall spells are allowed.
For now, imagine we have already settled on a route and can only place a Mark once in the game, in fact only at one node in the quest graph (and not anywhere on the route between nodes). What's the best place for it?
Since we have a matrix of fastest node-to-node travel times, given a Mark position, at each node in the route we can decide whether we want to proceed to the next node directly or by first teleporting to the Mark and then going to the next node. Try placing a Mark at each of the nodes in the route and see which one gives the fastest overall time:
def get_best_mark_position(route):
    # For every candidate Mark node, calculate the total route cost and return the cheapest.
    return min(
        (
            # can't use the mark until we've placed it: walk the route up to node i directly
            sum(get_node_distance(r1, r2) for r1, r2 in zip(route[:i], route[1:i + 1]))
            # after placing the mark, we have a choice of recalling to it and going to the next node
            # or going to the next node directly
            + sum(
                min(get_node_distance(r, r2), get_node_distance(r1, r2))
                for r1, r2 in zip(route[i:], route[i + 1:])
            ),
            i,
            r,
        )
        for i, r in enumerate(route)
    )
I ran that and found out that by far the best position for a single Mark was right at the questgiver who's standing next to the Mages Guild teleport. This makes a lot of sense: a Recall to the Mages Guild gives the player instant access to 4 cities. Coupled with Intervention spells, this lets the player reach essentially any town in the game within a matter of minutes, if not seconds.
Now, again, given a single route through the quests, let's allow the player to place multiple Marks so that they can Recall to the last one they placed.
I first tried the same idea that I did for the route optimiser: take multiple possible arrangements of Marks (basically a Boolean array of the same length as the route that determines whether, at each node, we place a Mark there or not after we visit it), mutate each one (by randomly adding or removing Marks) and score it (sum up the decreased travel costs by considering at each node whether it's better to proceed to the next node directly or via a previous Mark).
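In sketch form (reusing the get_node_distance helper from the snippet above; this is an illustration of the idea rather than the exact code I ran), the scoring and mutation steps look like this:

    import random

    def score_marks(route, marks):
        # marks[i] is True if we place a Mark at route[i] right after visiting it.
        total, last_mark = 0, None
        for i, (r1, r2) in enumerate(zip(route, route[1:])):
            if marks[i]:
                last_mark = r1
            direct = get_node_distance(r1, r2)
            # If a Mark has been placed, we can also Recall to it and travel from there.
            via_mark = get_node_distance(last_mark, r2) if last_mark is not None else direct
            total += min(direct, via_mark)
        return total

    def mutate_marks(marks):
        # Randomly add or remove a Mark at one position of the route.
        new = list(marks)
        i = random.randrange(len(new))
        new[i] = not new[i]
        return new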
I let this run for a while but it wasn't giving good results, quickly getting stuck in local minima. A big problem with this approach was that it didn't consider placing Marks in places visited between nodes, which excluded strategies like placing a Mark at the beginning of a dungeon (while a door to the dungeon is a point in the travel graph, it isn't a point in the quest graph).
To do that, I'd have to have a matrix of best travel times between each pair of nodes in the travel graph, not just the quest graph. Given how long my implementation of Dijkstra took to create the matrix for 100 nodes, I wasn't going to get away with reusing it.
Floyd-Warshall is the nuclear option of pathfinding algorithms. Instead of finding shortest paths from a single source like Dijkstra would, it finds the shortest paths between any two vertices in the graph. Not only that, but it does so in \( \Theta(V^3) \), independently of the number of edges, making it perfect for dense graphs.
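The algorithm itself is just a triple loop over the vertices; a minimal Python version of the textbook formulation (a sketch, not the exact implementation used here) looks roughly like this:

    import math

    def floyd_warshall(dist):
        # dist: n x n matrix of direct travel times (math.inf where there's no edge, 0 on the diagonal).
        # After the triple loop, dist[i][j] holds the fastest travel time from i to j via any vertices.
        n = len(dist)
        for k in range(n):
            for i in range(n):
                dik = dist[i][k]
                if dik == math.inf:
                    continue
                for j in range(n):
                    if dik + dist[k][j] < dist[i][j]:
                        dist[i][j] = dik + dist[k][j]
        return dist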
It took about 15 minutes to run my Python implementation of Floyd-Warshall on the coalesced 700-node graph. But this wasn't enough. I realised that coalescing vertices in the same in-game cell into a single one was giving strange results, too: for example, each node had an Almsivi/Divine Intervention edge towards the nearest Temple/Imperial Cult Shrine with a weight considerably larger than zero (due to the fact that the central vertex for that cell was far away from the actual teleportation destination) and I was wondering if that could be skewing the route.
I hence decided to rerun the route planner on the full unprocessed 6500-node graph and rewrote the Floyd-Warshall implementation in C++. It still took 15 minutes to run it, but this time it was on the whole graph. Most of this time, in fact, was spent loading the input and writing the output matrices, since I serialised those into text and not binary.
And by that point I was on a roll anyway and rewrote the route planner in C++ as well. The Python program would now instead export the quest node distance matrix and the dependency graph to a text file. I didn't perform detailed measurements, but it definitely became a couple of orders of magnitude faster.
I tried rerunning the Mark/Recall planner on the fully expanded route (which enumerates each vertex on the travel graph) but by this point, it was getting more and more clear that simply maintaining a Mark at any Mages Guild teleporter was a really good option that was difficult to improve on.
This is a slightly trippy picture, but it's basically a contour plot that shows the average travel time (in real seconds, assuming a travel speed of about 750 units per second, which is achievable with an artifact that I'll talk about later) to any node in the quest graph from any point in the travel graph, interpolated using nearest-neighbour on pixels that didn't map to any points on the travel graph. I also added a travel cost of 5 seconds to public transport and Mages Guild teleporters. This was to account for the time spent in the game's UI as well as to nudge the optimiser into flailing less around multiple towns.
Strictly speaking, I should have actually calculated the average time at each pixel, but this picture is good enough. The colour map here ranges from blue (smallest average travel time) to green (largest). For example, Vivec (south of the game map) has the largest average travel times to any point of interest in the route. This is because the Temple of Vivec (one possible destination of an Almsivi Intervention spell) is on the other side of the city from other transport modes (boats/silt striders/Mages Guild) and so anyone near Vivec would have to first teleport to the Temple and then walk across the city to continue their journey.
On the other hand, despite being basically a wasteland, the southeast corner of the map has good travel connections: this is because a Divine Intervention spell takes the player to Wolverine Hall on the east side, right next door to the Mages Guild.
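Purely as an illustration of the technique (not the code that produced the picture above), a similar map could be put together with scipy's nearest-neighbour interpolation and a matplotlib contour plot, assuming arrays of travel-graph vertex coordinates and their average travel times:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.interpolate import griddata

    def plot_average_travel_times(coords, avg_times, resolution=500):
        # coords: (n, 2) array of travel graph vertex positions on the game map;
        # avg_times: average travel time (seconds) from each vertex to all quest nodes.
        xs = np.linspace(coords[:, 0].min(), coords[:, 0].max(), resolution)
        ys = np.linspace(coords[:, 1].min(), coords[:, 1].max(), resolution)
        grid_x, grid_y = np.meshgrid(xs, ys)
        # Nearest-neighbour interpolation fills pixels that don't map to any travel graph vertex.
        grid = griddata(coords, avg_times, (grid_x, grid_y), method="nearest")
        plt.contourf(grid_x, grid_y, grid, levels=20)
        plt.colorbar(label="average travel time, s")
        plt.show()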
There's a cool quest in Morrowind's Temple questline that involves the player completing a pilgrimage from the southernmost part of the game map to the northernmost. Sounds easy, right? Well, the only problem is that the player can't speak to anyone during the pilgrimage, which means the player can't use any public transport or Mages Guild teleports.
The honest way to do this is to actually walk or levitate the whole distance, which would take a few minutes even with Speed-increasing spells. The mostly-honest way to do this would be casting Divine/Almsivi Intervention spells in strategic places that would teleport the player part of the way between the spheres of influence of different Temples/Imperial Cult shrines. The dishonest way would be casting a Mark at the shrine during a previous visit and simply Recalling there when the pilgrimage starts.
However, the first version of the route planner wasn't really aware of that quest. I had a "Set Mark at Sanctus Shrine" graph node and a "Do the Sanctus Shrine quest" node, but the optimiser wasn't encouraged to put them close together. In the best route it had come up with, those two nodes were far apart and about 3/4 of the route was with the Mark stuck at the Shrine.
Hence, if we want to maintain a Mark at a Mages Guild, we also have to juggle that with having a Mark at the Sanctus Shrine in order to complete the Silent Pilgrimage. So the question now was kind of an inverse one: given that we can teleport to a Guild at any time (except for when the Mark is at the Shrine and so we'd get teleported there instead), what's the best route through the game quests?
I decided to produce two travel graphs: when there's a recall edge to a Mages Guild (it doesn't matter which one, since we can almost instantaneously teleport to any of them once we're there) and when there's a recall edge to the Sanctus Shrine.
The optimiser would get these two versions of the node-to-node distance matrix as well as the instructions specifying which matrix to use when. That way, it could also try to exploit the Mark in the northern part of the game map.
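As a sketch of how a route can be scored against the two matrices (this isn't the optimiser's actual code; the node names and the way the switchover span is located are made up for illustration):

    def route_cost_with_sanctus_mark(route, dist_guild, dist_sanctus,
                                     mark_node="set_sanctus_mark",
                                     pilgrimage_node="tt_silent_pilgrimage"):
        # Between placing the Mark at the Sanctus Shrine and completing the Silent Pilgrimage,
        # Recall leads to the Shrine rather than to a Mages Guild, so a different distance
        # matrix applies on that stretch of the route. (Node names are hypothetical.)
        start, end = route.index(mark_node), route.index(pilgrimage_node)
        return sum(
            (dist_sanctus if start <= i < end else dist_guild)[r1][r2]
            for i, (r1, r2) in enumerate(zip(route, route[1:]))
        )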
The best route it could come up with (not counting time spent in dialogue, combat, training or getting all required items/money at the start at the game) now took about 2500 seconds of real time, which looked quite promising.
There's a mode of transport in Morrowind that I hadn't mentioned at all: Propylon Chambers. They're located inside 10 ancient Dark Elf strongholds that are scattered roughly in a circle around the map. Each stronghold has a Propylon Index that's hidden somewhere in the game world, and discovering a given stronghold's Index allows the player to travel to that stronghold from either of the two adjacent to it.
(from http://stuporstar.sarahdimento.com/other-mods/books-of-vvardenfell/key-to-the-dunmer-strongholds/)
Can they be useful here? After looking at their placement, it sadly doesn't seem so. Firstly, there are very few strongholds that are closer to quest objectives than ordinary towns and secondly, their Indices are often in inconvenient places (for example, the Rotheran Index is located in Rotheran itself).
But perhaps it's worth including Propylon Chambers in the route anyway? To test that, I assumed that the player has all Propylon indices from the beginning and regenerated the travel graph with the addition of teleportation between adjacent Dunmer strongholds. This would provide a lower bound on the route length and show whether there is enough time saving to make getting any of the Indices worthwhile.
Turns out, there really isn't. The best route without Propylon Chambers takes about 2500 seconds, whereas including them improves the route by only two minutes. There were a few places where the optimiser decided to exploit this method of teleportation.
Given that simulating actually getting the Indices would also be a pain (I'd have to keep track of the optimal travel time between any two quest nodes for when the player has any combination of indices out of \( 2^{10} = 1024 \)), I decided to skip them for now.
There are a few things that are worth doing at the beginning of the game to ensure a smooth progression through the route as well as raise enough money to pay our way through training and some faction quests.
I'd use enchantments for most of the in-game activities, including dealing damage, teleporting and movement. Enchantments are spells that can be put on equipment. They require a soul gem to produce, which determines how much charge the item will have, but enchanted items recharge over time, don't use up the player's Magicka reserves, and spells cast from them can't fail and are instantaneous. This means teleporting is a few seconds faster, since we don't need to wait for the cast animation, but more importantly, casts can't be interrupted by someone hitting the player.
The items enchanted with all three types of teleportation (Divine/Almsivi Intervention and Recall) are easily obtained at the beginning of the game: the first two during Edwinna Elbert's initial questline and the final one can be bought from a merchant in Caldera. I hence changed the optimiser a bit to always have these two starting nodes (Ajira's and Edwinna's Mages Guild quests) at the beginning of the route and would do some more preparation as part of them before proceeding.
I had played around with various ways of dealing damage to NPCs. I first thought of using Blunt weapons, since the player would have to train that skill anyway and one of the best Blunt weapons has to be acquired as a part of an Imperial Cult quest, but it still takes several swings to kill anyone with it, since the player can miss, especially at lower skill levels.
Then I remembered about the Drain Health enchantment: it reduces the target's maximum health by a given number of points. It's supposed to be used as a cheap way to weaken the enemy, but it can also be exploited. If one casts Drain Health 100pt on someone, even for one second, they will die if they have fewer than 100 hit points. Paired with a 100-point Weakness to Magicka effect, this allows for a cheap way to kill anybody with fewer than 200 hit points, which is an overwhelming majority of game characters.
Despite all the teleportation, there still is a lot of walking to be done in the game. While the character will have access to Fortify Speed potions, I only wanted to use them for long movement segments, since making enough of them to cover the whole route would take too much time.
Thankfully, there is an artifact in the game that gives the player a constant Fortify Speed effect: Boots of Blinding Speed. They boost the player's speed by 200 points (essentially tripling it) at the expense of blinding (it's in the name) the player. The blinding effect can be resisted: if the player has a Resist Magicka spell active for the split second when they put the boots on, the effect is nullified.
Moreover, levitation is important, since it allows the player to bypass various obstacles as well as avoid annoying enemies. Due to the way levitation speed is calculated (the main component is the sum of player's Speed and the levitation effect magnitude), 1 point of Levitation is sufficient for the player to start flying and it's cheaper to increase speed by manipulating the character's Speed attribute. 1 point of Levitation for about 90 seconds would be another enchantment.
Chests and doors in Morrowind can have a lock with a level ranging from 1 to 100. Hence, we'd need to also enchant a piece of clothing with Open 100pt.
There are quite a few times in the route where we need to kill someone who's not attacking us without attracting the guards' attention (like when doing Morag Tong assassinations before the actual quest starts). One way to do it is taunting the NPC until they attack, which takes time and needs a moderately high Speechcraft skill. Luckily, there's a magic effect for that, too. Frenzy increases the Fight rating of an NPC and 100pt for 1 second is enough to make them attack the player. When the effect wears off, they don't stop attacking and can be slain in self defence without legal issues.
When a player creates a potion in Morrowind, their chance of success as well as the potion's strength, duration and value is partially governed by the player's Intelligence attribute.
The player can also create a potion that boosts their Intelligence attribute.
Do you see how the game can be broken with this? There's no limit on how many potions the player can consume per second and there's no cap on the player's Intelligence. Hence we can have all our monetary problems taken care of by exploiting this and repeatedly creating stronger and stronger Intelligence potions to sell. Not only that, but we can also use this to create Restore Health potions that restore player's health faster than anybody can damage it as well as use the Intelligence boost to create enchanted items manually (instead of paying a specialist to do it). Finally, we can also create Fortify Speed potions that increase the player's raw speed.
There are merchants in Morrowind that restock some of their ingredients as soon as the player stops trading with them and lots of them sell ingredients for Fortify Intelligence and Restore Health potions.
We need about 75000 gold pieces to get through the game, including all training and faction quests. Luckily, there's a merchant in the game, the Creeper, who has 5000 gold in his inventory and buys items at face value. My tests showed I needed about 150 Health potions to get me through the game, so I'd sell any extra ones to the Creeper to get me to the target number.
Fortifying player's Speed (beyond the boost provided by the Boots) is more difficult: there are only two ingredients in the game that restock and provide the Fortify Speed attribute, Kagouti Hide and Shalk Resin. However, they are quite expensive (52 gold pieces in total for the two) and also have a Drain Fatigue side effect (which makes the player lose consciousness when their Fatigue is close to zero). Hence they have to be paired with another two ingredients that have a Restore Fatigue effect.
Here's the final route that I came up with: it opens with the sequence of money-making and enchantments that I had described before and then continues with the list of things to do that was produced by the optimiser. This initial sequence took me about 28 minutes to complete and the rest of the route is located here. I also uploaded the route that assumes the player can use all Propylon Chambers here.
Finally, there are several NPCs that have to be killed as part of the run and have to be damaged first before they can be killed with the Amulet, either with Drathis' scrolls or with the Iron Warhammer/Skull Crusher when it's picked up.
I think that's it! The code to produce most of this is on my GitHub, together with the code from the previous set of articles. One day I might even actually record myself trying to follow this route, but I'm sure actually planning it out is more fun than running it.
Finally, feel free to follow me on Twitter at twitter.com/mildbyte!
Previously, we left off by converting the problem of finding a route that completes all faction questlines in Morrowind into the general case of the travelling salesman problem with dependency constraints. Today, we'll come up with a way to produce a good enough solution to it.
There are two graphs I'm talking about here: one is the quest dependency graph from the previous part and the other one is the travel graph that I had generated back in an earlier article.
The dependency graph had about 110 geographically distinct nodes at this point, so the first order of business was creating a matrix of fastest routes and travel times between any two of those nodes, since the final route could indeed include travelling between any two points.
To do that, I used Dijkstra's algorithm: since it's a single-source shortest-path algorithm, if I ran it for one geographical node in the quest dependency graph, I'd get the shortest routes (on the travel graph) to all other points. Hence I only had to run it a hundred times.
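A heap-based single-source run would look roughly like this (a sketch, with the travel graph assumed to be stored as an adjacency dict of edge costs in seconds; not the exact implementation I used):

    import heapq
    import itertools

    def dijkstra(travel_graph, source):
        # travel_graph: dict mapping vertex -> dict of neighbour -> edge cost (travel time in seconds).
        # Returns the fastest travel time from source to every reachable vertex.
        dist = {source: 0.0}
        counter = itertools.count()  # tie-breaker so the heap never has to compare vertices
        queue = [(0.0, next(counter), source)]
        while queue:
            d, _, v = heapq.heappop(queue)
            if d > dist[v]:
                continue  # stale queue entry
            for neighbour, cost in travel_graph.get(v, {}).items():
                new_d = d + cost
                if new_d < dist.get(neighbour, float("inf")):
                    dist[neighbour] = new_d
                    heapq.heappush(queue, (new_d, next(counter), neighbour))
        return dist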
There was a problem, though: the travel graph had about 6500 vertices and 16000 teleportation edges (that is, travelling with public transport or using an Almsivi/Divine Intervention spell: this doesn't include actual physical travel edges between points in the same cell). It took about 10 minutes to run Dijkstra for a single source, so I was looking at spending about a day generating the travel time matrix.
Hence I decided to prune the travel graph a bit by coalescing vertices that were in the same cell. For every cell (interior or exterior), I'd replace all vertices in it with a single one with average coordinates and then recalculate the cost of travelling between them:
from collections import defaultdict

def coalesce_cells(vertices, edges):
    # Replaces all vertices in the graph in the same cell with a single one (average location)
    vertices_map = defaultdict(list)
    for v in vertices:
        vertices_map[v.cell].append(v)

    # Calculate the average vertex for each cell
    average_vertices = {}
    for cell, vs in vertices_map.items():
        coords = tuple(sum(v.coords[i] for v in vs) / float(len(vs)) for i in range(3))
        average_vertices[cell] = Location(coords=coords, cell_id=vs[0].cell_id, cell=vs[0].cell)
    new_vertices = set(average_vertices[v.cell] for v in vertices)

    # Group edges by the average vertices they belong to
    grouped_edges = defaultdict(lambda: defaultdict(list))
    for v1 in edges:
        av1 = average_vertices[v1.cell]
        for v2 in edges[v1]:
            av2 = average_vertices[v2.cell]
            # Calculate the new edge cost: walk from the average vertex to the original edge start,
            # take the original edge, then walk from its end to the other average vertex
            grouped_edges[av1][av2].append(
                (edges[v1][v2][0],
                 get_distance(av1.coords, v1.coords) / WALKING_SPEED
                 + edges[v1][v2][1]
                 + get_distance(v2.coords, av2.coords) / WALKING_SPEED))

    new_edges = defaultdict(dict)
    for av1 in grouped_edges:
        for av2 in grouped_edges[av1]:
            # Replace all possible edges between the two new vertices with the cheapest one
            new_edges[av1][av2] = min(grouped_edges[av1][av2], key=lambda md: md[1])

    return new_vertices, new_edges
With this pruning, the travel graph shrunk to about 800 vertices and 2200 teleportation edges and I successfully managed to create a matrix of fastest travel times between any two nodes on the dependency graph.
Here's one of the cool things you can do with such a distance matrix: use a clustering algorithm to visualize the clumps in which quest points of interest are organized.
For example, the top left corner of this heatmap has a group of NPCs that are all located on a set of remote islands at the north of the game map. Getting to them is a pain and takes a lot of time, hence it's worth arranging our quests in such a way so that we only have to visit there once.
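For illustration, here's one way to produce a similar clustered heatmap with scipy and matplotlib (a sketch, not the code behind the original image; it assumes the quest-node travel-time matrix is a square NumPy array):

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import linkage, leaves_list
    from scipy.spatial.distance import squareform

    def plot_clustered_distances(dist, labels):
        # dist: square NumPy array of fastest travel times between quest nodes. Travel times aren't
        # symmetric (teleportation is one-way), so symmetrise the matrix for clustering purposes only.
        sym = (dist + dist.T) / 2.0
        np.fill_diagonal(sym, 0.0)
        order = leaves_list(linkage(squareform(sym), method="average"))
        plt.imshow(dist[np.ix_(order, order)], cmap="viridis")
        plt.colorbar(label="travel time, s")
        plt.xticks(range(len(order)), [labels[i] for i in order], rotation=90, fontsize=4)
        plt.yticks(range(len(order)), [labels[i] for i in order], fontsize=4)
        plt.tight_layout()
        plt.show()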
Let's now say we have a candidate route, which is one of the topological sorts of the dependency graph. We can see how long this route takes by simply adding up the cost of travel between consecutive nodes using our cost matrix.
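In code, scoring a candidate route is a one-liner (assuming the cost matrix can be indexed by a pair of nodes):

    def route_cost(route, dist):
        # Total travel time of a candidate route: the sum of costs between consecutive nodes.
        return sum(dist[r1][r2] for r1, r2 in zip(route, route[1:]))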
How would we find an optimal route? Brute force won't help here. I decided to do a slightly less stupid thing: let's take a route and randomly perturb it. Sure, the route we end up with might be less efficient than it was before. But imagine we do that for tens of thousands of randomly generated routes, keeping a fraction of them that's the most efficient, randomly perturbing the best routes again and again. Eventually we'd converge on a decent route, if not the most optimal one.
The final algorithm I used was, roughly: generate a large pool of random routes, score them all, keep a fraction of the best ones, randomly perturb those to produce a new pool and repeat until the best route stops improving.
Of course, the actual constants can be played with and the termination condition could be better defined. Some call this a genetic algorithm (where we kind of simulate evolution and random mutations in the gene pool), some call it simulated annealing (where the magnitude of random perturbations decreases over time until the solution pool settles down). "Genetic algorithm" sounds sexier, which is why I mentioned it in this paragraph.
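A rough sketch of that loop, reusing the route_cost helper from above (the perturbation moves a random node to another position that still respects the dependency graph, which is assumed to map every node in the route to its prerequisites; the pool sizes and generation count are made-up placeholders):

    import random

    def perturb(route, prerequisites):
        # Move one randomly chosen node to another position that still respects its dependencies.
        new = route[:]
        node = new.pop(random.randrange(len(new)))
        # The node must stay after all of its prerequisites...
        lo = max((new.index(p) + 1 for p in prerequisites.get(node, ())), default=0)
        # ...and before every node that lists it as a prerequisite.
        hi = min((new.index(d) for d, deps in prerequisites.items() if node in deps), default=len(new))
        new.insert(random.randint(lo, hi), node)
        return new

    def optimise(seed_routes, prerequisites, dist, generations=1000, keep=100, offspring=10):
        # Keep the cheapest routes in the pool, breed random perturbations of them and repeat.
        pool = list(seed_routes)
        for _ in range(generations):
            pool = sorted(pool, key=lambda r: route_cost(r, dist))[:keep]
            pool += [perturb(r, prerequisites) for r in pool for _ in range(offspring)]
        return min(pool, key=lambda r: route_cost(r, dist))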
I left this to run overnight and in the morning came back to what seemed to be a decent route through the game.
The times here were inferred from in-game travel distances, assuming the minimum walking speed of about 100 game units per second. Of course, there are potions and spells to increase the player's walking speed. In addition, this doesn't account for the time spent in the menus or actually killing whatever the player is supposed to kill.
Overall, there are some things the optimiser came up with that made me go "aha!".
I wrote a pretty printer that would take the graph nodes and expand them into an actual travel plan that uses Almsivi/Divine Intervention spells and public transport. In this fragment, for example, the route planner set up the faction questline progress just right so that all six objectives in the desolate southwest corner of the map could be completed in one go (lines 592-618).
However, there were still a few problems with this route.
Advancement in Morrowind factions requires not only quest completion, but also skills training. I had already mentioned that while we can pay to train a skill, it can't be trained above its governing attribute.
Attributes can only be raised when the player levels up. A game character has 5 out of 27 skills as major skills (which lets them level faster and gives a flat +25 bonus to them at the beginning of the game) and 5 minor skills (which also lets them level faster, albeit not as fast as major skills, and adds a +10 bonus). The character levels up when they have gotten 10 points in their major or minor skills.
This is where it gets weird. At level up, the player picks 3 attributes to raise. How much they are raised by is determined by the skills the player had trained. For example, if they got 10 points in Alchemy (governed by Intelligence), then, if Intelligence is picked at level up, it will increase by 5 points instead of 1. However, if the player had leveled up by training 1 point in Long Blade (governed by Strength) and 9 points in Alchemy, they'll only get a 4x multiplier to Intelligence and 1x to Strength.
The player can also train skills that aren't major or minor to get enough points to boost the attribute multiplier. Let's say the player also trains 1 point in Security (governed by Intelligence) which isn't their major or minor skill. It won't count towards the 10 points required for a level up, but it will count towards the attribute multiplier calculations. Hence the player will be able to raise their Intelligence by 5.
I hence had to tactically choose my character's major/minor skills as well as the race (which gives bonuses to certain skills and attributes) in order to be able to quickly meet each faction's expectations.
This is a list of skill levels that each faction requires in order for the player to be able to become the head of that faction. Note that this might not necessarily meet the skill requirements for the highest rank of that faction, since most factions stop checking the player's credentials during their final questlines and just promote the player to the highest rank once the questline is completed.
With that in mind, I decided to have Alchemy, Blunt and Marksman as high level skills. Alchemy (main skill for the Mages Guild) could be trained really quickly by making potions. Blunt was shared between 4 factions (Fighters Guild, Temple, Imperial Cult and Imperial Legion) and would have to be trained to 90. Marksman would cover the other 3 factions (Thieves Guild, Morag Tong and House Hlaalu) and trained to 80.
The other skills had to be chosen partially to cover the remaining, weaker requirements, partially so that training them would boost either Strength or Agility to 90 or 80, respectively (otherwise Blunt or Marksman wouldn't be possible to be trained). I hence decided to go for a character that starts with high Strength and a bonus to Blunt weapons and train Long Blade to boost Strength (and cover the Fighters Guild/Imperial Legion secondary skill requirement).
For Agility, I would train Block, Light Armor and Sneak. All three of those are governed by Agility and training them to required levels would result in Agility being boosted enough to allow me to train Marksman to 80.
Enchant and Mysticism would cover the secondary requirements for the Temple, the Mages Guild and the Imperial Legion.
Here's the final character sheet, together with the major and minor skills that she starts with.
I decided not to load up Morrowind trainer data in order to incorporate it into the route planner. Instead, I looked up the best trainers for Blunt and Marksman (since they're the only ones that will let the player reach the required level) as well as some second best ones and tried to come up with people that the player character would meet en route anyway. There were some hilarious coincidences, like Alveleg who has to be killed as part of a Fighters Guild quest but who can also train the player in Block, Sneak and Marksman up to fairly high levels.
I then added some extra nodes to the dependency graph to reflect the new training sessions:
# Training nodes
training_alveleg:
  # we're killing him as part of the FG quest and he trains Marksman (45), Sneak (42) and Block (38)
  description: Train Block x10 (up to 25), Sneak x15 (up to 30), Marksman x15 (up to 45), should get Agi 60
  giver: alveleg
training_bolnor:
  description: Train Light Armor x15 (up to 30), Marksman x5 (up to 50), should get Agility 70
  giver: bolnor andrani
  prerequisites:
    - training_alveleg
training_eydis:
  description: Train Long Blade x20 (up to 40), Blunt x30 (up to 70), Strength 85
  giver: eydis fire-eye
training_ernse:
  description: Train Blunt x20 (up to 90)
  giver: ernse llervu
  prerequisites:
    - training_eydis
training_missun:
  description: Train Marksman x30 (up to 80)
  giver: missun akin
  prerequisites:
    - training_bolnor
training_falvel:
  description: Train Mercantile x10 (should get Personality 35)
  giver: falvel arenim
They would then become prerequisites for some later quests in faction questlines:
tt_tharer_1:
  description: Get and hand in all Tharer Rotheloth quests
  giver: tharer rotheloth
  prerequisites:
    - tt_7graces_vivec
    - tt_7graces_gnisis
    - tt_7graces_kummu
    - tt_7graces_gg
    - tt_cure_lette
    - tt_mount_kand
    - tt_mawia
    - tt_kill_raxle_berne
    - training_eydis # Curate (50 blunt) to hand in Galom Daeus quest
In some cases, the requirements I added were stronger than necessary. For example, one could get promoted to Master of Fighters Guild with a Blunt skill of 80, yet it depends on a graph node training Blunt to 90. The reasoning behind it was that we don't want to visit the Master Blunt trainer more than once: if we're visiting her, we might as well train Blunt to the maximum we'll need.
Next up, we'll try to add the usage of Mark and Recall spells to the route as well as discuss some miscellaneous Morrowind tricks and glitches that can help during a speedrun.
Well, not even last night's storm could wake you. I heard them say we've reached Morrowind, I'm sure they'll let us go and do a speedrun.
There's the famous speedrun of Morrowind's main quest that involves basically travelling to the final game location using a few scrolls and spells and killing the boss.
However, there isn't a Morrowind speedrun category where someone tries to become the head of all factions. For all its critical acclaim and its great story, most of the quests in Morrowind are basically fetch-item or kill-this-person and there aren't many quests that require anything else. But planning such a speedrun route could still be extremely interesting for many reasons.
This can get really complicated: on the way to a given quest objective, the player can pick up another quest, or an item that might be needed at some point for a quest for a different faction that they aren't even a member of yet. What could be an efficient route through one faction's quests might be inferior to a slower route once all factions are played through, since points on that route might be visited during other factions' quests anyway, and so on.
In other words, planning an efficient route through all factions would be a fun computer science problem.
There are a couple factions where the final quest can be completed immediately, but that just results in a journal entry saying that the player character is now the head of the faction (and the advancement is not reflected in the character stats). I decided I wanted to rise to the top the mostly-honest way instead.
Unlike Skyrim and Oblivion, advancement in Morrowind factions requires the player to have certain skills at a certain level. There are 27 skills in Morrowind and each faction has 6 so-called "favoured skills". Becoming head of a faction requires the player to have one of these skills at a very high level (roughly 80-90 out of 100) and 2 of them at a medium level (about 30-35).
Morrowind characters also have 7 attributes, each of which "governs" several skills. Attributes also play a role in faction advancement.
So that's kind of bad news, since in a speedrun we won't have enough time to develop our character's skills. The good news is there are trainers scattered around Morrowind that will, for a certain fee, instantly raise these skills. The bad news is that these trainers won't train skills above their governing attributes. Raising attributes requires levelling and levelling in Morrowind is a very long story. I'll get into the actual levelling strategy later.
I quickly gave up on scraping quest data from the game files (since most quests are driven and updated by a set of dialogue conditions and in-game scripts) and instead used the UESP's Morrowind Quests page to manually create a series of spreadsheets for each faction that included quests, their reputation gain and rough requirements.
Here's an example of one such spreadsheet:
This spreadsheet already shows the complexity of Morrowind factions. There are two intended ways to reach the top of the Mages Guild: by having enough reputation and skills to become a Master Wizard and either completing all of Edwinna Elbert's quests and challenging the current Arch-Mage to a duel or completing all of Skink-in-Tree's-Shade's quests and getting a letter from the upper management telling the current Arch-Mage to step down. I later found another way, by reaching the rank of Wizard (one rank below Master Wizard) and then talking to the current Arch-Mage about a duel, which is quicker.
Other than that, there are also multiple ways to complete a quest. Edwinna Elbert's final 3 quests, which require the player to bring her some Dwarven artifacts, don't require the player to actually go to the places she recommends: the artifacts can be acquired from different locations or even bought.
...turned out to be tricky. The first cut of this was encoding each quest in a YAML file as a set of prerequisites and required items/actions for completion. For example:
edwinna_2:
  giver: edwinna elbert
  prerequisites:
    rank: Conjurer
    quest: Chimarvamidium 2
  quests:
    - Dwemer Tube:
        rep: 5
        completion:
          items: misc_dwrv_artifact60
    - Nchuleftingth:
        rep: 10
        completion:
          go_person: anes vendu
  ...
This encodes the start of Edwinna Elbert's advanced questline, Dwemer Tube from Arkngthunch-Sturdumz, which requires the player to have become a Conjurer in the Guild and completed Edwinna's previous quest. To complete this quest, the player needs to have the tube in their inventory (I used the in-game item ID). Completion gives the player 5 faction reputation points.
The questline continues with Nchuleftingth Expedition and to complete that quest, the player needs to go to a certain NPC (he's an archaeologist who has, as it turns out, perished). Unlike the previous quest, this action (of going to a person and interacting with them) requires us to have started the quest.
So with that in mind, we can generate a set of all possible ways to complete a guild using breadth-first search.
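To illustrate the idea (a sketch, not the original generator; the quest encoding here is simplified to just reputation gains and prerequisite sets):

    from collections import deque

    def generate_routes(quests, target_rep):
        # quests: dict mapping quest name -> {"rep": reputation gain, "prerequisites": set of quest names},
        # a simplified version of the YAML encoding above. Breadth-first search over sequences of
        # completed quests until we've accumulated enough reputation to start the final questline.
        finished = []
        queue = deque([()])
        while queue:
            done = queue.popleft()
            if sum(quests[q]["rep"] for q in done) >= target_rep:
                finished.append(done)
                continue
            for name, quest in quests.items():
                if name not in done and quest["prerequisites"] <= set(done):
                    queue.append(done + (name,))
        return finished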
What could possibly go wrong? Well, firstly there's an issue of ordering. If the player is juggling two parallel questlines from different questgivers, each possible interleaving of those is counted, which causes a combinatorial explosion. Secondly, routes that are strictly worse than existing routes are generated too. For example, if completing a certain guild requires us to only complete quests A, B, D and E, there's no point in generating a route A, B, C, D, E: there's no way doing D won't take extra time.
I hence did some culling by making sure that during generation we wouldn't consider a sequence if it were a superset of an already existing quest sequence. This brought the number of generated routes (subsets, really) down to a mildly manageable 300.
Is this good? Well, not really. This only accounted for which sets of quests could be completed. There was no mention of the order in which these quests could be completed (yielding probably millions of permutations), the ordering of actual actions that would complete a given quest (for example, completing a given quest could involve killing someone and that could happen even before the player character was aware of a quest) or the alternative routes (like fetching a required item from a different place or doing an extra objective to get more faction reputation).
Worse even, this was just the route generation for one faction. There were 7 more factions to do (and I had to pick a Great House that would be the quickest to complete too) and even if they didn't have that many ways to complete them, brute-forcing through all the possible routes with all factions would definitely be unreasonable.
This method also wouldn't let me encode some guild features. For example, Morag Tong, Morrowind's legal assassin guild, has several questgivers around the world, any of which can give the player their next contract. Furthermore, the reputation required for the final questline to open can be gathered not only by doing assassination contracts, but also by collecting certain items spread around the world, each yielding about the same reputation as a contract. These items can quite often be found in dungeons that the player has to visit for other factions anyway and it could be the case that doing those quests to collect these items is overall faster.
I hence decided to drop the idea of combining all possible routes from all guilds and instead did some experimentation to find out if there are obviously quick routes through most guilds. Turns out, there were and so instead of solving a few million instances of the Travelling Salesman Problem, I could do with just one. Still impossible, but less impossible.
In the Mages Guild, the introductory questline can be completed in a matter of minutes and yield 22 reputation points and then Edwinna's quests can be completed en route to other quest locations that will likely have to be visited anyway. Those two questgivers would bring the player character over the 70 reputation limit required to challenge the current Arch-Mage (at that point, I wasn't looking at skills training yet).
The Fighters Guild could be completed by doing all quests from one questgiver (most of which involved killing bandits in roughly the same area which can be done even before the quest begins), a couple from another one and then proceeding on to a final questline (which does have a quest requiring to bring some items to the middle of nowhere, but the alternative ending requires many more reputation points).
The Thieves Guild has some conflicts with the Fighters Guild and so the two questlines have to be carefully managed together. Almost all quests in the Thieves Guild need to be done (since doing some Fighters' Guild quests decreases reputation with the Thieves Guild), but the good news is that they share the antagonist and so after reaching a certain reputation with the Thieves Guild, finishing the Fighters Guild promotes the character to Master Thief.
Morag Tong can basically be completed in one go: after the initial contract required to join the Guild, the player collects enough Sanguine items to skip all contracts straight on to the final questline and the location of the final boss is visited twice in other guilds' quests.
Tribunal Temple starts with a mandatory pilgrimage that visits a few locations around the game map. There are several more pilgrimages as part of the questline and some of those can be completed even without having joined the faction.
Imperial Legion has a questline that takes place in a single town and requires the player to visit the location that's visited anyway in Edwinna Elbert's questline in the Mages Guild. In addition, one quest gives additional reputation with the Temple, allowing to skip one quest there.
Imperial Cult has three questlines. One of them involves fundraising and, just like in real life, the player can simply give the money to the questgiver on the spot instead of asking others for it. The other one involves fetching several powerful artifacts and visiting a couple of locations that are visited in other guilds' questlines.
After eyeballing the Great Houses' questlines, I settled on House Hlaalu. House Redoran has a way too long questline, most of the action in House Telvanni happens on the East side of the game map that mostly isn't visited in other quests and the final Hlaalu questline that leads to becoming Grandmaster can be started at an earlier rank.
Now that I had a single route for each guild, instead of encoding each and every quest requirement and location in a graph, I opted for an easier way. Each node in a quest dependency graph would be something that's fairly quick to complete and happens in the same location. It could be a quest, or a series of quests, or the action of clearing out some dungeon that is featured in several future quests.
A node contains two things: where this node is located (for example, the in-game ID of the questgiver or an NPC in the location that the player needs to clear out or find a certain item) and nodes that the player needs to have completed before.
For example:
# Coalesced Ajira questline
mg_ajira_1:
  giver: ajira

# Edwinna's quests up until Nchuleftingth expedition, all done in one go (Dwemer tube stolen
# from Vorar Helas in Balmora, then Chimarvamidium, Skink and Huleen)
mg_edwinna_1:  # also gives Almsivi/Divine amulets
  giver: edwinna elbert
  prerequisites:
    - mg_ajira_1

mg_edwinna_2:
  giver: edwinna elbert
  prerequisites:
    - mg_edwinna_1
    - mg_edwinna_nchuleftingth
    - mg_edwinna_scarab_plans
    - mg_edwinna_airship_plans

# locations of items we need to collect to complete Edwinna's quests
mg_edwinna_nchuleftingth:
  giver: anes vendu  # can discover his body before the quest begins

mg_edwinna_scarab_plans:
  giver: Khargol gro-Boguk  # orc in Vacant Tower with the other copy of the plans

mg_edwinna_airship_plans:
  giver: lugrub gro-ogdum  # located near the orc in Gnisis Eggmine that is also a part of the IL quest

mg_master:
  giver: trebonius artorius
  prerequisites:
    - mg_edwinna_2
In this case, the Dwarven plans required by Edwinna can be collected even before the questline begins and then all handed in at the same time.
When talking to someone had to be done as a part of the quest, I encoded it as several nodes that depended on each other:
fg_eydis_1_start:  # join FG and start first quest
  giver: eydis fire-eye

fg_eydis_1_do:
  giver: drarayne thelas  # actually do the first quest
  prerequisites:
    - fg_eydis_1_start

fg_eydis_1_end:  # hand the quest in
  giver: eydis fire-eye
  prerequisites:
    - fg_eydis_1_do
Here's the final quest dependency graph:
This was much better than messing around with reputation points and quest prerequisites. Any topological sorting of this dependency graph would be a valid route through the game's quests (assuming I encoded my dependencies correctly). Since each node had a fixed geographical location, I could use a pathfinding algorithm and the data from my previous project to find out the time that any given route satisfying this dependency graph (using teleportation and public transport) takes.
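To make that concrete, here's a minimal sketch of the idea (not the actual planner: the node names are trimmed from the examples above and the travel times are dummy values, with the real ones coming from the pathfinding step):

import random

# Hypothetical, trimmed-down dependency graph: node -> set of prerequisites.
DEPS = {
    "mg_ajira_1": set(),
    "mg_edwinna_1": {"mg_ajira_1"},
    "mg_edwinna_nchuleftingth": set(),
    "mg_edwinna_2": {"mg_edwinna_1", "mg_edwinna_nchuleftingth"},
    "mg_master": {"mg_edwinna_2"},
}

# travel_time[a, b]: time to get from node a's location to node b's location;
# in the real project this comes from the pathfinding data, dummy values here.
TRAVEL_TIME = {(a, b): 100 for a in DEPS for b in DEPS if a != b}

def topological_sort(deps):
    # One valid ordering of the nodes (a simple version of Kahn's algorithm).
    remaining = {node: set(prereqs) for node, prereqs in deps.items()}
    order = []
    while remaining:
        ready = [node for node, prereqs in remaining.items() if not prereqs]
        node = random.choice(ready)  # any choice here yields a valid route
        order.append(node)
        del remaining[node]
        for prereqs in remaining.values():
            prereqs.discard(node)
    return order

def route_time(order, travel_time):
    # Total travel time of visiting the nodes in this order.
    return sum(travel_time[a, b] for a, b in zip(order, order[1:]))

order = topological_sort(DEPS)
print(order, route_time(order, TRAVEL_TIME))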
However, there's still a problem: there are many possible topological sortings of a given graph and counting them is #P-complete.
This is a generalisation of the travelling salesman problem: here we need to find the shortest tour that visits all nodes subject to a set of dependencies (e.g. we can't visit A before we've visited C), whereas in TSP we need to visit all nodes without any dependencies. Having dependencies decreases our search space (in the most extreme case the dependency graph is a line and so there's only one possible route), but not by enough.
I hence had to develop some approximations to turn this graph and the matrix of travel times between its nodes into a good-enough route.
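To give a flavour of what such an approximation could look like, here's a toy simulated-annealing-style local search over orderings, reusing route_time from the sketch above (the acceptance rule and parameters are made up and this isn't necessarily the exact approach used later): propose a swap of two nodes, keep the new ordering if it doesn't break any dependency and shortens the route, and occasionally keep a worse one to escape local minima.

import math
import random

def is_valid(order, deps):
    # Every node must appear after all of its prerequisites.
    seen = set()
    for node in order:
        if not deps[node] <= seen:
            return False
        seen.add(node)
    return True

def anneal(order, deps, travel_time, iterations=100000, temperature=500.0, cooling=0.9999):
    current, current_cost = list(order), route_time(order, travel_time)
    best, best_cost = list(current), current_cost
    for _ in range(iterations):
        i, j = random.sample(range(len(current)), 2)
        candidate = current[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]
        if not is_valid(candidate, deps):
            continue
        cost = route_time(candidate, travel_time)
        # Always accept improvements; accept worse routes with a probability
        # that shrinks as the temperature cools down.
        if cost < current_cost or random.random() < math.exp((current_cost - cost) / temperature):
            current, current_cost = candidate, cost
            if cost < best_cost:
                best, best_cost = list(candidate), cost
        temperature *= cooling
    return best, best_cost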
Next up, I'll try a couple of random approximations to solve this problem, including simulated annealing (not to be confused with genetic algorithms). There's also the matter of planning out the player character and his/her skill development in order to minimize the amount of time we need to spend training up to get promoted in various guilds. Stay tuned!
This is a quick template for a cold e-mail that can be used for initial reconnaissance and information gathering for a software engineer/entrepreneur that would like to break into a new industry and doesn't know where to start. Feel free to use it and alter it as you see fit!
Dear (name of CEO),
My name is (name) and I'm an entrepreneur based in (city) with (n) years of experience crafting software products for (Google/Facebook/Apple/Amazon/Microsoft).
As someone with the gift of an analytical mind that leaves nothing unexamined, I believe I am uniquely positioned to quickly get up to speed with the domain knowledge intrinsic to a given field that would take normal people years to acquire. I have hence been interested in creating a business that solves some of the unique problems that (industry) faces. I was wondering if you had a few minutes to answer some questions so that I can find out what these problems actually are?
Firstly, I'm interested in the business processes in your day-to-day work that have the potential to be replaced with a CRUD application. Do you use an Excel spreadsheet for any of your operating procedures or business intelligence? Do you think there is scope for migrating some of those in-house systems to a software-as-a-service platform so that you can focus on your business' competitive advantages?
Secondly, I am a big proponent of distributed ledger technology as an alternative to single-point-of-failure classical databases in this increasingly more trustless society. Would you consider replacing part of your data storage, inventory tracking or other business operations with a custom-tailored blockchain-based solution that leverages smart contracts to ensure a more robust experience for all stakeholders?
Finally, do you think running your organisation could be made easier with a bespoke shared economy service that empowers workers with flexible hours and lowers your human resources overhead while allowing you to tap into an immensely larger pool of workforce? This is an innovation that has successfully added value to taxis, hotels and food delivery and I firmly believe that there is a unique proposition in applying it to (industry).
I look forward to hearing from you.
(my name)
Sent from my (device)
Yes, this is satire. But if you remove the over-the-top buzzword soup, the messiah complex and flavour-of-the-month technology, it really doesn't seem like there's much a person without any connections or experience in an industry can do besides cold-emailing people and asking them "what do you use Excel for?"
source: XKCD
Abstract: I examine inflation-adjusted historical returns of the S&P 500 since the 1870s with and without dividends and backtest a couple of famous investment strategies for retirement. I also deploy some personal/philosophical arguments against the whole live-frugally-and-then-live-frugally-off-of-investments idea.
Disclaimer: I'm not (any more) a finance industry professional and I'm not trying to sell you investment advice. Please consult with somebody qualified or somebody who has your best interests at heart before taking any action based on what some guy said on the Internet.
The code I used to produce most of the following plots and process data is in an IPython notebook on my GitHub.
Early retirement is simple, right? Just live frugally, stop drinking Starbucks lattes, save a large fraction of your paycheck, invest it into a mixture of stocks and bonds and you, too, will be on the road towards a life of work-free luxury and idleness driven by compound interest!
What if there's a stock market crash just before I retire, you ask? The personal finance gurus will calm you down by saying that it's fine and the magic of altering your bond position based on your age as well as dollar cost averaging, together with the fact that the stock market returns 7% on average, will save you.
As sweet as that would be, there is something off about this sort of advice. Are you saying that I really can consistently make life-changing amounts of money without doing any work? This advice also handwaves around the downside risks of investing into the stock market, including the volatility of returns.
I wanted to simulate the investment strategies proposed by personal finance and early retirement folks and actually quantify whether putting my "nest egg" into the stock market is worth it.
This piece of writing was mostly inspired by NY Times' "In Investing, It's When You Start And When You Finish" that shows a harrowing heatmap of inflation-adjusted returns based on the time an investment was made and withdrawn, originally created by Crestmont Research. They maintain this heatmap for every year here.
This post is in two parts: in the first one, I will backtest a few strategies in order to determine what sort of returns and risks at what timescales one should expect. In the second one, I will try to explain why I personally don't feel like investing my money into the public stock market is a good idea.
The data I used here is the S&P 500 index, and I'm assuming one can invest in the index directly. This is not strictly true, but index tracker funds (like Vanguard's VOO ETF) nowadays are pretty good and pretty cheap.
A friend pointed me to a paper that has an interesting argument: using the US equity markets for financial research has an implicit survivorship bias in it. Someone in the 1870s had multiple countries' markets to choose from and had no way of knowing that it was the US that would later become a global superpower, with a large part of its equity gains owing to that.
As a first step, I downloaded Robert Shiller's data that he used in his book, "Irrational Exuberance", and then used it to create an inflation-adjusted total return index: essentially, the evolution of the purchasing power of our portfolio that also assumes we reinvest dividends we receive from the companies in the index. Since the companies in the index are large-cap "blue chip" stocks, dividends form a large part of the return.
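As a rough sketch of how such an index can be built (assuming a monthly DataFrame with price, dividend and CPI columns from the Shiller dataset; the column names here are made up), each month's dividend gets reinvested and the result is deflated by CPI:

import pandas as pd

def real_total_return_index(df):
    # df is assumed to have monthly columns "price" (S&P composite price),
    # "dividend" (annualized dividend) and "cpi"; column names are hypothetical.
    monthly_growth = (df["price"] + df["dividend"] / 12) / df["price"].shift(1)
    nominal_index = monthly_growth.fillna(1).cumprod()
    # Deflate by CPI to get the evolution of purchasing power.
    return nominal_index * df["cpi"].iloc[0] / df["cpi"]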
I compared the series I got, before adjusting for inflation, with the total return index from Yahoo! Finance and they seem to closely match the Yahoo! data from the 1990s onwards.
The effect of dividends being reinvested changes the returns dramatically. Here's a comparison of the series I got with the one without dividends (and one without inflation):
The average annual return, after inflation, and assuming dividends are reinvested, is about 7.5%. Interestingly, this seems to contradict Crestmont Research's charts where the average annual pre-tax (my charts assume there are no taxes on capital gains or dividends) return starting from 1900 is about 5%.
On the other hand, the return with dividends reinvested does seem to make sense: without dividends, the annualized return is about 4.25%, which implies a dividend yield close to the actual observed values of about 4.4%.
Another observation is that the returns are not normal (their distribution has statistically significant differences in kurtosis and skewness).
First, I wanted to plot the annualized returns from investing in the S&P 500 at a given date and holding the investment for a certain number of years.
The result kind of confirms the common wisdom that the stock market is a long term investment and is fairly volatile in the short term. If one invested in the late 1990's, say, and withdrew their investment in 5 or even 10 years, they would have lost about 4% every year after inflation. Only with investment horizons of 20 years do the returns finally stabilise.
Here are the distributions of returns from investing a lump sum into the S&P 500. This time they were taken from 10000 simulations of paths a given portfolio would follow (by randomly sampling monthly returns from the empirical returns' distribution). I also plotted the inflation-adjusted returns from holding cash (since it depreciates with inflation) for comparison.
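The sampling itself is straightforward. Here's a minimal sketch, assuming monthly_growth is an array of empirical monthly growth factors (1 + the real monthly return) taken from the index above; the same simulated distribution also feeds the "hazard curves" below:

import numpy as np

def simulate_lump_sum(monthly_growth, years, n_sims=10000, seed=0):
    # Bootstrap final portfolio values by resampling empirical monthly growth factors.
    rng = np.random.default_rng(seed)
    samples = rng.choice(monthly_growth, size=(n_sims, years * 12), replace=True)
    return samples.prod(axis=1)  # final value of each £1 invested

# e.g. the simulated probability of losing money after inflation over 20 years:
# (simulate_lump_sum(monthly_growth, 20) < 1).mean()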
What can be seen from this is that in 20 or 30 years, it seems possible to double or quadruple one's purchasing power.
I also plotted a set of "hazard curves" from those sampled distributions. Those are the simulated probabilities of getting less than a given return depending on the investment horizon. For example, there's a 30% chance of getting a return of less than 0% after inflation (losing money) for a 1 year investment and this chance drops down to about 5% for a 20 year investment. Conversely, over one year there's essentially a 100% chance of getting a return of less than 300% (that is, of not quadrupling the investment), but after 20 years that chance drops to about 50%.
Dollar cost averaging (DCA) is basically investing the same amount of money at given intervals of time. Naturally, such an investment technique results in one purchasing more of a stock when it's "cheap" and less when it's "expensive", but "cheap" and "expensive" are relative and kind of meaningless in this case.
DCA is usually considered an alternative to lump sum investing, but for a person who is investing, say, a fraction of their paycheck every month it's basically the default option.
I did some similar simulations of dollar cost averaging over a given horizon against investing the same sum instantly.
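The DCA version is a small variation on the lump-sum sketch above (again using the hypothetical monthly_growth factors): each monthly contribution only compounds from the month it's made.

import numpy as np

def simulate_dca(monthly_growth, years, n_sims=10000, seed=0):
    # Final value of investing £1 at the start of every month, divided by the
    # total amount invested, so 1.0 means breaking even after inflation.
    rng = np.random.default_rng(seed)
    months = years * 12
    samples = rng.choice(monthly_growth, size=(n_sims, months), replace=True)
    # The contribution made in month m compounds over months m..end only,
    # i.e. by the product of the growth factors from that month onwards.
    suffix_products = np.cumprod(samples[:, ::-1], axis=1)[:, ::-1]
    return suffix_products.sum(axis=1) / months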
Unsurprisingly, DCA returns less than lump sum investment in this case. This is because the uninvested cash depreciates with inflation, as well as because the average return of the stock market is positive and hence most of the cash that's invested later misses out on those gains.
DCA would do better in a declining market (since, conversely, most cash would miss out on stock market losses), but if one can consistently predict whether the market is going to rise or decline, they can probably use that skill for more than just dollar cost averaging.
In my tests, dollar cost averaging over 20 years gave a very similar distribution to investing a lump sum for 9 years. Essentially, returns of DCA follow those of investing a lump sum for a shorter period of time.
Finally, here are the "hazard curves" for dollar cost averaging.
After a year of monthly investing we'd have an almost 40% chance of losing money after inflation and even after 20 years we still have a 10% chance. After 20 years, doubling our purchasing power is still a coin toss.
What we have gleaned from this is that the long-term yearly return from the stock market is about 7% after inflation (but before taxes), assuming one invests over 20-30 years. Compounded, that maybe results in quadrupling of one's purchasing power (less if one invests a fraction monthly, and even less with dividend and capital gains taxes).
While doubling or trebling one's purchasing power definitely sounds impressive (especially whilst doing only a few hours of work a year!), it doesn't really feel big if you look into it. Let's say you're about to retire and have saved up $1 million (adjusted for inflation) that you weren't investing. If you were putting a fraction of that money monthly into the stock market instead you would have had $2-3 million, say. But you can live as comfortably off of the interest on $1 million as you would on $2-3 million.
And on the contrary, if you have only saved $100k, dollar cost averaging would have yielded you $300k instead, with the interest on both amounts (which in retirement has to be risk-free or almost risk-free) being fairly small.
One could argue that every little bit helps, but what I'm saying here is that the utility of wealth is non-linear. It's kind of sigmoidal, in a way. I like this graph from A Smart Bear:
source: A Smart Bear
As long as one has enough money to get up that "utility cliff" beyond which they can live off of their investments in retirement, it doesn't matter how much it is. Conversely, saving by investing into the stock market is worth it only if one is very sure that that's the thing that will get them over the line.
This possible climb up the cliff comes at a cost of locking in one's capital for a large amount of time (as short-term volatility in the stock market makes it infeasible to invest only for a few years). This lock-in leaves one completely inflexible in terms of pursuing other opportunities.
One's basically treating the stock market as some sort of a very long-term bond where they put the money in at one end and it comes out on the other side, slightly inflated. There was also an implicit assumption in all these simulations that the future follows the same pattern as the past.
Returns only become consistent after 30 years with dollar cost averaging. Someone who started investing a fraction of their savings into the general stock market 30 years ago would have gotten a 5-7% annualized return. Could they have predicted this? Probably. But could they have predicted the rise of the Web, the smartphones, the dotcom boom and crash? Probably not.
I'm not saying that whatever extra money one has should be invested into Bitcoin, a friend's startup or something else. Money can also be used to buy time off (to get better acquainted with some new development or technology) or education. Is using it to buy chunks of large corporations really the best we can do?
I also like the idea of "barbell investing", coined by Nassim Nicholas Taleb: someone should have two classes of investments. The first one is low-risk and low-return, like bonds or even cash. The second one is "moonshots", aggressive high-risk, low-downside, high-upside investments. Things like the stock market are considered to be neither here nor there, mediocre-return investments that might bear some large hidden downside risks.
There's an argument that one should still save up excessive amounts of money (as investments or just cash) whilst living extremely frugally so that after several years of hard work they "never have to work again" and can retire, doing what they really would have loved to do this whole time.
One of Paul Graham's essays kind of sums up what I think about it:
Conversely, the extreme version of the two-job route is dangerous because it teaches you so little about what you like. If you work hard at being a bond trader for ten years, thinking that you'll quit and write novels when you have enough money, what happens when you quit and then discover that you don't actually like writing novels?
Paul Graham, "How To Do What You Love"
Here I am, young and retired. What do I do next? Anything? How do I know what I like? Do I have any contacts that I can rely upon to help me do that "anything"? I don't feel that toiling away somewhere, being bored for a couple of decades, so that I can then quit and be bored anyway (since I hadn't learned what makes me tick), is a useful life strategy.
What about all those people who did become rich investing into the stock market? Warren Buffett is one of them, probably one of the most famous investors in the world. But he made his first couple of million (in 2018 dollars) working for Benjamin Graham, in salary and bonuses. In essence, if he wanted to, he could have retired right there and then.
Only then did he proceed to increase his wealth by investing (first running an investment partnership and so working with the partnership's money and presumably charging a performance fee, then through Berkshire Hathaway). None of these methods are available to someone with a personal investment account.
Essentially, I think that the stock market is a wealth preservation, not a wealth generation engine. Publicly-listed companies are large and stable, paying consistent and healthy dividends with the whole investment yielding a solid, inflation-beating return.
But for me? I think it's important to stay flexible and keep my options open. If someone offers me an opportunity somewhere, I don't want to say "Sorry, I have a mortgage and the rest of my money is invested in healthy companies with solid, inflation-beating returns that I can't currently sell because it's in drawdown and would you look at the tax advantages". I want to be able to say "Sure, let me give a month's notice to my landlord. Where are we going?"
All of humanity's problems stem from man's inability to sit quietly in a room alone.
—Blaise Pascal
All too often, in online and offline discourse, when I (or I see someone else) voice a concern about some phenomenon, the argument gets shot down with something like "Your problems are first-world problems, there are people who have it (or historically had it) much worse than you" or "Well, it could always be worse. What if you didn't have (a job/a car/food/money/a romantic partner)?"
In a way, it feels like a special case of whataboutism ("Yes, X did a bad thing, but Y also did a bad thing, so how about we discuss that instead"). To myself, I used to call it "the African children fallacy" and sure, it's kind of insensitive, but I thought that it nicely references a well-known form of it ("how dare you complain about this when there are children starving in Africa?").
Recently, I started digging into it further and learned that it's called "the fallacy of relative privation" or the "not as bad as" fallacy (RationalWiki). In this essay, I want to investigate why I don't like it being used, as well as possible reasons for it getting brought up.
In a recent Hacker News discussion on "The Workplace Is Killing People and Nobody Cares", a Stanford Business School article on the harms brought by the modern work culture, this argument was deployed fairly widely: no matter what its issues are, the modern office environment (with comfortable chairs, air conditioning and mostly interesting work) is better than the life of a medieval farmer or an industrial factory worker, so we should appreciate it.
When I published one of my earlier essays, one of the points in which was that everybody commuting to work on a 9 to 5 schedule created undue strain on all sorts of infrastructure, I got a few similar responses, too ("well, try working in a Starbucks instead of a 9-5 job and see how you like it" or words to that effect).
Thing is, all these points are valid. I wouldn't want to swap my lifestyle with that of a medieval farmer, despite that by some metrics their life might have been better than mine, or live without electricity or potable water, or even work at a coffee shop.
But that doesn't imply that I want my life to stay exactly how it is. No matter whether there are people out there whose lives are better off or worse off than mine, I always want to improve my circumstances somehow and I think it's worth contemplating how things could be made better, all the time.
In the case of work, work cultures and workplace environments, as much as I do agree office workers have it pretty good, I don't think people should treat the ability to sell most of one's waking hours to someone else as the best humanity can do. It's in fact kind of elitist to suggest that our way of life is the best one and pity those who aren't striving towards it.
In its strong form, the "not as bad as" fallacy implies that nobody can improve their lives until they have made sure everybody else is going to be better off. This kind of serves as a counterpoint to Pareto improvements, where at least one individual ends up better off without making anybody worse off.
I think, partially, using it stems from the will of the speaker to rationalise what's happening to them and why they don't want to change their own situation and examine their own circumstances. It's easy to continue doing what you're doing and not taking any risks if you've seen (or imagined) how bad it can get.
As a more extreme form of this argument, it might even be an implicit desire to not see anyone in a group become better than the group, kind of an extension of a crab mentality. A villager could be told that, sure, life in the village is tough, but the neighbouring villages have it worse, so why leave? Especially if he does make it big somewhere else, comes back and makes us all look like fools.
But, more dangerously, it can also be used as a manipulation tactic by someone who affects someone else's life and wants them to come to terms with that. Consider a boss that doesn't want to give you a raise ("well, Jimmy has worked here for a decade and never asked me for one!"). Even darker, imagine a victim of domestic abuse getting told that the problems they are facing are first-world problems and at least they still have a roof over their head. Or indeed the victim telling this to themselves as a way of self-gaslighting.
Taken to its extreme, this argument invalidates any sort of technological advancement that's attempted before every country on Earth has exactly the same quality of life. Should space exploration be (or have been) postponed until all nations have achieved Western quality of life? Or do we expect innovation in one country, no matter which side of the globe it's on, to be eventually spread around the world?
I think Stoicism is a great philosophy and a way of life and I've been trying to use it in my life too. One of Stoicism's core teachings is that the best way to be happy is wanting things that one already has and valuing them. Negative visualisation is one of the tools for that: imagining how things could be worse, partially to appreciate them more, partially to plan for the case they do become worse. When used like that, Stoicism leads one to the revelation that they could be happy here and now, without relying on anything outside of their control.
Hence, the "not as bad as" argument could also be used as a way of negative visualisation.
But a large number of the Stoics whose writings have reached us were rich and famous. Seneca was a playwright and a statesman. Marcus Aurelius was an emperor. I have long tried to reconcile the fact that Stoicism seems to stop us from wanting anything with the fact that so many Stoics were of high stature.
Given that for Stoic writings to reach us, they had to have been famous in some way already, it's possible that they started using this philosophy as a way to keep the positions that they had achieved and stay where they were. However, it also could be argued that their beliefs empowered them to do what they felt was right without seeking external validation. That the recognition of their work in terms of money, fame or prestige happened as a side effect, something they didn't care about.
One of my favourite pieces of writing I reread quite often is David Heinemeier Hansson's "The Day I Became A Millionaire". Here's what I think is the best quote from it:
Barring any grand calamity, I could afford to fall off the puffy pink cloud of cash, and I’d land where I started. Back in that small 450 sq feet apartment in Copenhagen. My interests and curiosity intact. My passions as fit as ever. I traveled across a broad swath of the first world spectrum of wealth, and both ends were not only livable, but enjoyable. That was a revelation.
Note how DHH caveats this with "first world spectrum of wealth": he also credits the privileges we have, in his case, the Danish social security system, with his success.
I view Stoicism and ability to appreciate what I already have as a springboard to continuous (and continued) improvement of things within my control. It's the ability to take risks knowing that wherever you land, your life will still be pretty good. So in that respect, the "not as bad as" argument turns into "won't ever be as bad as", changing apathy into an empowering limited-downside proposition.
While appreciating the privileges that we have is a good tactic for personal happiness, I also believe that the best way to respect those privileges is to use them and do things that one wouldn't have been able to do without them. Otherwise, we're essentially squandering them.
And it's not like one's success helps just that person. Joanne Rowling wrote the first few chapters of Harry Potter whilst on benefits, another first world privilege. A couple of decades later, that series of books has sold in excess of 500 million copies worldwide and spawned a film franchise that has grossed a few billion dollars. Notwithstanding the joy that the Harry Potter series has brought to people all across the world, the tax revenue from that might well make the UK's welfare system one of the best-performing VC funds in the world.
Sure, all of humanity's problems might stem from a person's inability to sit quietly in a room alone, but so does all the progress.
Once upon a time I bought a car. I drove it straight out of the dealership and parked it on my driveway. This is a thought experiment, as youth in the UK can afford neither a car nor a driveway and I can't drive, but bear with me.
I bought a car and I drove it around for a few months or so and life was great. It did everything I wanted it to do, it never stalled and never broke down. One day, I got an e-mail from the dealership saying that they'd need to update my car. Sure, I thought, and drove there to get it "updated". They didn't let me look at what they were doing with the car, saying that they were performing some simple maintenance. Finally, I got my car back, without seeing any visible changes, and drove it back home.
This happened a few more times over the next several months. I'd get an e-mail from the dealer, drive the car back to them, have a cup of coffee or three while they were working, and then get the car back. Sometimes I'd get it back damaged somehow, with a massive dent on the side, and the dealership staff would just shrug and say that those were the manufacturer's instructions. Then a few days later, I'd get summoned again and after another update cycle the dent would be gone.
At some point, I stopped bothering and after a few missed calls from the dealership my car stopped working completely. I phoned the dealership again and sure, they said that the car wouldn't work until another update. Great. I had the car towed to the dealership and drove back without any issues.
One night I got woken up by a weird noise outside my house. I looked out the window and saw that some people in dark overalls were around my car, doing something to it. I ran out, threatening to call the police. They laughed and produced IDs from the dealership, telling me that since people were frustrated with having to update their cars so often, they'd do it for them in order to bother them less.
I sighed and went back to sleep. This continued for quite some time, with the mechanics working on my car every week or so. I'd invite them for tea and they would refuse, quoting terms of service and all that sort of thing.
One day I woke up to a car covered in ads that seemed to be related to what I was browsing last night. There wasn't anything controversial, thankfully, but it was still a bit unsettling. The dealership support staff said it was to offer me products relevant to my interests. I asked if I could take them down and got told that it was a part of the whole offering and the vehicle wouldn't work without them. So I had to drive to work surrounded by more pictures of the Trivago woman than I was comfortable with.
After a year or so, the manufacturer had innovated some more. When I got into the car and turned the ignition key, a bunch of mechanics appeared seemingly out of nowhere, and picked the car up, ready to drag it as per my instructions. Turns out, the night before they had removed the engine and all the useful parts from the vehicle, turning it into a "thin client". It was supposed to make sure that when there was an issue with the car, they could debug it centrally and not bother all their customers.
Finally, one morning I got into the car and nothing happened. Turns out, the manufacturer was acquired by a larger company last night and their service got shut down. I sat at the wheel, dreading being late for work again, and suddenly woke up.
Once upon a time I opened a new tab in Firefox on my Android phone to find out that besides a list of my most visited pages to choose from, there also was a list of things "suggested by Pocket". What the hell was Pocket, why was it suggesting things to me and, more importantly, how the hell did it get into my Firefox?
I remember when pushing code to the user's device was a big deal. You'd go to the vendor's website, download an executable, launch it, go through an InstallShield (or a NullSoft) install wizard if you were lucky and only then you would get to enjoy the new software. You'd go through the changelogs and get excited about the bugfixes or new features. And if you weren't excited, you wouldn't install the new version at all.
I remember when I went through my Linux phase and was really loving package managers. It was a breath of fresh air back then, a single apt-get upgrade updating all my software at once. And then Android and Apple smartphones came around and they had exactly the same idea of a centralized software repository. How cool was that?
I'm not sure when mobile devices started installing updates by default. I think around 2014 my Android phone would still meekly ask to update all apps and that was an opportunity for me to reestablish my power over my device and go through all the things I wasn't using anymore and delete them. In 2016, when I got a new phone, the default setting changed and I would just wake up to my device stating, "Tinder has been updated. Deal with it".
And it's been the case on the desktop too. I realised that Firefox and Chrome hadn't been pestering me about updates for quite a while now and, sure, they've been updating themselves. I like Mr Robot, but I like it in my media player, not in my browser.
It's not even about automatic updates. I could disable them, sure, but I would quickly fall behind the curve, with the services I'm accessing dropping support for my client software. In fact, it's not even about browsers. I'm pretty sure I wouldn't be able to use Skype 4 (the last decent version), that is, if I could find where to download it. As another example, I recently launched OpenIV, a GTA modding tool, at which point it told me "There's a new version available. I won't start until you update". Uuh, excuse me? Sure, I could find a way around this, but still, it's not pleasant being told that what's essentially a glorified WinRAR that was fine the night before can't run without me updating it.
(as an aside, WinRAR now seems to be monetized by having ads on its "please buy WinRAR" dialog window.)
If I go to a Web page, gone are the days of the server sending me some HTML and my browser rendering it the way I wanted. No, now the browser gets told "here's some JavaScript, go run it. Oh, here's a CDN, go download some JavaScript from there and run it, too. Oh, here's a DoubleClick ad server, go get some pictures that look like Download buttons from over there and put them over here. Also, here's the CSRF token. If you don't quote it back at me the next time, I'll tell you to go away. Oh yeah, also set this cookie. Oh, and append this hexadecimal string to the URL so that we can track who shared this link and where. The HTML? Here's your HTML. But the JavaScript is supposed to change the DOM around, so go run it. It changes your scrolling behaviour, too, so you get a cool parallax effect as you move around the page. You'll love it."
As a developer, I love web applications. If a customer is experiencing an issue, I don't need to get them to send me logs or establish which version of the software they're running. I just go to my own server logs, see what request they're making and now I am able to reproduce the issue in my own environment. And with updates now getting pushed out to users' devices automatically, there are fewer and fewer support issues which can be resolved by saying "update to the newest version" and instead I can spend time on better pursuits. Finally, I don't need to battle with WinAPI or Qt or Swing or any other GUI framework: given enough CSS and a frontend JavaScript framework du jour, things can look roughly the same on all devices.
However, that leaves users at the mercy of the vendor. What used to be code running on their own hardware is now code running on the vendor's hardware or code that the vendor tells their hardware to run. So when they end up in a place with no Internet connection or the vendor goes out of business, the service won't work at all instead of working poorly.
By the way, here's an idea on how to come up with new business ideas. Look at the most popular Google services and have a quick think about how you would write a replacement for them. When they inevitably get sunsetted, do so and reap kudos. For example, I'm currently writing a browser addon that replaces the "View Image" button from Google Image search results.
In fact, it's not just an issue with applications. Once upon a time in early 2017, I came home from work to find that my laptop, which had been on standby, had decided to turn itself on and install a Windows 10 update. The way I found that out was that the login screen had changed and it was using a different time format. And then things became even weirder as I realised that all the shortcuts on my desktop and in the Start Menu were missing. In fact, it was almost as if someone had broken into my house and reimaged my whole hard drive. Strangely enough, all the software and the data was still there, tucked away in C:\Program Files and its friends; it's just that Windows wasn't aware of it. Thankfully, running a System Restore worked and the next update didn't have these problems, but since then I stopped allowing automatic updates. Except there's no way I can figure out what a given update does anyway.
I'm really scared of my phone right now. Here it is, lying by my side, and I've no idea what's going on in its mind and what sort of information it's transferring and to where. When I was a kid and got gifted my father's old Nokia 6600, I was excited about having a fully fledged computer in my pocket, something with a real multitasking operating system that I could easily write software for. Now, I'm no longer so sure.
Imagine if we could turn this:
into this:
The first picture is a graph of how many people enter the London Underground network every minute on a weekday. The second graph is for the weekend, except slightly altered: I normalized it so that both graphs integrate to the same value. In other words, the same amount of people go through the network in the second graph as in the first graph.
Would you rather interact with the former or the latter usage pattern?
The data geek in me is fascinated at the fact that there are clear peaks in utilization at about 8:15 (this is the graph of entrances, remember) and, in the evening, at 17:10, 17:40 and 18:10. I'll probably play with this data further, since the dataset I used (an anonymized 5% sample of journeys taken on the TfL network one week in 2009) has some more cool things in it.
The Holden Caulfield in me is infuriated at the fact that these peaks exist.
It's alarming how often society seems to hinge on people being in the same place at the same time, doing the same things. The drawbacks of this are immense: infrastructure has to be overprovisioned for any bursty load pattern, and being inside of a bursty load pattern results in higher waiting times and isn't a pleasant experience for anyone involved. Hence it's important to investigate why this happens and whether this is always required.
Have you heard of TV pickups? Whenever a popular TV programme goes on a commercial break or ends, millions of people across the UK do the same things at the same time: they turn kettles on, open refrigerator doors, flush their toilets and so on. This causes a noticeable surge in utilization of, say, electric grids and the sewage system. As a result, service providers have to provision for it by trying to predict demand. This isn't just an academic exercise: in the case of electric energy, generators can't be brought online instantly and energy can't be stored cheaply.
In the case of the Underground network, there are times on some lines where trains arrive more frequently than every two minutes (pretty much as often as they can, given that the trains have to maintain a safe distance between each other and spend some time on the platform) and yet they still are packed between 8am and 9am. Any incident, however small, like someone holding up the doors, can result in a knock-on effect, delaying the whole line massively.
Why are people doing this to themselves?
The weekend was a great invention (although Henry Ford's reason for giving his employees more time off was that they'd have nothing to do and hence start buying his own, and other businesses', goods). But does the weekend really have to happen at the same time for all people?
Some of the phenomena governing people's schedules are natural. It does get dark at night and people do need light. It gets cold in the winter and people need heating. But the Earth does not care whether it's the weekday or the weekend, a Wednesday or a Saturday. And yet somehow the society has decreed that Wednesday is a serious business day and any adult roaming the streets during daytime on that day might get weird stares.
Expanding on this, do working hours have to happen at the same time either? People naturally need rest, but what they don't naturally need is to be told when exactly they can work and rest. And in some types of work, like knowledge work, being told when to work is not necessary and even can be harmful.
In professional services, in most cases, the client doesn't care when the service is being performed. The client wants a tax return to be prepared: they don't want the tax return to only be prepared between 9 and 5. The client wants their investments to be managed: the investments don't need to only be managed between 9 and 5. And so on. Fixed work hours make no sense since it's not time the client is buying, it's the result. Knowledge work isn't predicated on people having to do it at the same time or even at a given time.
The fact that everybody has to work fixed hours hails from the Industrial age assembly line thinking (in fact, the term "line manager" is still used in the UK to refer to one's boss). If one part of the assembly line is missing, the assembly line doesn't work. Hence the management has to make sure that all parts of the assembly line have finished their sandwiches and are in place for when their shift starts. The whole shift also has to get their days off synchronously, as it can't function at all after a critical mass of people has taken the day off.
This is in no way an argument for longer working hours. If a person has exhausted their working capacity for the day, what's the point of holding them in the office until a given hour unless they're in a role that requires that? Some people work better when they have a set goal and some time to achieve it, to be used at their discretion. Some people work in bursts, where the output of one day can overshadow the rest of the week. Mandating fixed hours for knowledge workers means they aren't as efficient as they can be for their employer and further suffer from the utilization peaks that they themselves cause.
Do we still need offices? Some criticize working from home as a way for employees to slack off. But if you think your people won't work unless they're watched, maybe you're hiring the wrong people. A loss of productivity from not having someone standing over their shoulder is offset by the gain in productivity from not having someone standing over their shoulder and not working in a distracting open office environment.
A benefit of offices is that they encourage communication and sharing of ideas. It's much easier to walk up to someone and ask them something, and information travels around quicker and more naturally.
On the other hand, consider a medieval scholar. They would usually work alone, with all communication with their peers done over long-form letters. Communication used to be asynchronous and there was no way the letter would be delivered as soon as it was fired off, hence there was no expectation of getting a reply within the same hour or even the same day.
Nowadays, people are expected to respond to messages instantly, which means they have less and less uninterrupted time in which they can't be distracted.
Would you rather have a 1-hour chunk of time to do work in or 6 chunks of 10 minutes, interrupted by random phone calls, instant messenger pings and people walking up to you? The former option is much, much better if you want to do any deep work. Productivity is highly non-linear and 10 minutes of work result in better outcomes when they come after some time to ramp up. Even the anticipation that you can be interrupted can distract you and prevent you from getting into a state of flow.
Perhaps there's no need for people in the workplace to expect others to be able to instantly respond to them. In fact, slower, asynchronous communication can lead to more robust institutional memory inside of an organisation. Instead of the easy fix of tapping a colleague on the shoulder to get an answer, the worker might instead devise a solution for an issue themselves or figure it out while typing up an email, adding to the documentation and making sure fewer people have that question in the future.
Do all meetings have to happen at the same place or at the same time? Some of them do: sometimes there's no replacement for getting all stakeholders in the same room in order to come to a decision. But meetings are also a great way to waste company money, setting thousands of dollars on fire by the simple act of blocking out one hour of several people's time.
What is now a synchronous meeting (together with the flow breakage that that brings: I found that I'm more productive in a given hour if I know I don't have to go anywhere in the next hour, even though the time I'm spending is the same) could be an asynchronous e-mail chain or a set of comments on the intranet that people can get to at their discretion.
There's something mesmerising about being able to watch live coverage of an event. Instant notifications of a new development are a way to gratify yourself, feel like you've done something, get a small dopamine rush from getting another nugget of information. But in reality, not much has changed and this development will likely be insignificant in the end.
In this age, people have no time to think about their reaction: everything is knee-jerk, synchronous and instantaneous. An incident happens. Minutes later, we find out there is a suspect. Minutes later, there's a witch hunt across social media for the suspect and their family. Days later, the suspect is acquitted and there's another suspect. Information, not necessarily valuable or true, nowadays travels so fast that things can easily get out of control and anyone with Internet access can join in the madness.
There are a few billion other people out there and your brain can't process inputs from them all in real time. Hence people have to operate with abstractions. Instead of constantly receiving a stream of data that interrupts your life and ultimately doesn't add anything to it, why not go to a different abstraction level and lower the sampling rate, say, by reading a weekly newsletter instead?
If you had an investment portfolio, would you act based on looking at its performance every hour or every day? Or would you instead be aware that all the noise from the daily developments will probably cancel itself out and turn into a clearer picture of what's happened?
A friend of mine works in a role where she needs to interact with offices in other countries that don't maintain UK bank holidays. What her employer does is increase her holiday allowance instead, making bank holidays a normal working day. I think this is amazing. The time when most people are away on holiday is the best time to get some work done in the office and the time when most people are in the office is the best time to go shopping, visit doctors, go to a museum and do all other sorts of other life admin things.
From a cultural point of view, public holidays are amazing. From a logistical point of view, they're a nightmare. If everybody is having a holiday, nobody is, and the fact that everyone is observing the holiday at the same time yet again creates usage peaks in all sorts of places.
For example, synchronous buying of presents means that retailers have to overstock their wares in the run-up to major holidays, say, Christmas, and, even worse, have to offload all the Baylis and Harding soap sets at fire sale prices starting at about dinnertime on the 24th of December. Hilariously, the best time to shop for Christmas presents is in January.
And most holidays are taken around these times too. People in the UK get fined and can even get prosecuted for taking their children on holiday during term time. The fine is usually less than the difference in price for airline tickets and accommodation between taking a holiday during term time and outside of term time, but that price difference is just a consequence of the difference in demand between those times. There are still planes in January, but they're... emptier. And airports aren't such an unpleasant experience.
In any case, most parents are now coerced to take holidays only outside term time, which has a knock-on effect on flight/accommodation usage and prices.
I honestly don't know how to solve most of these problems. Maybe with the rise of remote work and teleconferencing this will naturally go away, moving us to a future where nobody can have a case of the Mondays any more. Some companies are embracing parts of being asynchronous already, like Basecamp (ex-37 Signals) who list the benefits of remote work and fewer meetings in REWORK.
It's more difficult on the social side. I dream of a relationship where we agree to celebrate all holidays (Christmas, Easter, Valentine's etc.) a few days later to take advantage of the trough in the demand that comes after the peak. In addition, to be efficient, education indeed has to be synchronous: one teacher educates multiple children and any of them skipping material will result in them having to catch up, delaying the whole year. I had a chat about this with a friend once: what if education were much more granular, with children (or their parents) being able to pick and choose when their child takes a given class? Staggered school shifts, perhaps?
Previously on project Betfair, we gave up on market making and decided to move on to a simpler Betfair systematic trading idea that seemed to work.
Today on project Betfair, we'll trade it live and find out it doesn't really work.
With the timezone bug now fixed, I was ready to let my bot actually trade some greyhound races for a while. I started trading with the following parameters:
I decided to pick the favourite closer to the trade since I thought that if it were picked 5 minutes before the race start, it could change and there was no real point in locking it in so long before the actual trade. The 60s exit point was mostly to give me a margin of safety to liquidate the position manually if something happened, as well as in case of the market going in-play before the advertised time (there's no in-play trading in greyhound markets, so I'd carry any exposure into the race itself; at that point, it would become gambling and not trading).
So how did it do?
Well, badly for now. Over the course of 25 races, in 4 hours, it lost £5.93: an average of -£0.24 per race with a standard deviation of £0.42. That was kind of sad and slightly suspicious too: according to the fairly sizeable dataset I had backtested this strategy on, its return was, on average, £0.07 with a standard deviation of £0.68.
I took the empirical CDF of the backtested distribution of returns, and according to it, getting a return of less than -£5.93 with 25 samples had a probability of 1.2%. So something clearly was going wrong, either with my backtest or with the simulation.
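One way to estimate a probability like that is to resample 25-race totals from the backtested distribution (backtest_returns here being a hypothetical array of per-race backtested returns); a sketch:

import numpy as np

def prob_total_below(backtest_returns, threshold, n_races=25, n_sims=100000, seed=0):
    # Probability that the total return over n_races races, drawn from the
    # backtested per-race distribution, ends up below the observed threshold.
    rng = np.random.default_rng(seed)
    samples = rng.choice(backtest_returns, size=(n_sims, n_races), replace=True)
    return (samples.sum(axis=1) < threshold).mean()

# e.g. prob_total_below(backtest_returns, -5.93)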
I scraped the stream market data out of the bot's logs and ran the backtest just on those races. Interestingly, it predicted a return of -£2.70. What was going wrong? I also scraped the traded runners, the entry and the exit prices from the logs and from the simulation to compare. They didn't match! A few times the runner that the bot was trading was different, but fairly often the entry odds that the bot got were lower (recall that the bot wants to be long the market, so entering at lower odds (higher price/implied probability) is worse for it). Interestingly, there was almost no mismatch in the exit price: the bot would manage to close its position in one trade without issues.
After looking at the price charts for a few races, I couldn't understand what was going wrong. The price wasn't swinging wildly to fool the bot into placing orders at lower odds: in fact, the price 160s before the race start was just different from what the bot was requesting.
Turns out, it was yet another dumb mistake: the bot was starting to trade 150s before the race start and pick the favourite at that point as well. Simulating what the bot did indeed brought the backtest's PnL (on just those markets) within a few pence from the realised PnL.
So that was weird: moving the start time by 10 seconds doubled the loss on that dataset (by bringing it from -£2.70 to -£5.93).
There was another issue, though: the greyhound markets aren't that liquid.
While there is about £10000-£15000 worth of bets available to match against in an average greyhound race, this also includes silly bets (like offering to lay at 1000.0).
To demonstrate this better, I added market impact to the backtester: even assuming that the entry bet gets matched 160s before the race (which becomes more difficult to believe at higher bet amounts, given that the average total matched volume by that point is around £100), closing the bet might not be possible to do completely at one odds level: what if there isn't enough capacity available at that level and we have to place another lay bet at higher odds?
Here's some code that simulates that:
def get_long_return(lines, init_cash, startT, endT, suspend_time,
                    market_impact=True, cross_spread=False):
    # lines is a list of tuples: (timestamp, available_to_back,
    # available_to_lay, total_traded)
    # available to back/lay/traded are dictionaries
    # of odds -> availability at that level

    # Get start/end availabilities
    start = get_line_at(lines, suspend_time, startT)
    end = get_line_at(lines, suspend_time, endT)

    # Calculate the inventory
    # If we cross the spread, use the best back odds, otherwise assume we get
    # executed at the best lay
    if cross_spread:
        exp = init_cash * max(start[1])
    else:
        exp = init_cash * min(start[2])

    # Simulate us trying to sell the contracts at the end
    final_cash = 0.
    for end_odds, end_avail in sorted(end[2].iteritems()):
        # How much inventory were we able to offload at this odds level?
        # If we don't simulate market impact, assume all of it.
        mexp = min(end_odds * end_avail, exp) if market_impact else exp
        exp -= mexp
        final_cash += mexp / end_odds
        # If we have managed to sell all contracts, return the final PnL.
        if exp < 1e-6:
            return final_cash - init_cash

    # If we got to here, we've managed to knock out all price levels
    # in the book.
    return final_cash - init_cash
I then did several simulations of the strategy at different bet sizes.
Turns out, as we increase the bet size away from just £1, the PnL quickly decays (the vertical lines are the error bars, not the standard deviations). For example, at bet size of £20, the average return per race is just £0.30 with a standard deviation of about £3.00 and a standard error of £0.17.
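A sketch of that sweep, reusing get_long_return from above (markets being a hypothetical list of (lines, suspend_time) pairs, one per race, and the default entry/exit times matching the strategy):

import numpy as np

def sweep_bet_sizes(markets, bet_sizes, startT=160, endT=60):
    # Per-race mean return, standard deviation and standard error for each bet size.
    results = {}
    for size in bet_sizes:
        returns = np.array([get_long_return(lines, size, startT, endT, suspend_time)
                            for lines, suspend_time in markets])
        results[size] = (returns.mean(), returns.std(),
                         returns.std() / np.sqrt(len(returns)))
    return results

# e.g. sweep_bet_sizes(markets, [1, 2, 5, 10, 20, 50])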
At that point, I had finally managed to update my new non-order-book simulator so that it could work on horse racing data, which was great, since horse markets were much more preferable to greyhound ones: they were more liquid and there was much less opportunity for a single actor to manipulate prices. Hence there would be more capacity for larger bet sizes.
In addition, given that the spreads in horses are much tighter, I wasn't worried about having a bias in my backtests (the greyhound one assumes we can get executed at the best lay, but most of its PnL could have come from the massive back-lay spread at 160s before the race, despite that I limited the markets in the backtest to those with spreads below 5 ticks).
I backtested a similar strategy on horse data but, interestingly enough, it didn't work: the average return was well within the standard error from 0.
However, flipping the desired position (instead of going long the favourite, betting against it) resulted in a curve similar to the one for the greyhound strategy. In essence, it seemed as if there was an upwards drift in the odds on the favourite in the final minutes before the race. Interestingly, I can't reproduce those results with the current, much larger, dataset that I've gathered (even if I limit the data to what I had at that point), so the following results might not be as exciting.
The headline number, according to my notes at that time, was that, on average, with £20 lay bets, entering at 300 seconds before the race and exiting at 130 seconds, the return was £0.046 with a standard error of £0.030 and a standard deviation of £0.588. This seems like very little, but the £20 bet size would be just a start. In addition, there are about 100 races every day (UK + US), hence annualizing that would result in a mean of £1690 and a standard deviation of £112.
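The annualisation here is just scaling, assuming per-race returns are independent: the mean grows linearly with the number of races and the standard deviation with its square root.

import math

races_per_year = 100 * 365
mean_per_race, std_per_race = 0.046, 0.588
annual_mean = mean_per_race * races_per_year           # close to the £1690 above
annual_std = std_per_race * math.sqrt(races_per_year)  # close to the £112 above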
This looked nice (barring the unrealistic Sharpe ratio of 15), but the issue was that it didn't scale well: at bet sizes of £100, the annualized mean/standard deviation would be £5020 and £570, respectively, and it would get worse further on.
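To double-check those annualized figures, here's the arithmetic, assuming roughly 100 independent races a day, every day of the year:

from math import sqrt

mean_per_race, std_per_race = 0.046, 0.588
races = 100 * 365                        # races per day * days per year

annual_mean = mean_per_race * races      # roughly the £1690 quoted above
annual_std = std_per_race * sqrt(races)  # roughly £112, assuming independent races
print(annual_mean, annual_std, annual_mean / annual_std)  # Sharpe ratio of about 15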
I also found out that, at £100 bet sizes, limiting the markets to just those operating between 12pm and 7pm (essentially just the UK ones) gave better results, even though the strategy would only be able to trade 30 races per day. The mean/standard deviation were £4220 and £310, respectively: a smaller average return and a considerably smaller standard deviation. This was because the US markets were generally thinner and the strategy would crash through several price levels in the book to liquidate its position.
Note this was also using bet sizes and not exposures: so to place a lay of £100 at, say, 4.0, I would have to risk £300. I didn't go into detailed modelling of how much money I would need deposited to be able to trade this for a few days, but in any case I wasn't ready to trade with stakes that large.
One of the big issues with live trading the greyhound DAFBot was the fact that the bot can't place orders below £2. Even if it offers to buy (back), say, £10 at 2.0, only £2 of its offering could actually get matched. After that point, the odds could go to, say, 2.5, and the bot would now have to place a lay bet of £2 * 2.0 / 2.5 = £1.6 to close its position.
If it doesn't do that, it would have a 4-contract exposure to the runner that it paid £2 for (will get £4 if the runner wins for a total PnL of £2 or £0 if the runner doesn't win for a total PnL of -£2).
If it instead places a £2 lay on the runner, it will have a short exposure of 2 * 2.0 - 2 * 2.5 = -1 contract (in essence, it first has bought 4 contracts for £2 and now has sold 5 contracts for £2: if the runner wins, it will lose £1, and if the runner loses, it will win nothing). In any case, it can't completely close its position.
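To make that arithmetic concrete, here's a tiny sketch using the same convention as above (a stake of S at odds O corresponds to S * O contracts):

def net_contracts(backs, lays):
    # backs/lays are lists of (stake, odds) pairs; a back bet buys
    # stake * odds contracts, a lay bet sells stake * odds contracts.
    return sum(s * o for s, o in backs) - sum(s * o for s, o in lays)

print(net_contracts([(2, 2.0)], []))            # 4: the original long exposure
print(net_contracts([(2, 2.0)], [(1.6, 2.5)]))  # 0: a £1.60 lay at 2.5 would flatten it, but is below the £2 minimum
print(net_contracts([(2, 2.0)], [(2, 2.5)]))    # -1: a £2 lay at 2.5 overshoots into a short position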
So that's suboptimal. Luckily, Betfair documents a loophole in the order placement mechanism that can be used to place orders below £2. They do say that it should only be used to close positions and not for normal trading (otherwise people would be trading with £0.01 amounts), but that's exactly our use case here.
The way it's done is:
I started a week of fully automated live trading on 2nd October. That was before I implemented placing bets below £2 and the bot kind of got lucky on a few markets: it was unable to close its short exposure fully, but the runner it was betting against ended up losing anyway. That was nice, but not exactly intended. I also changed the bot to place bets based on a target exposure of 10 contracts (as opposed to stakes of £10, hence the bet size would be 10 / odds).
In total, the bot made £3.60 on that day after trading 35 races.
Things quickly went downhill after I implemented order placement below £2:
In total, the bot lost £9.27 over 172 races, which is about £0.054 per race. Looking at Betfair, the bot had made 395 bets (entry and exit, as well as additional exit bets at lower odds levels when there wasn't enough available at one level) with stakes totalling £1409.26. Of course, it wasn't risking more than £15 at any point, but turning over that much money without issues was still impressive.
What wasn't impressive was that it consistently lost money, contrary to the backtest.
At that point, I was slowly growing tired of Betfair. I played with some more ideas that I might write about later, but in total I had been at it for about 2.5 months and had another interesting project in mind. But project Betfair for now had to go on an indefinite hiatus.
To be continued...
Enjoyed this series? I do write about other things too, sometimes. Feel free to follow me on twitter.com/mildbyte or on this RSS feed! Alternatively, register on Kimonote to receive new posts in your e-mail inbox.
Interested in this blogging platform? It's called Kimonote, it focuses on minimalism, ease of navigation and control over what content a user follows. Try the demo here or the beta here and/or follow it on Twitter as well at twitter.com/kimonote!
This was a weird year. In the world, we experienced the first year of a Trump presidency and Britain's first attempts at trying to leave the European Union. On my side, I left my first job in the first half of this year and spent the second one dicking around, eventually starting Kimonote (pleasetrythebetaitsreallygood) as well as trying to trade on Betfair, settling on writing blog posts about it instead (those who can, do etc).
The genre of this year for me was post-punk. Let's say punk is this angry teenager that feels like there's something wrong with the world but can't exactly express it, so they instead spend most of their time screaming in angst. Post-punk is what happens when the teenager grows up in a certain way, not letting go of their hatred for those who subjugated them but instead subduing it, making it more sophisticated, knowing exactly when to strike and what to say.
If Richard Branson on a boat together with Sex Pistols in 1977 blasting out "God Save The Queen" opposite the Houses of Parliament is punk, then Richard Branson in 2017 suing the NHS is post-punk.
Here is what I think are the 24 best albums I listened to for the first time this year. They don't really map to months nicely (and in fact a majority of them come from the first half of the year), but 24 was a good balance between "I can't write anything about this album, why is it here?" and "How come this one doesn't make the list?"
Let's begin, shall we?
Ride (1991)
Shoegaze, Dream Pop, Neo-Psychedelia
RYM link
She knew she was able to fly,
Because when she came down,
She had dust on her hands from the sky,
She felt so high, the dust made her cry.
It's shoegazey. Given that I didn't like Loveless that much (I know, blasphemy, but seriously, it always makes me feel like something's gone wrong with my earphones and one channel is louder than the other one) and Slowdive's Souvlaki has already made a best-albums-of list, this is another one of those.
It's OK.
Let's stay here for a while,
Eyes so round and bright we gently smile.
Live for the moment, not the past,
Why do we always fall so fast
Notable track: Polar Bear
Love and Rockets (1986)
Alternative Rock, Neo-Psychedelia, Post-Punk
RYM link
All expenses paid, courtesy of NASA
Thank you, Mr. President for my holiday, Sir
I couldn't really say that I wish you were here
But thank you all the same, Sir
Not even sure how it made it here. The album is worth listening to only for the psychedelic undertones in "Kundalini Express" that reminded me of Rush's "Passage to Bangkok", a song about travelling the world and smoking everything you can lay your hands on. Oh, actually, the sheer melancholy of the "All In My Mind" at the end (but the acoustic version, since it's in a minor scale) makes it one of my favourite songs of this year. Hmm, the intro to "Holiday On The Moon" could be suited to a slo-mo scene in some action movie where they're, like, driving in a car to a meetup knowing that they won't make it out alive. Oh, and I think "It Could Be Sunshine" is catchy.
I guess I now know how.
You want to rip all the jewels off all the idiots' backs so badly
You scream, "Give me what I've always missed, give me a good time!"
But if you look into your mirror, you'll see that nobody
Has ever ripped you off, it's all in your mind
Notable track: Holiday on the Moon
Wall of Voodoo (1982)
New Wave, Post-Punk
RYM link
Driving out of Vegas in their automobile
She was in the backseat while he was at the wheel
With the windows wide open
All the money from the store they'd gambled away
Of course I got to this album by accidentally leaving Youtube autoplay for too long and getting to listen to "Mexican Radio", which was their one-hit wonder? No, they were a one-hit wonder and "Mexican Radio" was their one hit?
Anyway, I was listening to this song and thought it was kind of cool, a mixture of twangy Tex-Mex guitars and synthesisers, something that could potentially fit as a soundtrack to Cowboy Bebop or Firefly.
And then I heard the singer's voice, which I could recognize anywhere. Turns out, before Stan Ridgway had a solo career and produced things like "Camouflage" and "The Big Heat", he had a career with a band and released a couple of albums. "Call of the West" is one of them, and it's not all synthpop and rainbows. From "Tomorrow" (a song about procrastination) to "Lost Weekend" (about a couple that gambles all their savings away every weekend), it's about the flip side of the American Dream, a set of stories about trying to find a better life and not managing to.
Now I've brought the same piece of chicken in a bag to work everyday
For the last twenty years or so
And I really don't mind, work assembly line
Got an intercom blasting the news and the latest on the baseball scores
Come around every Friday, well I get a paycheck
Take the same road home that I come to work on, heck, it's a living
Notable track: Lost Weekend
Nick Drake (1972)
Contemporary Folk, Singer/Songwriter, Folk Baroque
RYM link
I saw it written and I saw it say
Pink moon is on its way
And none of you stand so tall
Pink moon gonna get ye all
Like many artists, Nick Drake had made a career-limiting move of getting a Third from Fitzwilliam College, Cambridge, which led to his work not being discovered until he started being quoted as an influence by R.E.M. and until one track was featured in a Volkswagen advert in 1999, at which point it was too late since he had died in 1974.
The album? The album is very peaceful. There's only Nick and his guitar and, for 10 seconds, a piano, but the songs are still quite varied. It's very summery, somehow. It reminds me of a road trip in the Midwest that I never went on.
Lifting the mask from a local clown, feeling down like him
Seeing the light in a station bar, and travelling far in sin
Sailing downstairs to the Northern Line, watching the shine of the shoes
Hearing the trials of the people there, who's to care if they lose?
Notable track: Pink Moon
Martin Dupont (1985)
Minimal Synth, Coldwave, New Wave
RYM link
I met the beast from the end of century
With its Fu Manchu mustache
Barbie dolls whispering in the lagoon
Physically sick every time they kiss
This is exactly what I imagined French Kraftwerk would sound like. Still cold, still with sometimes overly experimental rhythmic patterns, still with few (and not making much sense) words, but much, much more energetic and with more female vocals.
It also reminds me of Simple Minds' Empires and Dance (which also had made a best-of list at some point). In fact, it was released in the same year. And the vocals are similar too. And Marseille is close to Glasgow. I mean, compare the first 30 seconds of Martin Dupont's Hunted and Simple Minds' I Travel. It makes me wonder why I never saw Alain Seghir and Jim Kerr in the same room.
The only issue is that the album is way too short. But don't worry, there will be more Martin Dupont later on.
Not waiting for tomorrow...
Notable track: I Met The Beast
Aphex Twin (1992)
Ambient Techno, IDM, Ambient
RYM link
pew pew pew boom pew-pew pew pew pew boom
What can I say? It's ambient techno. It's really good ambient techno. If you're having trouble doing any sort of vaguely menial work (like collating this list), put it on and watch time fly. I don't actually know or remember or care when individual tracks begin and end on this album. Apparently, there's even a track called "i" on there. How cool is that?
beep boop-boop (pow) beep-beep boom-beep (pow)
Notable track: Heliosphan
AIR (1998)
Downtempo, Ambient Pop
RYM link
Où sont tes héros
Au corps d'athlète?
Où sont tes idoles
Mal rasés, bien habillés?
If I ever own an elevator, "La Femme D'Argent" will be playing there on repeat. In addition, as per its music video that collates some behind-the-scenes stories about the making of the album:
"For a video of ['All I Need'], Mike Mills decided to shoot the story of a real couple in Ventura, California. Unfortunately, they broke up since..."
Uuh, where was I. Oh yes. It's a quite good electronic ambient thing. Probably not all of it is suitable for playing in elevators, especially not Sexy Boy. Although the reason I know of this album is because I heard Sexy Boy in Tommi's Burger Joint on Thayer Street in Marylebone. It's really good! I especially recommend their Offer of the Century, which is basically a meal deal with beer. The burger is amazing and you get a pick of like twenty different sauces, pickles and peppers. Well worth the money.
The reason this is not higher is because I never managed to get into the second half of the album.
Kelly, watch the stars
Kelly, watch the stars
Kelly, watch the stars
Kelly, watch the stars
Notable track: Sexy Boy
The Replacements (1987)
Alternative Rock, Power Pop
RYM link
Priest kneels silent, all is still
Policeman reaches from the sill
Watch him try to try his best
There'll be no medal pinned to his chest
I found this one like I did quite a lot of albums: from a TV series. In particular, "The Ledge" was featured in Billions (which is weird: the song is about a suicide. I guess the showrunners really liked the lyrics "I'm the boy they can't ignore"). And the rest of the album has some fairly cool songs in it too.
Runnin' 'round the house, Mickey Mouse and the Tarot cards.
Falling asleep with a flop pop video on.
If he was from Venus, would he meet us on the moon?
If he died in Memphis, then that'd be cool, babe.
Notable track: Alex Chilton
Heart (1976)
Pop Rock, Rock
RYM link
Heading out this morning into the sun
Riding on the diamond waves, little darlin' one
Warm wind caress her, her lover it seems
Oh, Annie, dreamboat Annie, little ship of dreams
Yes, it's kind of too lighthearted for this list. But Ann Wilson's vocals! And Nancy Wilson's acoustic guitar! And the harmonics! And the seventies! And silk shirts! And a guitarist that looks like Luke Skywalker!
I still don't get why there are several versions of "Dreamboat Annie" on this, but I'm not complaining. They're all good. All three of them.
I was a willow last night in my dream
I bent down over a clear running stream
Sang you the song that I heard up above
And you kept me alive with your sweet flowing love
Notable track: Crazy On You
The KLF (1990)
Ambient, Field Recordings, Plunderphonics, Ambient House
RYM link
For the next thirty minutes I'm going to give you a special phone number where you can call me so that I can send you a special gift this week. Get your paper and pen ready. Now, here's the service already in progress...
This album reminds me of Primal Scream's Screamadelica (which was later parodied/covered/adapted by Alabama 3, the first song from which, Woke Up This Morning, was used in the opening of Sopranos, which is a great TV series. None of these facts are relevant here.): it's essentially a concept album about a long night out.
Well, in this case, it's a concept album that follows the protagonist on a night-time drive from Texas to Louisiana. And it fully reflects the atmosphere of being alone on the road in the night, with only random news broadcasts, late-night infomercials and echoes of club music in your mind to accompany you, overlayed on top of dreamy Pink Floyd-like guitars and, in one track, an Elvis Presley song.
Seventeen-year-old Jack Acksadapo was driving home to Belmore last night after finishing work at his father's Lindencrest diner in Lindenhurst. According to Nassau homicide sergeant John Nolan, witnesses saw Acksadapo drag racing with another car along Merrick Road in Wantagh. Nolan says the young man lost control and slid into a row of stores. His body was pulled from the car by a passing motorist after which the car, in flames...
Notable track: Madrugada Eterna
Manuel Göttsching (1984)
Progressive Electronic
RYM link
beep beep boop-boop beep
Unlike Aphex Twin's album, this one is basically a single track (well, there is a tracklist and the whole recording depicts a chess game, but they blur into one glorious piece) that starts with a simple synthesizer pattern which evolves to be more and more complex, with more and more instruments and effects layered on top of it. Think Brian Eno's "Music for Airports", but much, much more lively and not for airports and with the second half introducing some fairly neat guitar solos.
beep beep whoom whoom
Notable track: Queen A Pawn
Solid Space (1982)
Minimal Synth, Minimal Wave
RYM link
See them getting desperate, see their logic fade
See them panic and whisper, see their edges fray
And now their planet burns up into thin air
We knew we'd kill them anyway, we didn't really care
I have this thing where I strongly associate some albums with certain periods of my life. Mazzy Star's "So Tonight That I Might See", for example, is associated with me waking up one Sunday and taking a Metropolitan Line train from Baker Street to Finchley Road in order to go to Homebase near there and buy some Rentokill.
This one is associated with me trying to defrost my freezer, which basically had filled itself with a massive block of ice. At first, I thought I would just turn the temperature down a bit, but sadly the temperature control had ice over it as well. So one day, when there wasn't much food in the fridge, I unplugged it completely, covered the surroundings with some towels and pans and the next morning woke up to a loosened block of ice.
During the next half an hour while this album was playing, I managed to dislodge it, get it out and melt it down. The only thing I found in there were the ice trays.
Oh yes, the album. It's a collection of fairly short, melancholic and diverse songs with minimalist tasteful lo-fi synthesisers and speech samples from space-themed films and cartoons. This is pretty much what people in the 1980s imagined life in 2017 would be.
We thought that you'd all disappeared
We wondered what the Italians feared
You slept for eons in your tomb
And shaped it as a second womb
Notable track: Destination Moon
Martin Dupont (1987)
Coldwave, Minimal Synth, Synthpop, New Wave
RYM link
The time of the month is right
To feel full moons and mouths
She's pulling, she's pulling
The force of her paleness is drowning me
Welcome back to Martin Dupont. Everything from the previous album still applies.
There's something Bryan Ferry-esque as well about Alain Seghir's vocals, especially audible on the bonus tracks (like "Love On My Side". In fact, all the bonus tracks are really good), but that's where the similarity with Roxy Music ends. The lyrics are still supremely weird and the fact that in some songs they are repeated several times doesn't really help. In a way, this reminds me of Cocteau Twins, where Liz Fraser's words are mostly made up on the spot and the voice is just another instrument, not the means to tell a story.
Oh yes, here's a quote from Alain that I wanted to share:
“Just a little story about the “duponettes”: I was Catherine [Loy, vocal/synth, left the band by the time Hot Paradox was released]’s boyfriend when I started MD but when I met Brigitte [Balian, vocal/bass/guitar] I was so impressed by her voice and attitude that I became shortly her lover but Catherine was so sweet and so happy with the band that I went back with her, then she met a funny english girl that she wanted absolutely to introduce me, so I felt in love with this lovely sparkling stuff, so Catherine didn’t want to stay in the band and I married Beverley [Jane Crew, vocal/clarinet/sax] 2 months later.”
I should start a band.
I feel as if the world is falling down between my legs
I feel as if I could jump over the Berlin Wall.
Notable track: Inside Out
Sparks (2006)
Art Pop, Chamber Pop
RYM link
All I do now is dick around
When the sun goes up and the moon goes down
When the leaves are green and the leaves are brown,
All I do now is dick around.
This might be one of the weirdest albums I've heard this year. Starting from a CEO who was dumped, decided to resign and is now spending his time on his "hobbies" ("Dick Around") to an organ player in Paris who picked up that job just to pick up young women ("As I Sit Down to Play the Organ at the Notre Dame Cathedral"), from a startlingly long list of female names and perfume brands (that don't repeat!) in "Perfume" ("Geneviève wears Dior, Margaret wears Trésor, Mary Jo wears Lauren, But you don't wear no perfume, Deborah wears Clinique, Marianne wears Mystique, Judith wears Shalimar, But you don't wear no perfume") to an ode to metaphors because "chicks dig metaphors", it's full of stories about... love? Well, an over-the-top, bombastic and cheesy interpretation of love, complete with a deadpan delivery and random sprinkles of vocal harmonisations and parts of a symphonic orchestra the Mael brothers probably kidnapped from somewhere.
This all makes you doubt what they meant, when and whether. Are half of the song names just innuendos? ("Dick Around"? "Baby, Can I Invade Your Country"? "Rock, Rock, Rock"? "Here, Kitty"? "As I Sit Down to Play the Organ..."?) I guess there are rabbits on the album cover. I should have known.
She points up to the high-wire, there a tiger stands
"Oh help me, help me, bring my tiger down, dear man"
"If you will save him, I will be yours every night"
I climb the pole and look the tiger in the eye
Notable track: Dick Around
Love (1967)
Psychedelic Pop
RYM link
By the time that I'm through singing
The bells from the schools of walls will be ringing
More confusions, blood transfusions
The news today will be the movies for tomorrow
It's about... everything? Everything at the end of the sixties, the hippie lifestyle, the Vietnam War, the omnipresent psychedelia, the cautious optimism. There is some flamenco, trumpets, trombones and an occasional overdriven guitar that sneaks up on you. I'm not even sure how to describe this album without playing it back.
I don't know if the third's the fourth or if the
The fifth's to fix
Sometimes I deal with numbers
And if you wanna count me
Count me out
Notable track: A House Is Not A Motel
The Chameleons (1986)
Post-Punk
RYM link
We have no future, we have no past
We're just drifting ghosts of glass
Brown sugar, ice in our veins
No pressure, no pain
Welcome to The Chameleons. There will be more of them in this list. This one, sadly, didn't make it too far and I almost wanted to exclude it. The reason? There are 10 tracks on this album and I kind of only loved 3 or 4 of them and liked 4-5. Fine, I was okay with most of them, but not all and there wasn't really a big overarching theme there, unlike... You know what? I won't spoil it for you. It's still an amazing album. Just keep reading.
But most of you are much too ill
Oh, way beyond a surgeon's skill
In bondage to a dollar bill
What more can you buy, buy, buy
Notable track: Swamp Thing or Soul In Isolation
Nick Cave and The Bad Seeds (1988)
Post-Punk, Gothic Rock
RYM link
It began when they come took me from my home
And put me in Death Row
Of which I am nearly wholly innocent, you know
And I'll say it again:
I am not afraid to die
This album is great if only because of The Mercy Seat. Actually, a much better move would be putting The Mercy Seat at the end of the album so that people would have to listen to everything else before getting a reward. It's not that the rest of the album is bad, it's just that The Mercy Seat is so good.
What about the rest of the album? It takes you on a tour through all the best in human depravity. There's sex, murder, armed robbery and drug use. And to contrast, it tops everything off with a cheeky upbeat song.
Sugar sugar sugar
Honey, you're so sweet
But beside you, baby,
A bad man sleeps
Notable track: The Mercy Seat
Modern English (1982)
Post-Punk, New Wave
RYM link
I'll stop the world and melt with you
You've seen the difference and it's getting better all the time
There's nothing you and I won't do
I'll stop the world and melt with you
A post-punk record that accidentally made it into big time by having a genuine song about loving someone ("I Melt With You"). So cute, right? It's like they love each other so much that they become one person and it's just...
"According to vocalist Robbie Grey, the song is about a couple having sex as nuclear bombs fall"
...oh.
Every good 1980s album has to be about the nuclear winter. It starts with "Someone's Calling" ("Turning 'round as if in flight / I sense your breath cut like a knife / A thousand shadows all in pain / What they fear must be the same") and continues with "Life in the Gladhouse" which could be argued to be about spending time in a nuclear shelter.
But despite the theme, it's kind of halfway between joyful and melancholic, with occasional interesting vocal and rhythmic experiments like here.
I stood and watched the dark sky rise
With glaring sunlight in my eyes
I thought of all the times gone by
And laughed aloud at the crimson sky
After the snow
Notable track: Someone's Calling
Nick Cave and The Bad Seeds (2016)
Art Rock
RYM link
And now she's jumping up with her leaping brain
Stepping over heaps of sleeping children
Disappearing and further up and spinning out again
Up and further up she goes, up and out of the bed
Up and out of the bed and down the hall where she stops for moment and turns and says
"Are you still here?"
And then reaches high and dangles herself like a child's dream from the rings of Saturn
This is different from Nick Cave and The Bad Seeds' previous records. Hauntingly beautiful, more ambient and synthetic, its lyrics were amended by Cave shortly after his son's death in 2015. Skeleton Tree will probably remain my favourite album of the 21st century (at least until I can muster enough courage to listen to Blackstar or You Want It Darker).
And most of it is not really... songs? Not in the normal sense of the word. It's mostly Nick's stream of consciousness, half-rapping, half-singing about loss, grief, pain and death, but instead of youthful bravado like in the band's earlier albums (like, again, Tender Prey), in this case it's the voice of a man who has experienced all these things. Or at least that's what I think. I am full of youthful bravado too. Maybe that's what all old people sound like.
They told us our gods would outlive us
They told us our dreams would outlive us
They told us our gods would outlive us
But they lied...
Notable track: Rings Of Saturn
The Sound (1981)
Post-Punk, Gothic Rock
RYM link
I was gonna drown
Then I started swimming
I was going down
Then I started winning, winning
What can I say about this? It's gothy, existential, dark. But there are almost no distorted guitars here. Just a synthesiser with some juicy bass guitars and occasional thundering percussion. Most of the songs are slow, creeping, enveloping you into a gloomy fog from which you occasionally emerge into a more or less energetic track just to realise that Adrian Borland's lyrics and vocal delivery are still there and not going away just because the song is a bit faster.
Did I say I like those drums? I want to get ones that sound throughout the beginning of "New Dark Age" so that I can bang on them all day.
Some make a quiet life
To keep this scared old world at bay
The dogs are howling on the street outside
So they close the curtains, hope they go away
Notable track: Judgement
Sad Lovers & Giants (1983)
Post-Punk, Dream Pop, Gothic Rock
RYM link
She'll swear some weak excuse to gain more time
Changing sides like friends to satisfy her quicksand ego
When life falls short again, she'll crawl away
What is going on on the album cover? I'm torn between it being a view of some mountains, a beach or a puddle with some trees reflecting in it.
It doesn't matter. This is post-punk too, but unlike The Sound's take on post-punk, this one is much more surreal. The guitars are way way more present here and so are the ethereal sound effects, and so is the occasional dissonance. And the lyrics aren't about how sad and terribly depressing everything is (despite their name), but are just about... stuff. One's about cowboys, one's about going to sleep, one's about a man of straw. There is a love song and this time it's not about dying in a nuclear apocalypse together.
Hearing a distant bell repeatedly ring
It calls for attention to your daily life routine
"I'm asking no favours, just five minutes more"
Ringing, the bell replies, "Your plea has been ignored."
Notable track: On Another Day
Nick Cave & The Bad Seeds (1994)
Post-Punk, Alternative Rock
RYM link
In my bed she cast the blizzard out
A mock sun blazed upon her head
So completely filled with light she was
Her shadow fanged and hairy and mad
The opening track of this album was pretty much one of the first songs I listened to in 2017. And what a glorious beginning to a year it was. It starts with just a bassline that, after a few bars, gives way to a mixture of organs, guitars, a piano and Nick's voice.
This album is sick. I guess it's the antithesis to that Sparks album a few rungs below. It's about love, but instead of a bombastic parody of love, it's about a different kind of love, creepy, aggressive, obsessive, compulsive, stalker-y. Visceral. It's about falling in love, getting addicted and then having your heart broken so hard you still keep finding pieces of it all over your flat.
Despair and Deception, Love’s ugly little twins
Came a-knocking on my door, I let them in
Darling, you’re the punishment for all my former sins
I let love in
Notable track: Do You Love Me?
The Chameleons (1985)
Post-Punk, Dream Pop, Gothic Rock
RYM link
With fading powers we dream of hours
That will never come again
Old defenders are themselves defenceless
When the mad attack the sane
Look at the album cover. Look at it. What does it make you think of? It's like a weird face floating in the sky. Light blue, airy. Are those horns or tape reels? Why is there a bird on one side?
This is exactly what this album sounds like. Like a light blue airy benevolent face in the sky made of tape reels and lines with a bird on one side. The rough and desperate voice of Mark Burgess, the vocalist and songwriter, is overlaid on top of an interplay of two guitars, one doing familiar power chord progressions, and the other going completely mad with sound effects, delays, echoes, reverberation and panning. If you ever wondered what the clouds around that face sound like, that's what.
And if you're not into floating on a cloud of sound, you can always listen to the lyrics, and they are almost always non-spiritual. From "Intrigue in Tangiers" (about Mark's visits to a lonely former seaman in a retirement home) to "Singing Rule Britannia (While The Walls Close In)" (about "those people, at the time that included [Mark's] father, that would go on and on about how great The Conservative Party was, while their lives and the country in general were disintegrating around them." (hey, which modern phenomenon does that remind you of?)), from "Perfume Garden" (about the British school system) to "One Flesh" (about the subjugation of women), almost all of them are grounded in reality.
Endless emptiness, endless ringing bells
I couldn't show you but I'd hoped to one day
A pretty promise to teach the tender child
To welcome madness every Monday
Notable track: Perfume Garden
The Chameleons (1983)
Post-Punk, Gothic Rock, Dream Pop
RYM link
"In his autumn, before the winter, comes man's last mad surge of youth."
"What on earth are you talking about?"
I said to a coworker once that The Chameleons are "as if Cocteau Twins and Joy Division had a baby". It actually applies to "What Does Anything Mean? Basically" more than here, with ethereal tunes supporting dark lyrics. "Script of the Bridge", their first album, is more subdued while at the same time more energetic and aggressive. To be honest, I had a very hard time determining which one to put up first and looking at Last.fm didn't help either, as they both always stay within a few listens away from each other.
But this one is more coherent, I think, and more listenable as a whole piece. It's about losing youth and innocence, and the trip starts with that quote above (sampled from the film "Two Sisters from Boston") in "Don't Fall", continues with the most powerful riff in the world and then segues into "Here Today", which is believed to be about John Lennon's shooting ("Don't know what happened but somebody lost their mind tonight / Not sure what happened but I don't think I got home tonight / There's blood on my shirt").
And so on. There's a song about nuclear disarmament (Up the Down Escalator, "Now they can erase us at the flick of a switch"), a song that reminds me of Pink Floyd's "Have A Cigar" ("High as you can go, Lennon to Munroe / Clawed their way to the stars / I think they knew / And I don't care who you are, just sign the line and away you fly") and a song about immortality and near-death experiences.
Each track is wonderfully diverse and at the same time enveloped in the same dreamy fog made of chorused guitars and delay pedals. And finally, the experience ends with my favourite song in the whole album.
Who cares,
Just wait until your time comes round again,
Again...
Notable track: View From A Hill
While I'm writer's-blocked on continuing with project Betfair (well, more like blocked on not being able to reproduce my old research results in order to fit the narrative I thought I had), here's a quick story from the development of Kimonote.
I got stuck following this guide on how to startup:
How to startup:
- Set up a landing page to collect e-mail addresses
- ??????
- ??????
and had figured out that the next step after collecting e-mail addresses was sending something to them. Except the issue was that I also wanted to make sure most e-mail providers didn't mark my mail as spam and it actually got delivered.
So here's a quick checklist on what to do in order to achieve that. This guide assumes that you have a dedicated IP address and a domain name whose records you can edit.
First, you'll need a mail transfer agent: Exim. Or Postfix. This is the server that will run on your machine and, when a client asks it to send mail, will do so.
The following commands are all executed as root, assuming a Debian-like system (say, Ubuntu). Install exim:
apt-get install exim4
I used the "Single configuration file" option during the installation: the only change I needed to make to it was disabling IPv6. To do that, add disable_ipv6=true
under "Main configuration settings" in /etc/exim4/exim4.conf.template
and reload the config with /etc/init.d/exim4 reload
.
Try sending a test message:
echo "testing" | mail -s Testing (your personal email address)
You should get a message in your spam folder from root@(hostname, which is probably not your domain name).
Try again, this time setting the from address:
echo "testing" | mail -s Testing -aFrom:test@(your domain name) (your personal email address)
You should get an email from test@(domain name), still in your spam folder (if you're lucky). If you don't, check the exim logs at /var/log/exim4/mainlog.
An SPF record is the first step on the route towards not getting your emails sent to the spam folder. In the world of email, anyone can pretend to be anything. I could right now use the exim instance running on my server to send an email from support@microsoft.com to (my best friend)@gmail.com claiming to be Microsoft. The only thing that's stopping me is that the recipient's email service will perform a DNS query to check microsoft.com's SPF records in order to see who's authorised to send emails on behalf of microsoft.com, thus rejecting my email.
The simplest SPF record is:
v=spf1 a ~all
This record is read left-to-right and says: "if the IP you're getting mail from is in my A record, then it's OK; otherwise, treat the email with suspicion" (~all is a "softfail"; -all would ask receivers to reject outright).
Add that as a TXT record in your domain management panel with Host=@. In the case of Namecheap with e-mail forwarding on, I had to instead edit the TXT record under Mail Settings and add an "a" mechanism between "v=spf1" and "include:spf.efwd.registrar-servers.com ~all" (the latter says "any IPs that are in the SPF record of efwd.registrar-servers.com" are fine too).
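If you'd rather check what the world sees than wait for a test email to arrive, something like this works (this assumes dnspython is installed; it's not part of the setup above):

import dns.resolver  # pip install dnspython (2.x; older versions use dns.resolver.query)

def get_spf(domain):
    # SPF lives in an ordinary TXT record that starts with v=spf1.
    for rdata in dns.resolver.resolve(domain, 'TXT'):
        txt = b''.join(rdata.strings).decode()
        if txt.startswith('v=spf1'):
            return txt
    return None

print(get_spf('kimonote.com'))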
If the SPF record says that the email is indeed from your domain, then the DKIM record verifies that the email wasn't altered in transit. In essence, you publish a public key to another DNS record and then sign your messages with the private key. Strictly speaking, this wasn't required for Gmail to stop classifying my emails to myself as spam, but it's worth doing anyway.
First, generate a key pair:
cd /etc/exim4 && mkdir dkim && cd dkim
openssl genrsa -out dkim-private.pem 1024 -outform PEM
openssl rsa -in dkim-private.pem -out dkim.pem -pubout -outform PEM
Go to your domain control panel and add a TXT record with host (selector)._domainkey (the selector could be anything: I used a timestamp, 20171206) and value k=rsa; p=(your public key from dkim.pem, all in one line).
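Since the p= value is just the base64 part of dkim.pem squashed onto one line, here's a little helper that prints the record value to paste in (assuming the paths used in the commands above):

# Turn the PEM public key generated above into the DKIM TXT record value.
with open('/etc/exim4/dkim/dkim.pem') as f:
    key = ''.join(line.strip() for line in f if 'PUBLIC KEY' not in line)
print('k=rsa; p=' + key)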
Now, set up exim to actually sign outgoing emails with the private key. If you are using the single Exim configuration file option, create (or edit) the file /etc/exim4/exim4.conf.localmacros and add:
DKIM_CANON = relaxed
DKIM_SELECTOR = (selector, e.g. 20171206)
DKIM_DOMAIN = (domain name, like example.com)
DKIM_PRIVATE_KEY = /etc/exim4/dkim/dkim-private.pem
Reload the config:
/etc/init.d/exim4 reload
And check that exim picked up the settings:
exim -bP transports | grep dkim
dkim_canon = relaxed
dkim_domain = example.com
dkim_private_key = /etc/exim4/dkim/dkim-private.pem
dkim_selector = (selector, e.g. 20171206)
You should wait for both records to propagate around. To test whether this has worked, you can use any of the online services that do some DNS lookups and tell you whether they have seen your records (e.g. https://www.mail-tester.com/spf-dkim-check). You can also send an email to your personal address. If it arrives (or arrives into your spam folder), you can inspect its headers:
Received-SPF: pass (google.com: domain of hello@kimonote.com designates 91.220.127.149 as permitted sender) client-ip=91.220.127.149;
Authentication-Results: mx.google.com;
dkim=pass header.i=@kimonote.com header.s=20171206 header.b=K0nBKUQ7;
spf=pass (google.com: domain of hello@kimonote.com designates 91.220.127.149 as permitted sender) smtp.mailfrom=hello@kimonote.com
Are we done? Not completely. Some ISPs do what's called a reverse DNS lookup on the IP that's sending them emails: they contact (via several levels of referrals) the DNS servers of the organization that the IP belongs to in order to find out which domain it corresponds to. Then they compare that to the domain the email claims to be from. This is similar to having an SPF record and I thought it wasn't necessary until I got some emails from users saying that they weren't receiving their registration confirmations.
First, check that it's actually the case:
dig -x (your IP address)
If the Answer section doesn't contain your domain name (yes, it is supposed to end with a dot), then you do need to add a PTR record. The PTR record has to be added on your hosting service's side: there usually is a control panel (in my case I had to ask them nicely). This will probably take up to 24 hours to propagate around the world.
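The same check can also be done from Python via the system resolver (the IP below is the one from the headers earlier; substitute your own):

import socket

ip = '91.220.127.149'
try:
    hostname, _, _ = socket.gethostbyaddr(ip)  # reverse (PTR) lookup
    print(hostname)                            # should be your domain or a host under it
except socket.herror:
    print('no PTR record found')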
Previously on project Betfair, we ran the production market-making CAFBot in Scala, got it working, made some money, lost some money and came back with some new ideas.
Today, we'll test those ideas, look at something that's much more promising and learn a dark truth about timezones.
Sorry this took a bit too long to write, by the way, I've been spending some time working on Kimonote to add email digests to streams. The idea is that given some exporters from streams (e-mail, RSS (already works), Facebook, Twitter etc) with varying frequencies (immediately or weekly/daily digests) as well as some importers (again RSS/Twitter/Facebook/other blogging platforms) a user could create their own custom newsletter and get it sent to their mailbox (push) instead of wasting time checking all those websites (pull), as well as notify their social media circles when they put a new post up anywhere else. None of this works yet, but other features do — if you're interested, sign up for the beta here!
Shameless plug over, back to the world of automated trading goodness.
Remember how in the real-world greyhound market the bot managed to have some of its bets matched even though they were several ticks away from the best back and lay? I realised I never really tested that in simulation: I started from making the market at the top of the order book and kind of assumed that matching would never happen further away from there. Looks like I was wrong (and in fact in part 5 the bot had issues with bets that had been moved away from the current market because of high exposure getting matched anyway).
So I added (backported?) the levelBase parameter from the Scala bot into the research one: recall that it specified how far from the best back/lay the bot should start working before applying all the other offsets (exposure or order book imbalance). Hence at levelBase = 0 the bot would work exactly as before and with levelBase = 1 it would start 1 Betfair tick away from the best back/lay. levelBase = 3 is what was traded live on the greyhound market.
The idea behind this is kind of simple: if the bot still gets its bets matched even if it's far away from the best back/lay, it will earn a higher spread with fewer trades.
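As a rough illustration of the parameter (a reconstruction with made-up names, not the actual bot code, and where exactly the quotes sit at levelBase = 0 is my assumption), working in Betfair tick indices:

def quoted_ticks(best_back_tick, best_lay_tick, level_base):
    # Tick indices here increase with the odds. A passive back bet asks for
    # higher odds than the current best lay, and a passive lay bet for lower
    # odds than the current best back, so that they rest in the book;
    # level_base pushes both quotes further out, meaning fewer matches but a
    # wider spread earned when both sides do get hit. (The exposure and
    # order book imbalance offsets would then be applied on top of this.)
    our_back_tick = best_lay_tick + level_base
    our_lay_tick = best_back_tick - level_base
    return our_back_tick, our_lay_tick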
So, first, I ran it on our favourite horse market with levelBase = 1.
levelBase = 1
It didn't do well: there were very few matches and so most of the time it just did nothing, trading a grand total of 3 times. This meant that it got locked into an inventory that it couldn't offload.
Let's run it on the whole dataset: though this market didn't work as well, in some other ones matching did happen in jumps larger than 1 tick, so those might be able to offset more liquid markets.
We're tantalizingly close to finally having a PnL of zero (the more observant reader might notice that we could have done the same by not trading at all). Let's see how it would have done on the greyhound markets, which we do know sometimes jump like crazy.
levelBase = 3
Not very inspiring either. There's a large amount of markets where this wouldn't have done anything at all (since the bets were so far from the best back/lay, they don't ever get hit), and when something does happen, it seems to be very rare, so the bot can't liquidate its position and remains at the mercy of the market.
So while that was a cute idea, it didn't seem to work.
At this point, I was running out of ideas. The issue of the bot getting locked into an inventory while the market was trending against it still remained, so I had to look at the larger-scale patterns in the data: perhaps based on the bigger moves in the market, the bot could have a desired inventory it could aim for (instead of always targeting zero inventory).
Consider this: if we think that the market is going to move one way or another, it's okay for the bot to have an exposure that way and it can be gently guided towards it (by means of where it sets its prices). Like that, the bot would kind of become a hybrid of a slower trading strategy with a market maker: even if its large-scale predictions of price movements weren't as good, they would get augmented by the market making component and vice versa.
I tried out a very dumb idea. Remember how, in most of the odds charts we looked at, the odds, for some reason, trended down? I had kind of noticed that, or at least I thought I did, and wanted to quantify it.
Those odds were always on the favourite (as in, pick the greyhound/horse with the lowest odds 1 hour before the race begins and see how they change). The cause could be that, say, people who wanted to bet on the favourite would delay their decision to see if any unexpected news arrived before the race start, which is the only sort of news that could move the market.
Whatever the unexpected news turned out to be, it would likely affect the favourite negatively: it could be good for any of the other greyhounds/horses, making them more likely to win the race. Hence it would make sense, if someone wanted to bet on the favourite, for them to wait until just before the race begins to avoid uncertainty, thus pushing the odds down as the race approaches.
So what if we took the other side of this trade? If we were to go long the favourite early, we would benefit from this downwards slide in odds, at the risk of some bad news coming out and us losing money. I guessed this would be similar to a carry trade in finance, where the trader makes money if the market conditions stay the same (say, borrowing money in a lower interest rate currency and then holding it in a higher interest rate currency, hoping the exchange rate doesn't move). In essence, we'd get paid for taking on the risk of unexpected news about the race coming out.
I had first started doing this using my order book simulator, but realised it would be overkill: if the only thing I wanted to do was test a strategy that traded literally twice (once to enter the position, once to close it), it would be better to write a custom scraper for the stream data that would extract the odds' timeseries and simulate the strategy faster.
At that point, I realised the horse racing stream data was too large to fit into memory with the new simulator. So I put that on hold for a second and tried my idea on greyhound markets.
This chart plots, at each point in time before the race, the average return on going long (backing) the favourite and then closing our position 15s before the race begins. In any case, the favourite is picked 5 minutes before the race begins. The vertical lines are the error bars (not standard deviations). Essentially, what we have here is a really consistent way to lose money.
This is obviously because of the back-lay spreads: the simulation here assumes we cross the spread both when entering and exiting, in essence taking the best back in the beginning and the best lay at the end.
Remember this chart from part 4?
The average spread 120s before a greyhound race begins is about 5 ticks. We had previously calculated that the loss on selling 1 tick lower than we bought is about 0.5%, so no wonder we're losing about 3% of our investment straight away.
What if we didn't have to cross the spread?
Woah. This graph assumes that instead of backing the runner at the best back, we manage to back it at the current best lay (by placing a passive back at those odds). When we're exiting the position just before the race begins, time is of the essence and so we're okay with paying the spread and placing a lay bet at the odds of the current best lay (getting executed instantly).
The only problem is actually getting matched: since matching in greyhound markets starts very late (as much money gets matched in the last 80 seconds as does before), our bet could just remain in the book forever, or get matched much closer to the beginning of the race.
But here's the fun part: this graph doesn't care. It shows that if the bet is matched at whatever the best lay was 160 seconds before the race, on average this trade makes money — even if the actual match happens a minute later. If the bet doesn't get matched at all, the trade simply doesn't happen.
This does assume that the performance of this strategy is independent of whether or not the bet gets hit at all, but if that's not the case, we would have been able to use the fact that our bet got hit as a canary: when it gets hit, we know that being long this market is a good/bad thing and adjust our position accordingly.
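Concretely, the per-race return in the graph above is computed roughly as follows (a simplified sketch that ignores commission and assumes the full stake gets matched):

def passive_long_return(entry_best_lay, exit_best_lay, stake=1.):
    # Enter with a passive back at the entry-time best lay odds and close by
    # laying at the exit-time best lay odds (matched immediately), choosing the
    # exit stake so that the profit is the same whoever wins ("greening up").
    contracts = stake * entry_best_lay      # payout if the runner wins
    exit_stake = contracts / exit_best_lay  # lay stake that flattens the position
    return exit_stake - stake

# If the favourite's odds drift down from 3.0 to 2.8, the early back profits:
print(passive_long_return(3.0, 2.8))  # about 0.07, i.e. 7% of the stake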
With that reasoning, I went to work changing the internals of Azura to write another core for it and slightly alter the way it ran. The algorithm would be:
I called the new core DAFBot (D stands for Dumb and what AF stands for can be gleaned from the previous posts). I wanted to reuse the idea of polling a strategy for the orders that it wished to offer and the core being stateless, since that would mean that if the bot crashed, I could restart it and it would proceed where it left off. That did mean simple actions like "buy this" became more difficult to encode: the bot basically had to look at its inventory and then say "I want to maintain an outstanding back bet for (how much I want to buy - how much I have)".
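A minimal sketch of that idea (hypothetical names, not the actual DAFBot internals):

def desired_back_offer(target_contracts, current_contracts, odds):
    # The core polls the strategy for the orders it wants outstanding; rather
    # than remembering what it has already sent, the strategy looks at its
    # current inventory and offers to back whatever is still missing.
    remaining = max(target_contracts - current_contracts, 0)
    stake = remaining / odds  # contracts = stake * odds
    return {'side': 'BACK', 'odds': odds, 'stake': stake} if stake > 0 else None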
Finally, yet another Daedric prince got added to my collection: Sheogorath, "The infamous Prince of Madness, whose motives are unknowable" (I had given up on my naming scheme making sense by this point), would schedule instances of Azura to be run during the trading day by using the Betfair API to fetch a list of greyhound races and then launching Azura several minutes before each one.
I obviously wasn't ready to get Sheogorath to execute multiple instances of Azura and start losing money at computer speed quite yet, so for now I ran the new strategy manually on some races, first without placing any bets (just printing them out) and then actually doing so.
The biggest issue was the inability to place bets below £2. I had thought this wouldn't be a problem (as I was placing entry bets with larger amounts), but fairly often only part of the offer would get hit, so the bot would end up having an exposure that it wasn't able to close (since closing it would entail placing a lay bet below £2). Hence it took some of that exposure into the race, which wasn't good.
In addition, when testing Sheogorath's scheduling (by getting it to kick off instances of Azura that didn't place bets), I noticed a weird thing: Sheogorath would start Azura one minute later than intended. For example, for a race that kicked off at 3pm, Azura was supposed to be started 5 minutes before that (2:55pm) whereas it was actually executed at 2:56pm.
While investigating this, I realised that there was another issue with my data: I had relied on the stream recorder using the market suspend times that were fetched from Betfair to stop recording, but that might not have been the case: if the greyhound race started before the scheduled suspend time, then the recording would stop abruptly, as opposed to at the official suspend time.
Any backtest that counted backwards from the final point in the stream would kind of have access to forward-looking information: knowing that the end of the data is the actual suspend time, not the advertised one.
Hence I had to recover the suspend times that the recorder saw and use those instead. I still had all of the logs that it used, so I could scrape the times from them. But here was another fun thing: spot-checking some suspend times against Betfair revealed that they sometimes also were 1 minute later than the ones on the website.
That meant the forward-looking information issue was a bigger one, since the recorder would have run for longer and have a bigger chance of being interrupted by a race start. It would also be a problem in horse markets: since those can be traded in-play, there could have been periods of in-play trading in my data that could have affected the market-making bot's backtests (in particular, everyone else in the market is affected by the multisecond bet delay which I wasn't simulating).
But more importantly, why were the suspend times different? Was it an issue on Betfair's side? Was something wrong with my code? It was probably the latter. After meditating on more logfiles, I realised that the suspend times seen by Azura were correct whereas the suspend times for Sheogorath for the same markets were 1 minute off. They were making the same request, albeit at different times (Sheogorath would do it when building up a trading schedule, Azura would do it when one of its instances would get started). The only difference was that the former was written in Python and the latter was written in Scala.
After some time of going through my code with a debugger and perusing documentation, I learned a fun fact about timezones.
I used this bit of code to make sure all times the bot was handling were in UTC:
def parse_dt(dt_str, tz=None):
    return dt.strptime(dt_str, '%Y-%m-%dT%H:%M:%S.%fZ').replace(
        tzinfo=pytz.timezone(tz) if tz else None)

m_end_dt = parse_dt(m['marketStartTime'], m['event']['timezone'])
m_end_dt = m_end_dt.astimezone(pytz.utc).replace(tzinfo=None)
However, timezones change. Since pytz.timezone doesn't know the date on which the timezone will be used, it picks the earliest definition of that timezone, which in the case of Europe/London dates back to the mid-1800s. Was the timezone offset back then something reasonable, like an hour? Nope, it was 1 minute.
Here's a fun snippet of code so you can try this at home:
In [4]: from datetime import datetime as dt
   ...: import pytz
   ...: def parse_dt(dt_str, tz=None):
   ...:     return dt.strptime(dt_str, '%Y-%m-%dT%H:%M:%S.%fZ').replace(tzinfo=pytz.timezone(tz) if tz else None)
   ...: wtf = '2017-09-27T11:04:00.000Z'
   ...: parse_dt(wtf)
Out[4]: datetime.datetime(2017, 9, 27, 11, 4)

In [5]: parse_dt(wtf, 'Europe/London')
Out[5]: datetime.datetime(2017, 9, 27, 11, 4, tzinfo=<DstTzInfo 'Europe/London' LMT-1 day, 23:59:00 STD>)

In [6]: parse_dt(wtf, 'Europe/London').astimezone(pytz.utc)
Out[6]: datetime.datetime(2017, 9, 27, 11, 5, tzinfo=<UTC>)
And here's an answer from the django-users mailing group on the right way to use timezones:
The right way to attach a pytz timezone to a naive Python datetime is to call tzobj.localize(dt). This gives pytz a chance to say "oh, your datetime is in 2015, so I'll use the offset for Europe/London that's in use in 2015, rather than the one that was in use in the mid-1800s".
Finally, here's some background on how this offset was calculated.
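For completeness, here's the same parse_dt rewritten with localize, which yields the expected one-hour BST offset instead of the one-minute LMT one:

from datetime import datetime as dt
import pytz

def parse_dt(dt_str, tz=None):
    naive = dt.strptime(dt_str, '%Y-%m-%dT%H:%M:%S.%fZ')
    # localize() picks the offset in force on that date, not the 1800s one
    return pytz.timezone(tz).localize(naive) if tz else naive

parse_dt('2017-09-27T11:04:00.000Z', 'Europe/London').astimezone(pytz.utc)
# datetime.datetime(2017, 9, 27, 10, 4, tzinfo=<UTC>)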
Luckily, I knew which exact days in my data were affected by this bug and was able to recover the right suspend times. In fact, I've been lying to you this whole time and all of the plots in this blog series were produced after I had finished the project, with the full and the correct dataset. So the results, actually, weren't affected that much and now I have some more confidence in them.
Next time on project Betfair, we'll teach DAFBot to place orders below £2 and get it to do some real live trading, then move on to horses.
As usual, posts in this series will be available at https://kimonote.com/@mildbyte:betfair or on this RSS feed. Alternatively, follow me on twitter.com/mildbyte.
Interested in this blogging platform? It's called Kimonote, it focuses on minimalism, ease of navigation and control over what content a user follows. Try the demo here or the beta here and/or follow it on Twitter as well at twitter.com/kimonote!
Previously on project Betfair, we spent some time tuning our market making bot and trying to make it make money in simulation. Today, we'll use coding and algorithms to make (and lose) us some money in real life.
Scala is amazing. At the very least, it can be written like a better Java (with neat features like allowing multiple classes in one file) and then that can evolve into never having to worry about NullPointerExceptions and the ability to write blog posts about monads.
And strictly typed languages are fun! There's something very empowering about being able to easily refactor code and rename, extract, push and pull methods around, whilst being mostly confident that the compiler is going to stop you from doing stupid things. Unlike, say, Python, where you have to write (and maintain) a massive test suite to make sure every line of your code gets executed, here proper typing can replace a whole class of unit tests.
Hence (and because I kind of wanted to relearn Scala) I decided to write the live trading version of my Betfair market-making bot I called Azura in Scala.
This will mostly be a series of random tales, code snippets, war stories and postmortems describing its development.
The actual design of the bot (common between the research Python version and the production Scala version) is described here.
In terms of libraries, I didn't have to use many: the biggest one was probably the JSON parser from the Play! framework. I used SBT (Simple Build Tool ("never have I seen something so misnamed" — coworker, though I didn't have problems with it)) to manage dependencies and build the bot.
I didn't really have a clever deploy procedure: I would log onto the "production" box, pull the code and use sbt assembly to create a so-called uber-jar, a Java archive that has all of the project's dependencies packaged inside of it. So executing it with the Java Virtual Machine would simply be a matter of:
java -Dazura.dry_run=false -jar target/scala-2.12/azura-assembly-0.1.jar arguments
Disadvantages: this takes up space and possibly duplicates libraries that already exist on the target machine. Advantages: we don't depend on the target machine having the right library versions or the Java classpath set up correctly: the machine only needs to have the right version of the JVM.
import java.sql.Timestamp

import play.api.libs.json._
import play.api.libs.functional.syntax._

object MarketData {
  def parseMarketChangeMessage(message: JsValue): JsResult[MarketChangeMessage] =
    message.validate[MarketChangeMessage](marketChangeMessageReads)

  case class RunnerChange(runnerId: Int,
                          backUpdates: Map[Odds, Int],
                          layUpdates: Map[Odds, Int],
                          tradedUpdates: Map[Odds, Int])

  case class MarketChange(marketId: String, runnerChanges: Seq[RunnerChange], isImage: Boolean)

  // Odds come in as raw doubles; pound amounts are converted into integer pennies.
  implicit val oddsReads: Reads[Odds] = JsPath.read[Double].map(Odds.apply)

  implicit val orderBookLineReads: Reads[Map[Odds, Int]] =
    JsPath.read[Seq[(Odds, Double)]].map(_.map { case (p, v) => (p, (v * 100D).toInt) }.toMap)

  implicit val runnerChangeReads: Reads[RunnerChange] = (
    (JsPath \ "id").read[Int] and
    (JsPath \ "atb").readNullable[Map[Odds, Int]].map(_.getOrElse(Map.empty)) and
    (JsPath \ "atl").readNullable[Map[Odds, Int]].map(_.getOrElse(Map.empty)) and
    (JsPath \ "trd").readNullable[Map[Odds, Int]].map(_.getOrElse(Map.empty))) (RunnerChange.apply _)

  implicit val marketChangeReads: Reads[MarketChange] = (
    (JsPath \ "id").read[String] and
    (JsPath \ "rc").read[Seq[RunnerChange]] and
    (JsPath \ "img").readNullable[Boolean].map(_.getOrElse(false))) (MarketChange.apply _)

  implicit val marketChangeMessageReads: Reads[MarketChangeMessage] = (
    (JsPath \ "pt").read[Long].map(new Timestamp(_)) and
    (JsPath \ "mc").readNullable[Seq[MarketChange]].map(_.getOrElse(Seq.empty))) (MarketChangeMessage.apply _)

  case class MarketChangeMessage(timestamp: Timestamp, marketChanges: Seq[MarketChange])
}
This might take some time to decipher. The problem was converting a JSON-formatted order book message into a data structure used inside the bot to represent the order book — and there can, of course, be lots of edge cases. What if the message isn't valid JSON? What if the structure of the JSON isn't exactly what we expected it to be (e.g. a subentry is missing)? What if the type of something isn't what we expected (instead of a UNIX timestamp we get an ISO-formatted datetime)?
The Play! JSON module kind of helps us solve this by providing a DSL that lets us specify the expected structure of the JSON object by combining smaller building blocks. For example, marketChangeReads shows how to parse a message containing changes to the whole market order book (a MarketChange). We first need to read a string containing the market ID at "id", then a sequence of changes to each runner (RunnerChange) located at "rc" and then a Boolean value at "img" that says whether it's a full image (as in we need to flush our cache and replace it with this message) or not.
To read a RunnerChange (runnerChangeReads), we need to read an integer containing the runner's ID and the changes to its available-to-back, available-to-lay and traded odds. To read those changes (orderBookLineReads), we want to parse a sequence of tuples of odds and doubles, convert the doubles (representing pound amounts available to bet or traded) into integer penny values and finally turn that into a map.
Finally, parsing the odds is simply a matter of creating an Odds object from the double value representing the odds (which rounds the odds to the nearest ones allowed by Betfair).
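To see the parser end to end, here's a sketch of running a message through it. The JSON is a trimmed-down version of the subscription image in the logs below, and the snippet assumes the bot's Odds type (and a Reads for the two-element odds/amount arrays) is in scope:

import play.api.libs.json.{Json, JsSuccess, JsError}

val raw =
  """{"pt": 1505335620115,
    | "mc": [{"id": "1.134156568", "img": true,
    |         "rc": [{"id": 13531617, "atb": [[5.1, 2.5]], "atl": [[24, 0.74]]}]}]}""".stripMargin

MarketData.parseMarketChangeMessage(Json.parse(raw)) match {
  case JsSuccess(msg, _) => println(s"Changes for markets: ${msg.marketChanges.map(_.marketId)}")
  case JsError(errors)   => println(s"Malformed message: $errors") // missing fields or wrong types end up here
}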
override def getTargetMarket(orderBook: OrderBook, runnerOrders: RunnerOrders): (Map[Odds, Int], Map[Odds, Int]) = {
  val logger: Logger = Logger[CAFStrategyV2]

  // Net inventory (in one-pound contracts, kept as pennies) as a fraction of the amount offered
  val exposure = OrderBookUtils.getPlayerExposure(runnerOrders).net
  val exposureFraction = exposure / offeredExposure.toDouble
  logger.info(s"Net exposure $exposure, fraction of offer: $exposureFraction")

  if (orderBook.availableToBack.isEmpty || orderBook.availableToLay.isEmpty) {
    logger.warn("One half of the book is empty, doing nothing")
    return (Map.empty, Map.empty)
  }

  logger.info(s"Best back: ${orderBook.bestBack()}, Best lay: ${orderBook.bestLay()}")
  val imbalance = (orderBook.bestBack()._2 - orderBook.bestLay()._2) / (orderBook.bestBack()._2 + orderBook.bestLay()._2).toDouble
  logger.info(s"Order book imbalance: $imbalance")

  // Offer a back -- i.e. us laying
  val backs: Map[Odds, Int] = if (exposureFraction >= -2) {
    // Start at the best back, shifted one tick away if inventory or imbalance is too negative
    val start = Odds.toTick(orderBook.bestBack()._1) +
      (if (exposureFraction >= -exposureThreshold && imbalance >= -imbalanceThreshold) 0 else -1) - levelBase
    (start until (start - levels) by -1).map { p: Int => Odds.fromTick(p) -> (offeredExposure / Odds.fromTick(p).odds).toInt }.toMap
  } else Map.empty

  // Offer a lay -- i.e. us backing
  val lays: Map[Odds, Int] = if (exposureFraction <= 2) {
    val start = Odds.toTick(orderBook.bestLay()._1) +
      (if (exposureFraction <= exposureThreshold && imbalance <= imbalanceThreshold) 0 else 1) + levelBase
    (start until start + levels).map { p: Int => Odds.fromTick(p) -> (offeredExposure / Odds.fromTick(p).odds).toInt }.toMap
  } else Map.empty

  (backs, lays)
}
The core is actually fairly simple and similar to the research version of CAFBotV2. Here, exposure really means inventory, as in the amount of one-pound contracts the bot is holding. Instead of absolute inventory values for its limits (like 30 contracts in the previous post), the bot operates with the ratio of its position to the amount of contracts it offers (say, the previous limit of 30 contracts, with 10 contracts offered, would be represented here as 30/10 = 3).
After calculating the fraction and the order book imbalance, the bot calculates the back and lay bets it wishes to maintain in the book: first, the tick number it wishes to start on (the best back/lay, or one level below/above if the order book imbalance or inventory is too negative/positive) and then each odds level counting down/up from that point. Finally, it divides the amount of contracts it wishes to offer by the actual odds in order to output the bet (in pounds) it wishes to maintain at each price level.
There's also a custom levelBase parameter that controls how far from the best back/lay the market is made: with a levelBase of zero, the bot places its bets at the best back/lay; with a levelBase of one, the bets go 1 tick below/above the best back/lay, and so on.
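To make the sizing concrete, here's the contracts-to-pounds conversion from the code above as a tiny sketch, using the 15 contracts that the live runs later offer:

// The bot works in integer pennies throughout: offering 15 one-pound contracts
// at decimal odds 3.0 means maintaining a bet of 1500 / 3.0 = 500 pennies (£5) at that level.
val offeredExposure = 1500                          // 15 contracts, in pennies
val odds = 3.0
val betSizePennies = (offeredExposure / odds).toInt // 500, i.e. a £5 bet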
Sadly, I no longer have the old logs from when I managed to receive (and parse!) the first few messages from the Stream API, so here's a dramatic reconstruction.
20:46:58.093 [main] INFO main - Starting Azura...
20:46:58.096 [main] INFO main - Getting market suspend time...
20:46:58.648 [main] INFO main - {"filter":{"marketIds":["1.134156568"]},"marketProjection":["MARKET_START_TIME"],"maxResults":1}
20:46:59.793 [main] INFO main - [{"marketId":"1.134156568","marketName":"R9 6f Claim","marketStartTime":"2017-09-13T21:15:00.000Z","totalMatched":0.0}]
20:46:59.966 [main] INFO main - 2017-09-13T21:15
20:46:59.966 [main] INFO main - Initializing the subscription...
20:47:00.056 [main] INFO streaming.SubscriptionManager - {"op":"connection","connectionId":"100-130917204700-67655"}
20:47:00.077 [main] INFO streaming.SubscriptionManager - {"op":"status","id":1,"statusCode":"SUCCESS","connectionClosed":false}
20:47:00.081 [main] INFO main - Subscribing to the streams...
20:47:00.119 [main] INFO streaming.SubscriptionManager - {
"op" : "status",
"id" : 1,
"statusCode" : "SUCCESS",
"connectionClosed" : false
}
20:47:00.160 [main] INFO streaming.SubscriptionManager - {
"op" : "status",
"id" : 2,
"statusCode" : "SUCCESS",
"connectionClosed" : false
}
20:47:00.207 [main] INFO streaming.SubscriptionManager - {
"op" : "mcm",
"id" : 1,
"initialClk" : "yxmUlZaPE8sZsOergBPLGYz/ipIT",
"clk" : "AAAAAAAA",
"conflateMs" : 0,
"heartbeatMs" : 5000,
"pt" : 1505335620115,
"ct" : "SUB_IMAGE",
"mc" : [ {
"id" : "1.134156568",
"rc" : [ {
"atb" : [ [ 5.1, 2.5 ], [ 1.43, 250 ], [ 1.01, 728.58 ], [ 1.03, 460 ], [ 5, 20 ], [ 1.02, 500 ], [ 1.13, 400 ] ],
"atl" : [ [ 24, 0.74 ], [ 25, 3.55 ], [ 26, 1.96 ], [ 29, 4.2 ], [ 900, 0.02 ] ],
"trd" : [ [ 5.3, 3.96 ], [ 5.1, 6.03 ] ],
"id" : 13531617
}, ... ],
"img" : true
} ]
}
20:47:00.322 [main] INFO main - Initializing harness for runner 13531618
20:47:00.326 [main] INFO streaming.SubscriptionManager - {
"op" : "ocm",
"id" : 2,
"initialClk" : "VeLnh9sEsAa1t/neBHeqxJ3aBG7YuYDbBNYGuqHD1AQ=",
"clk" : "AAAAAAAAAAAAAA==",
"conflateMs" : 0,
"heartbeatMs" : 5000,
"pt" : 1505335620130,
"ct" : "SUB_IMAGE"
}
20:47:00.341 [main] INFO strategy.StrategyHarness - Exposures (GBX): Back 0, Lay 0, cash 0
20:47:00.342 [main] INFO strategy.StrategyHarness - Net exposure: 0
20:47:00.343 [main] INFO strategy.CAFStrategyV2 - Net exposure 0, fraction of offer: 0.0
20:47:00.343 [main] INFO strategy.CAFStrategyV2 - Best back: (Odds(3.45),198), Best lay: (Odds(6.6),472)
20:47:00.344 [main] INFO strategy.CAFStrategyV2 - Order book imbalance: -0.408955223880597
20:47:00.351 [main] INFO strategy.StrategyHarness - Strategy wants atb, atl (Map(Odds(3.3) -> 454, Odds(3.25) -> 461),Map(Odds(7.2) -> 208, Odds(7.4) -> 202)), current outstanding orders are back, lay (Map(),Map())
20:47:00.952 [main] INFO streaming.SubscriptionManager - {
"op" : "mcm",
"id" : 1,
"clk" : "ADQAFQAI",
"pt" : 1505335620944,
"mc" : [ {
"id" : "1.134156568",
"rc" : [ {
"atl" : [ [ 990, 0 ] ],
"id" : 13531618
} ]
} ]
}
20:47:00.955 [main] INFO strategy.StrategyHarness - Exposures (GBX): Back 0, Lay 0, cash 0
20:47:00.955 [main] INFO strategy.StrategyHarness - Net exposure: 0
20:47:00.955 [main] INFO strategy.CAFStrategyV2 - Net exposure 0, fraction of offer: 0.0
20:47:00.955 [main] INFO strategy.CAFStrategyV2 - Best back: (Odds(3.45),198), Best lay: (Odds(6.6),472)
20:47:00.956 [main] INFO strategy.CAFStrategyV2 - Order book imbalance: -0.408955223880597
20:47:00.956 [main] INFO strategy.StrategyHarness - Strategy wants atb, atl (Map(Odds(3.3) -> 454, Odds(3.25) -> 461),Map(Odds(7.2) -> 208, Odds(7.4) -> 202)),...
Essentially, at the start the bot subscribes to the data stream for a given market and its own order status stream. At 20:47:00.207 it receives the first message from the market data stream: the initial order book image for all runners in that market.
Before subscribing to the streams, though, the bot also gets the market metadata to find out when the race actually begins. If it receives a message timestamped less than 5 minutes away from the race start, it starts market making (well, pretending to, since at that point I hadn't implemented bet placement) on the favourite at that time, polling the strategy core, which just returns the bets it wants to maintain on both sides of the order book.
Every time there's an update on the market book stream, the strategy is polled again to make sure it still wants the same bets to be maintained. If there's a change, then the harness performs cancellations/placements as needed.
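That reconciliation step is essentially a diff between the quotes the strategy wants and the bets currently outstanding. A minimal sketch (names are illustrative, not the bot's actual code; sizes are in pennies as elsewhere):

// Anything in the book that's no longer wanted at that size gets cancelled;
// anything wanted that isn't already in the book at that size gets placed.
def diffQuotes(desired: Map[Odds, Int], outstanding: Map[Odds, Int]): (Map[Odds, Int], Set[Odds]) = {
  val toCancel: Set[Odds] =
    outstanding.filter { case (odds, size) => !desired.get(odds).contains(size) }.keySet
  val toPlace: Map[Odds, Int] =
    desired.filter { case (odds, size) => !outstanding.get(odds).contains(size) }
  (toPlace, toCancel)
}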
The first problem I noticed when trying to implement bet placement was that the order book stream, the order status stream and API-NG aren't synchronized. In essence, if a bet is placed via API-NG, it takes a while for it to appear on the order status stream (showing whether or not it has been executed) or be reflected in the market order book. Since the core of the bot is supposed to be stateless, this could have caused this sort of issue:
This could get ugly quickly and there were several ways to solve it. One would be applying the bet that had just been submitted to a local copy of the order status stream and then ignoring the real Betfair message containing that bet when it does appear on the stream, but that would mean reconciling our local cache with the Betfair stream: what if the bet doesn't appear, or turns out to have been invalidated? I went with a simpler approach: take the bet ID returned by the REST API when a bet is placed and then stop polling the bot and placing bets until a bet with that ID is seen on the order status stream, since only then would the bot be in a consistent state.
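A minimal sketch of that gating logic (names are illustrative): the harness remembers the bet IDs returned by the REST call and refuses to poll the strategy again until the order status stream has mentioned all of them; this is what the "Some bet IDs unseen by the order status cache" lines in the logs below are about.

// Sketch: remember the bet IDs returned by the placement call and only poll the
// strategy again once every one of them has shown up on the order status stream.
class PlacementGate {
  private var pendingBetIds: Set[String] = Set.empty

  def recordPlacements(betIds: Seq[String]): Unit = pendingBetIds ++= betIds

  def recordStreamUpdate(betId: String): Unit = pendingBetIds -= betId

  // Only place/cancel new bets when nothing we submitted is still unseen
  def consistent: Boolean = pendingBetIds.isEmpty
}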
The resultant timings are not HFT-like at all. From looking at the logs, a normal sequence is something like:
This could be partially mitigated by doing order submissions in a separate thread (in essence applying the order to our outstanding orders cache even before we receive a message from Betfair with its bet ID) but there would still be the issue of Betfair apparently taking about 190ms to put the bet into its order book. And I didn't want to bother for now, since I just wanted to get the bot into a shape where it could place live bets.
I was now ready to unleash the bot onto some real markets. I chose greyhound racing and, for safety, decided to do a test with making a market several ticks away from the current best lay/back, the reasoning being that while it would test the whole system end-to-end (in terms of maintaining outstanding orders), these bets would have little chance of getting matched and even if they did, it would be far enough from the current market for me to have time to quickly liquidate them manually and not lose money.
Well, before that it had some very embarrassing false starts. Remember how I operated with penny amounts throughout the codebase to avoid floating point rounding errors? I completely forgot to convert pennies back to pounds when submitting the bets, which meant that instead of a £2 bet I tried to submit a £200 bet. Luckily, I didn't have that much in my account, so I just got lots of INSUFFICIENT_FUNDS errors.
Speaking of funding the account, Betfair doesn't do margin calls and requires all bets to be fully funded: so say for a lay of £2 at 1.5 you need to have at least (1.5 * £2) - £2 = £1 and for a back of £2 at 1.5 you need to have £2. Backs and lays can be offset against each other: so if we've backed a runner for £2 and now have £0 available to bet, we can still "green up" by placing a lay (as long as it doesn't make us short the runner).
For unmatched bets, it gets slightly more weird: if there's both an unmatched back and a lay in the order book, the amount available to bet is reduced by the maximum of the liabilities of the two: since either one of them can be matched, Betfair takes the worst-case scenario.
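As a rough sketch of those funding rules (my reading of the description above, not Betfair's official formula):

// A back's liability is its stake; a lay's liability is stake * (odds - 1).
def backLiability(stake: Double): Double = stake
def layLiability(stake: Double, odds: Double): Double = stake * (odds - 1)

// For unmatched back and lay bets on the same runner, the worst case is held.
def heldForUnmatched(backStake: Double, layStake: Double, layOdds: Double): Double =
  math.max(backLiability(backStake), layLiability(layStake, layOdds))

// e.g. an unmatched £2 back plus an unmatched £2 lay at 1.5 hold max(£2, £1) = £2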
So I started the bot in actual live mode, with real money (it would place 15 contracts on both sides, resulting in bets of about £5 at odds 3.0) and, well, the placement actually worked! On the Betfair website I saw several unmatched bets attributed to me which would change as the market moved.
20:47:01.059 [main] INFO strategy.StrategyHarness - Exposures (GBX): Back 0, Lay 0, cash 0
20:47:01.059 [main] INFO strategy.StrategyHarness - Net exposure: 0
20:47:01.059 [main] INFO strategy.CAFStrategyV2 - Net exposure 0, fraction of offer: 0.0
20:47:01.060 [main] INFO strategy.CAFStrategyV2 - Best back: (Odds(3.45),198), Best lay: (Odds(6.4),170)
20:47:01.060 [main] INFO strategy.CAFStrategyV2 - Order book imbalance: 0.07608695652173914
20:47:01.061 [main] INFO strategy.StrategyHarness - Strategy wants atb, atl (Map(Odds(3.3) -> 454, Odds(3.25) -> 461),Map(Odds(7.0) -> 214, Odds(7.2) -> 208)), current outstanding orders are back, lay (Map(Odds(7.4) -> 202, Odds(7.2) -> 208),Map(Odds(3.25) -> 461, Odds(3.3) -> 454))
20:47:01.068 [main] INFO execution.ExecutionUtils - Trying to submit 1 placements and 1 cancellations
20:47:01.069 [main] INFO execution.ExecutionUtils - Available to bet according to us: 97919
20:47:01.073 [main] INFO execution.ExecutionUtils - Submitting cancellations: {"marketId":"1.134156568","instructions":[{"betId":102464716465,"sizeReduction":null}]}
20:47:01.191 [main] INFO execution.ExecutionUtils - Result: 200
20:47:01.192 [main] INFO execution.ExecutionUtils - {
"status" : "SUCCESS",
"marketId" : "1.134156568",
"instructionReports" : [ {
"status" : "SUCCESS",
"instruction" : {
"betId" : "102464716465"
},
"sizeCancelled" : 2.02,
"cancelledDate" : "2017-09-13T20:47:01.000Z"
} ]
}
20:47:01.192 [main] INFO execution.ExecutionUtils - Submitting placements: {"marketId":"1.134156568","instructions":[{"orderType":"LIMIT","selectionId":13531618,"side":"BACK","limitOrder":{"size":2.14,"price":7,"persistenceType":"LAPSE"}}]}
20:47:01.308 [main] INFO execution.ExecutionUtils - Result: 200
20:47:01.309 [main] INFO execution.ExecutionUtils - {
"status" : "SUCCESS",
"marketId" : "1.134156568",
"instructionReports" : [ {
"status" : "SUCCESS",
"instruction" : {
"selectionId" : 13531618,
"limitOrder" : {
"size" : 2.14,
"price" : 7,
"persistenceType" : "LAPSE"
},
"orderType" : "LIMIT",
"side" : "BACK"
},
"betId" : "102464717112",
"placedDate" : "2017-09-13T20:47:01.000Z",
"averagePriceMatched" : 0,
"sizeMatched" : 0,
"orderStatus" : "EXECUTABLE"
} ]
}
20:47:01.310 [main] INFO streaming.SubscriptionManager - {
"op" : "mcm",
"id" : 1,
"clk" : "AEUAIgAR",
"pt" : 1505335621264,
"mc" : [ {
"id" : "1.134156568",
"rc" : [ {
"atl" : [ [ 7.4, 0 ] ],
"id" : 13531618
} ]
} ]
}
20:47:01.311 [main] INFO main - Some bet IDs unseen by the order status cache, not doing anything...
20:47:01.313 [main] INFO streaming.SubscriptionManager - {
"op" : "ocm",
"id" : 2,
"clk" : "ACEAFgAJABgACw==",
"pt" : 1505335621269,
"oc" : [ {
"id" : "1.134156568",
"orc" : [ {
"id" : 13531618,
"uo" : [ {
"id" : "102464716465",
"p" : 7.4,
"s" : 2.02,
"side" : "B",
"status" : "EC",
"pt" : "L",
"ot" : "L",
"pd" : 1505335620000,
"sm" : 0,
"sr" : 0,
"sl" : 0,
"sc" : 2.02,
"sv" : 0,
"rac" : "",
"rc" : "REG_GGC",
"rfo" : "",
"rfs" : ""
} ]
} ]
} ]
}
20:47:01.314 [main] INFO main - Some bet IDs unseen by the order status cache, not doing anything...
20:47:01.376 [main] INFO streaming.SubscriptionManager - {
"op" : "mcm",
"id" : 1,
"clk" : "AEsAIwAR",
"pt" : 1505335621369,
"mc" : [ {
"id" : "1.134156568",
"rc" : [ {
"atl" : [ [ 7, 2.14 ] ],
"id" : 13531618
} ]
} ]
}
20:47:01.378 [main] INFO main - Some bet IDs unseen by the order status cache, not doing anything...
20:47:01.380 [main] INFO streaming.SubscriptionManager - {
"op" : "ocm",
"id" : 2,
"clk" : "ACMAFwALABoADA==",
"pt" : 1505335621372,
"oc" : [ {
"id" : "1.134156568",
"orc" : [ {
"id" : 13531618,
"uo" : [ {
"id" : "102464717112",
"p" : 7,
"s" : 2.14,
"side" : "B",
"status" : "E",
"pt" : "L",
"ot" : "L",
"pd" : 1505335621000,
"sm" : 0,
"sr" : 2.14,
"sl" : 0,
"sc" : 0,
"sv" : 0,
"rac" : "",
"rc" : "REG_GGC",
"rfo" : "",
"rfs" : ""
} ]
} ]
} ]
}
20:47:01.382 [main] INFO strategy.StrategyHarness - Exposures (GBX): Back 0, Lay 0, cash 0
20:47:01.382 [main] INFO strategy.StrategyHarness - Net exposure: 0
20:47:01.382 [main] INFO strategy.CAFStrategyV2 - Net exposure 0, fraction of offer: 0.0
So as intended:
And indeed, the bot managed to maintain its bets far enough from the action in order to not get matched.
I ran it again, on a different market:
20:54:19.217 [main] INFO strategy.CAFStrategyV2 - Best back: (Odds(3.15),3474), Best lay: (Odds(3.2),18713)
20:54:19.217 [main] INFO strategy.CAFStrategyV2 - Order book imbalance: -0.6868436471807815
20:54:19.218 [main] INFO strategy.StrategyHarness - Strategy wants atb, atl (Map(Odds(2.98) -> 503, Odds(2.96) -> 506),Map(Odds(3.35) -> 447, Odds(3.4) -> 441)), current outstanding orders are back, lay (Map(Odds(3.6) -> 416, Odds(3.65) -> 409),Map(Odds(3.25) -> 461, Odds(3.2) -> 468))
20:54:19.219 [main] INFO execution.ExecutionUtils - Trying to submit 4 placements and 4 cancellations
20:54:19.219 [main] INFO execution.ExecutionUtils - Available to bet according to us: 97934
20:54:19.220 [main] INFO execution.ExecutionUtils - Submitting cancellations: {"marketId":"1.134151928","instructions":[{"betId":102464926664,"sizeReduction":null},{"betId":102464926665,"sizeReduction":null},{"betId":102464926548,"sizeReduction":null},{"betId":102464926666,"sizeReduction":null}]}
20:54:19.324 [main] INFO execution.ExecutionUtils - Result: 200
20:54:19.324 [main] INFO execution.ExecutionUtils - {
"status" : "PROCESSED_WITH_ERRORS",
"errorCode" : "PROCESSED_WITH_ERRORS",
"marketId" : "1.134151928",
"instructionReports" : [ {
"status" : "SUCCESS",
"instruction" : {
"betId" : "102464926664"
},
"sizeCancelled" : 4.16,
"cancelledDate" : "2017-09-13T20:54:19.000Z"
}, {
"status" : "SUCCESS",
"instruction" : {
"betId" : "102464926665"
},
"sizeCancelled" : 4.1,
"cancelledDate" : "2017-09-13T20:54:19.000Z"
}, {
"status" : "FAILURE",
"errorCode" : "BET_TAKEN_OR_LAPSED",
"instruction" : {
"betId" : "102464926548"
}
}, {
"status" : "FAILURE",
"errorCode" : "BET_TAKEN_OR_LAPSED",
"instruction" : {
"betId" : "102464926666"
}
} ]
}
20:54:19.325 [main] ERROR execution.ExecutionUtils - Cancellation unsuccessful. Aborting.
20:54:19.325 [main] ERROR main - Execution failure.
Oh boy. I quickly alt-tabbed to the Betfair web interface, and yep, I had an outstanding bet that had been matched and wasn't closed. I managed to close my position manually (by submitting some offsetting bets on the other side) and in fact managed to make about £1 from the trade, but what on earth happened here?
Looking at the logs, it seems like the bot wanted to move its bets once again: the market suddenly dropped from a best back/lay of 3.4/3.45 down to 3.15/3.2, whereas the bot had lay bets at 3.2 and 3.25 and back bets at 3.6 and 3.65. So the bot needed to cancel all 4 of its outstanding bets and move them down as well.
But wait: how come the best back was at 3.15 and the bot had a lay bet (meaning that bet was available to back) at 3.2? Why wasn't an offer to back at higher odds (3.2 vs 3.15) at the top of the book instead?
In fact, those two bets had been matched and that hadn't yet been reflected on the order status feed. So when the bot tried to cancel all of its bets, two cancellations failed because the bets had already been matched. The odds soon recovered and I managed to submit a back bet at higher odds than the bot had laid (which ended up in a profit), but the order status stream and the order book stream being desynchronized was a bigger problem.
I decided to just ignore errors where cancellations were failing: eventually the bot would receive an update saying that the bet got matched and would stop trying to cancel it.
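The change itself was tiny; something like this sketch (the error code comes from the log above, the helper name is mine):

// Cancellation failures caused by the bet having already been matched or lapsed
// are benign: the order status stream will soon confirm the match and the bot
// will stop trying to cancel that bet.
def isBenignCancellationFailure(errorCode: String): Boolean =
  errorCode == "BET_TAKEN_OR_LAPSED"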
How did we get here?
10:08:52.623 [main] INFO strategy.StrategyHarness - Exposures (GBX): Back 4599, Lay 0, cash -1501
10:08:52.623 [main] INFO strategy.StrategyHarness - Net exposure: 4599
10:08:52.623 [main] INFO strategy.CAFStrategyV2 - Net exposure 4599, fraction of offer: 3.066
10:08:52.623 [main] INFO strategy.CAFStrategyV2 - Best back: (Odds(2.74),1000), Best lay: (Odds(2.8),1021)
10:08:52.623 [main] INFO strategy.CAFStrategyV2 - Order book imbalance: -0.010390895596239486
10:08:52.624 [main] INFO strategy.StrategyHarness - Strategy wants atb, atl (Map(Odds(2.68) -> 559, Odds(2.66) -> 563),Map(Odds(2.88) -> 520, Odds(2.9) -> 517)), current outstanding orders are back, lay (Map(),Map(Odds(2.66) -> 563, Odds(2.68) -> 559))
10:08:52.624 [main] INFO execution.ExecutionUtils - Trying to submit 2 placements and 0 cancellations
10:08:52.624 [main] INFO execution.ExecutionUtils - Available to bet according to us: 96626
10:08:52.624 [main] INFO execution.ExecutionUtils - Submitting placements: {"marketId":"1.134190324","instructions":[{"orderType":"LIMIT","selectionId":12743977,"side":"BACK","limitOrder":{"size":5.2,"price":2.88,"persistenceType":"LAPSE"}},{"orderType":"LIMIT","selectionId":12743977,"side":"BACK","limitOrder":{"size":5.17,"price":2.9,"persistenceType":"LAPSE"}}]}
10:08:52.769 [main] INFO execution.ExecutionUtils - Result: 200
10:08:52.769 [main] INFO execution.ExecutionUtils - {
"status" : "SUCCESS",
"marketId" : "1.134190324",
"instructionReports" : [ {
"status" : "SUCCESS",
"instruction" : {
"selectionId" : 12743977,
"limitOrder" : {
"size" : 5.2,
"price" : 2.88,
"persistenceType" : "LAPSE"
},
"orderType" : "LIMIT",
"side" : "BACK"
},
"betId" : "102489084350",
"placedDate" : "2017-09-14T10:08:52.000Z",
"averagePriceMatched" : 2.9,
"sizeMatched" : 5.2,
"orderStatus" : "EXECUTION_COMPLETE"
}, {
"status" : "SUCCESS",
"instruction" : {
"selectionId" : 12743977,
"limitOrder" : {
"size" : 5.17,
"price" : 2.9,
"persistenceType" : "LAPSE"
},
"orderType" : "LIMIT",
"side" : "BACK"
},
"betId" : "102489084351",
"placedDate" : "2017-09-14T10:08:52.000Z",
"averagePriceMatched" : 2.9,
"sizeMatched" : 5.17,
"orderStatus" : "EXECUTION_COMPLETE"
} ]
}
...
10:08:52.789 [main] INFO strategy.StrategyHarness - Exposures (GBX): Back 7606, Lay 0, cash -2538
10:08:52.789 [main] INFO strategy.StrategyHarness - Net exposure: 7606
10:08:52.789 [main] INFO strategy.StrategyHarness - Max exposure reached, liquidating
10:08:52.789 [main] INFO strategy.StrategyHarness - Strategy wants atb, atl (Map(Odds(2.96) -> 7606),Map()), current outstanding orders are back, lay (Map(),Map(Odds(2.66) -> 563, Odds(2.68) -> 559))
10:08:52.790 [main] INFO execution.ExecutionUtils - Trying to submit 1 placements and 2 cancellations
10:08:52.790 [main] INFO execution.ExecutionUtils - Available to bet according to us: 95589
10:08:52.790 [main] INFO execution.ExecutionUtils - Submitting cancellations: {"marketId":"1.134190324","instructions":[{"betId":102489083069,"sizeReduction":null},{"betId":102489083236,"sizeReduction":null}]}
10:08:52.873 [main] INFO execution.ExecutionUtils - Result: 200
10:08:52.873 [main] INFO execution.ExecutionUtils - {
"status" : "SUCCESS",
"marketId" : "1.134190324",
"instructionReports" : [ {
"status" : "SUCCESS",
"instruction" : {
"betId" : "102489083069"
},
"sizeCancelled" : 5.63,
"cancelledDate" : "2017-09-14T10:08:52.000Z"
}, {
"status" : "SUCCESS",
"instruction" : {
"betId" : "102489083236"
},
"sizeCancelled" : 5.59,
"cancelledDate" : "2017-09-14T10:08:52.000Z"
} ]
}
10:08:52.873 [main] INFO execution.ExecutionUtils - Submitting placements: {"marketId":"1.134190324","instructions":[{"orderType":"LIMIT","selectionId":12743977,"side":"LAY","limitOrder":{"size":76.06,"price":2.96,"persistenceType":"LAPSE"}}]}
10:08:52.895 [main] INFO execution.ExecutionUtils - Result: 200
10:08:52.895 [main] INFO execution.ExecutionUtils - {
"status" : "FAILURE",
"errorCode" : "INSUFFICIENT_FUNDS",
"marketId" : "1.134190324",
"instructionReports" : [ {
"status" : "FAILURE",
"errorCode" : "ERROR_IN_ORDER",
"instruction" : {
"selectionId" : 12743977,
"limitOrder" : {
"size" : 76.06,
"price" : 2.96,
"persistenceType" : "LAPSE"
},
"orderType" : "LIMIT",
"side" : "LAY"
}
} ]
}
10:08:52.895 [main] ERROR execution.ExecutionUtils - Placement unsuccessful. Aborting.
This led to another scramble with me frantically trying to close the position. In the end, I managed to make about £6 of profit from that market (accidentally going short just before it kicked off with that runner losing in the end).
As you'll find out later, this will be the most money that project Betfair will make.
So what happened here? First, the bot's back bets were being matched disproportionately often: at 10:08:52.623 it held 45.99 contracts. It was still below its maximum exposure level, so it placed a couple more back bets far away from the market at 10:08:52.769. Those immediately got matched (see "orderStatus" : "EXECUTION_COMPLETE" in the REST response), bringing the bot's exposure to above 75 contracts, so at 10:08:52.789 it decided to completely liquidate its position.
What happened next was dumb: instead of placing a lay bet of £76.06 / 2.96 (the number of contracts divided by the odds), it placed a bet of £76.06. This would have given the bot a massive short position on the runner, but it didn't have enough money to do so, so instead Betfair came back with an INSUFFICIENT_FUNDS error.
Interestingly enough, if the bot had managed to close its position, it would have made money (though less): the Back 7606, Lay 0, cash -2538 part basically says it had paid £25.38 for 76.06 contracts, implying average odds of 3.00, so with a lower lay at 2.96 it would get £25.69 back, leaving it with a profit of £0.31.
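For reference, the sizing rule the liquidation should have used, as a small sketch with the numbers from the log above:

// To flatten a long position of N one-pound contracts with a lay at the given odds,
// the lay stake should be N / odds, not N pounds.
val contractsHeld = 76.06
val layOdds = 2.96
val correctLayStake = contractsHeld / layOdds // ≈ £25.70, roughly the £25.69 mentioned above
val buggyLayStake = contractsHeld             // £76.06, what the bot actually submitted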
After fixing those bugs, I encountered a few more: for example, since the bot couldn't place orders below £2, it would sometimes end up with a small residual exposure at the end of trading that it couldn't get rid of. There were also other issues: for example, the bot wasn't incorporating instant matches (as in when the bet placement REST request came back saying the bet had been immediately matched). After a couple more runs I managed to accidentally lose most of the £7 I had accidentally made and decided to stop there for now.
What I was also interested in was the fact that despite my assumptions, offers as far as 3 Betfair ticks away from the market would get matched. This was partially because the bot was slower than when I simulated it and partially because matching in greyhound markets indeed happened in jumps, but that gave me a different idea: what if I did actually simulate making a market further away from the best bid/offer?
And later on, I would come across a different and simpler idea that would mean market making would get put on hold.
Next time on project Betfair, we'll tinker with our simulation some more and then start looking at our data from a different perspective.
As usual, posts in this series will be available at https://kimonote.com/@mildbyte:betfair or on this RSS feed. Alternatively, follow me on twitter.com/mildbyte.
Interested in this blogging platform? It's called Kimonote, it focuses on minimalism, ease of navigation and control over what content a user follows. Try the demo here and/or follow it on Twitter as well at twitter.com/kimonote!
Previously on project Betfair, we started fixing our market-making bot so that it wouldn't accumulate as much inventory. Today, we'll try to battle the other foe of a market maker: adverse selection.
Consider the microprice from part 3: the average of the best back price and the best lay price, weighted by the volume on both sides. We had noticed that sometimes a move in the best back/lay can be anticipated by the microprice getting close to one or the other.
Let's quantify this somehow. Let's take the order book imbalance indicator, showing how close this microprice is to either the best back or the best lay: $$\frac{\text{BBVolume} - \text{BLVolume}}{\text{BBVolume} + \text{BLVolume}}$$ Can this predict price movements?
Oh yes it can. This graph plots the average move in the best back/lay quotes at the next tick (as in the next Betfair message on the stream), conditioned on the cumulative order book imbalance. In other words, the blue circles/crosses show the average move in the best lay/back quote, assuming the order book imbalance is above the given value, and the red markers show the same, but for order book imbalances below the given value.
For example, at order book imbalance values above 0.5 the average move in the best back/lay quotes in the next message is about 0.1 Betfair tick (this time I mean a minimum price move, like 1.72 to 1.73) and for order book imbalance values below -0.5 the average move in the best back/lay quotes is about -0.1 Betfair tick.
Essentially, at high negative or positive order book imbalance values we can say that there will be an imminent price move. There are several intuitive explanations for this phenomenon. For example, we can say that if aggressive trades happen randomly against both sides, the side with less volume offered will get exhausted earlier, hence the price will naturally move towards that side. In addition, if offers on a given side represent an intention to back (or lay), the participants on that side might soon get impatient and cross the spread in order to get executed faster; the side with more available volume thus wins, pushing the price away from itself.
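For concreteness, here's the indicator from the formula above as a one-liner (variable names are mine):

// Order book imbalance from the volumes at the best back and best lay quotes:
// +1 means all the volume is on the back side, -1 means it's all on the lay side.
def imbalance(bestBackVolume: Double, bestLayVolume: Double): Double =
  (bestBackVolume - bestLayVolume) / (bestBackVolume + bestLayVolume)

// e.g. volumes of 198 and 472 give (198 - 472) / (198 + 472) ≈ -0.41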
This effect is quite well documented in equity and futures markets and is often used for better execution: while it's not able to predict large-scale moves, it can help an executing algorithm decide when to cross the spread once we've decided what position we wish to take. For example, see this presentation from BAML — and on page 6 it even has a very similar plot to this one!
This is a very useful observation, since that means the market maker can anticipate prices moving and adjust its behaviour accordingly. Let's assume that we're making a market at 1.95 best back / 1.96 best lay and the odds will soon move to 1.94 best back / 1.95 best lay. We can prepare for this by cancelling our lay (that shows in the book as being available to back) at 1.95 and moving it to 1.94: otherwise it would have been executed at 1.95 shortly before the price move and we would have immediately lost money.
So I added another feature to CAFBot: when the order book imbalance is above a given threshold (positive or negative), it would move that side of the market it was making by one tick. So, for example, let's again say that the current best back/lay are 1.94/1.95 and the order book imbalance threshold is set to be 0.5. Then:
For now, I didn't use any of the inventory management methods I had described in the previous part (though there are some interesting ways they can interact with this: for example, could we ever not move our quotes at high imbalance values because an imminent price move and hence trades against us could help us close our position?). I did, however, keep the make-market-at-3-price levels feature.
Let's see how it did on our guinea pig market.
So... it made money, but in a bad way. Having no inventory control made the bot accumulate an enormous negative exposure during the whole trading period: its performance only got saved by an upwards swing in odds during the last few seconds before it closed its position. In fact, a 6-tick swing brought its PnL up £10, from -£6 to £4. Not very healthy, since we don't want to rely on general (and random) price moves: it could just as well have stopped trading before that price swing and lost money. On the other hand, there's still the interesting fact that it made money while generally being short in a market whose odds trended downwards (3.9 to 3.5), as in against its position.
Good news: the order book imbalance indicator kind of worked: here the same segment from part 3 is plotted, together with the bot's orders that got executed. You can see that where the old version would have had the price go through it, the new version sometimes anticipates that and moves its quotes away. In addition, look at the part shortly after 12:10 where the price oscillates between 3.65 and 3.55: since the microprice after the price move is still close to the previous level, the bot doesn't place any orders at 3.6.
However, the fact that we don't have inventory control in this version hurts the bot immensely:
Look at that standard deviation: it's larger than that of the first naive version (£3.55)!
Let's put the insights from this and the previous part together and see if we can manage to make the bot not lose money. I combined the mitigation of inventory risk and adverse selection as follows: move the quote on one side away by one tick if either the order book imbalance is high enough or our exposure (inventory) is high enough. Effectively, if the bot would have moved its lay bet lower by 1 tick because of high negative order book imbalance (odds are about to move down) as well as because of high negative exposure (so the bot doesn't want to bet against even more), it would only move the lay bet by 1 tick, not 2.
On the other hand, let's assume there's a high negative order book imbalance but the bot has a large positive exposure. Should the bot still move its lay quote given that that quote getting hit would mean the bot would sell off a bit of its inventory? I reasoned that if the odds were about to move down, that would be good for the bot's profit (since odds going down is the same as implied probability going up, so being long benefits the bot) and in fact would allow it to offload its exposure at even lower odds later on.
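Condensed into code, the combined rule shifts each side by at most one tick regardless of whether one or both triggers fire. Here's a sketch with made-up threshold names, matching the logic that also appears in the production getTargetMarket snippet shown earlier:

// Shift each side of the quotes by at most one tick, whether the trigger is the
// order book imbalance, the inventory, or both at once.
def backQuoteShift(exposureFraction: Double, imbalance: Double,
                   expThreshold: Double, imbThreshold: Double): Int =
  if (exposureFraction >= -expThreshold && imbalance >= -imbThreshold) 0 else -1 // ticks below the best back

def layQuoteShift(exposureFraction: Double, imbalance: Double,
                  expThreshold: Double, imbThreshold: Double): Int =
  if (exposureFraction <= expThreshold && imbalance <= imbThreshold) 0 else 1    // ticks above the best lay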
So with that in mind, let's see how the brand new CAFBot does on our example horse racing market.
Look at it go. Basically a straight upwards line during the last 15 minutes and all of this while juggling its exposure back and forth like a champ. Beautiful. Let's take a look at the whole dataset.
Damn. At least it's losing less money than the version that moved the bot's quotes at high inventory values: about half as much, in fact (-£0.58 vs -£1.04).
Looking closer at what happened, it looks like in some markets our fancy inventory control didn't work.
What happened here was that while the bot had a large long position and wasn't placing bets at the current best available lay, those bets were still matching as the prices would sometimes jump more than one tick at a time. The odds were violently trending upwards and so there weren't as many chances for the bot to close out its position.
How about if we get the bot to stop trading at all on one side of the book if its position breaches a certain level? While this makes the PnL distribution less skewed and slightly less volatile, it doesn't improve the mean much.
Meanwhile in the greyhound racing market, things weren't going well either.
Back to horses again, is there some way we can predict the money the bot will make/lose in order to see if some markets are not worth trying to trade at all?
First of all, it doesn't seem like the PnL depends on the time of day we're trading.
The dataset is mostly UK and US horse races and the time is UTC. The UK races start at about 11am and end at about 7pm, whereas the US ones run from about 8pm throughout the night. There are some Australian races there, but there are few of them. In the end, it doesn't seem like the country affects our PnL either.
In addition, the money the bot makes isn't affected by the amount of money that's been matched on a given runner 15 minutes before the race start.
...and neither is the case for the amount of money available to bet on a given runner (essentially the sum of all volumes available on both sides of the book 15 minutes before the race start).
That's a curious plot, actually. Why are there two clusters? I was scared at first that I was having some issues with scaling in some of my data (I had switched to using integer penny amounts throughout my codebase instead of pounds early on), but it actually is because the UK races have much more money available to bet, as can be seen on the following plot.
I also tried some other additions to CAFBot that aren't really worth describing in detail. There was the usual fiddling with parameters (different order book imbalance threshold values as well as the inventory values beyond which the bot would move its quotes) and minor adjustments to logic (for example, not moving the quotes at high order book imbalance values if getting that quote hit would help the bot reduce its exposure).
There was also Hammertime, a version of CAFBot that would move both back and lay quotes in case of high order book imbalance, in essence joining everybody else in hammering the side of the book with fewer offers. Theoretically, it would have taken a position (up to its position limit) in the direction that the market was about to move, but in practice the order book imbalance indicator isn't great at predicting larger-scale moves, so most of those trades would get either scratched out or end up as losses.
In addition, I had another problem, which is why I had started looking at whether it's possible to select markets that would be better to trade in: submitting an order to Betfair isn't actually free. Well, it is, but only if one submits fewer than 1000 actions per hour, after which point Betfair begins to charge £0.01 per action. An action could be, for example, submitting a bet at a given level or cancelling it. Actions can't be batched, so a submission of a back at 2.00 and a back at 2.02 counts as 2 actions.
This would be especially bad for CAFBot, since at each price move it has to perform at least 4 actions: cancelling one outstanding back, cancelling one outstanding lay, placing a new back and placing a new lay. If it were maintaining several offers on both sides of the book and the price moved by more than one tick, moving its quotes would cost it even more actions. From the simulation results, running the bot for 15 minutes would submit, on average, about 500 actions to Betfair with a standard deviation of 200: roughly 2000 actions an hour, well beyond the 1000-an-hour free allowance.
Throughout this, I was also working on Azura, the Scala version of CAFBot (and then CAFBotV2) that I would end up running in production. I was getting more and more convinced that it was soon time to leave my order book simulator and start testing my ideas by putting real money on them. As you remember, the simulator could only be an approximation to what would happen in the real world: while it could model market impact, it wouldn't be able to model the market reaction to the orders that I was placing.
And since I was about to trade my own money, I would start writing clean and extremely well-tested code, right?
Next time on project Betfair, we'll learn how not to test things in production.
As usual, posts in this series will be available at https://kimonote.com/@mildbyte:betfair or on this RSS feed. Alternatively, follow me on twitter.com/mildbyte.
Interested in this blogging platform? It's called Kimonote, it focuses on minimalism, ease of navigation and control over what content a user follows. Try the demo here and/or follow it on Twitter as well at twitter.com/kimonote!