Between a Double and the Deep Blue Sea
What are we building today that will blow up in our face in 2026?
There’s this company called MaxMind. If you were a web nerd in the early 2000s you probably know them: Since around 2000 they’ve released software that turns Internet addresses into geographic locations. You give the software an address like 192.168.1.1 and it gives you a latitude-longitude point. They made a lot of their data available for free. This was really useful if you were selling online advertising in 2005, back when you were expected to host your own ad-serving platform (i.e. before Google ran the Internet).
The connection between an Internet address and physical geography is obviously pretty tenuous. So when MaxMind gets an Internet address and has no idea where in the world it is, it spits out a default location that’s very close to the geographic center of the United States.
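The failure mode is easy to sketch in code. Here's a minimal, hypothetical Python version of that behavior (the toy database, function name, and prefix matching are my own illustrative assumptions, not MaxMind's actual code or data):

```python
# Hypothetical sketch of a GeoIP lookup that falls back to a default
# coordinate instead of signaling "unknown" -- not MaxMind's real code.

# Toy database: IP prefixes mapped to (latitude, longitude).
GEO_DB = {
    "72.229.28": (40.78, -73.97),   # a made-up New York block
    "17.0.0":    (37.33, -122.03),  # a made-up Cupertino block
}

# The problematic default: near the geographic center of the US.
DEFAULT_LOCATION = (38.0, -97.0)

def locate(ip: str) -> tuple[float, float]:
    """Return (lat, lon) for an IP, or the default if unknown."""
    prefix = ".".join(ip.split(".")[:3])
    return GEO_DB.get(prefix, DEFAULT_LOCATION)

print(locate("72.229.28.13"))  # known block -> (40.78, -73.97)
print(locate("203.0.113.9"))   # unknown -> (38.0, -97.0)
```

Every unknown address, of which there are hundreds of millions, lands on that one coordinate, and the caller has no way to tell a real hit from a shrug.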
The problem is that this default location corresponds to a real house where people live. A journalist named Kashmir Hill at Fusion figured out what was up. She explains:
For the last 14 years, every time MaxMind’s database has been queried about the location of an IP address in the United States it can’t identify, it has spit out the default location of a spot two hours away from the geographic center of the country. This happens a lot: 5,000 companies rely on MaxMind’s IP mapping information, and in all, there are now over 600 million IP addresses associated with that default coordinate. If any of those IP addresses are used by a scammer, or a computer thief, or a suicidal person contacting a help line, MaxMind’s database places them at the same spot: 38.0000,-97.0000.
Which happens to be in the front yard of Joyce Taylor’s house.
And the people who live in the house have experienced all manner of weirdness as a result of living there, including visits from the FBI and a toilet left in the driveway. Hill’s story is great—it’s news, it’s well-informed and well-reported—and her investigation got results.
Watching this story come up for discussion online over the weekend, I was surprised by the two camps that emerged in the comments. I saw a lot of discussion on Twitter, and also quite a bit elsewhere. This Metafilter thread is a pretty good example. It’s a lot to read, but it’s also an interesting primary source with lots of people talking about software and ethics.
From my view, and this is admittedly squinting really hard so it’s blurry, the conversation seems to divide into two camps:
- One camp, let’s call them the Anti-wontfixers, seems to be saying: This is severe negligence. MaxMind is responsible for the disruption and harassment these people have experienced.
- Another camp, call them the Featurists, seems to be saying: This is how software works, and what happened with MaxMind could happen to anyone whose software is applied in places where it obviously shouldn’t be.
Again, a broad characterization! And I’m less interested in who’s right (god knows I have no idea) than in what changed, and how people reacted. (Which is why this newsletter is called Track Changes.) As for MaxMind: Their CEO said to Hill: “We have always advertised the database as determining the location down to a city or zip code level.” They acknowledged that a problem existed and said they’d fix it by putting the default location into a body of water.
That MaxMind returns a default location is a testament to the hot garbage that was open Internet software of the early 2000s. In the woolly, half-amateur world of turn-of-the-millennium web coding there were vanishingly few robust APIs and not many widely understood coding standards. It makes perfect sense to 2002 me that instead of returning a “not found” error message, like a mature piece of software would today, the GeoIP tool would return a default location and call it a day. Given what a bunch of drooling clowns we were back then, not having error conditions was probably a good idea: it meant that incompetent programmers like me, who collectively built the early web, would write less bug-ridden garbage code. Meanwhile, 2016 me is properly horrified, horrified, you’re all fired, shut it down.
To “fix” the problem, as Hill reports, MaxMind is going to return a location in a body of water—not an error message. Because an error message would break everything. Once you lock in and get users, it’s super-difficult to change things.
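The two options look roughly like this in code. This is a hedged sketch, and all names and the particular water coordinate are my inventions, not MaxMind’s API or their actual chosen location:

```python
# Two hypothetical alternatives to a land-based default -- illustrative only.

class LocationNotFound(Exception):
    """Raised when an IP address can't be geolocated."""

GEO_DB = {"72.229.28": (40.78, -73.97)}  # toy database

def locate_strict(ip: str) -> tuple[float, float]:
    """The 'mature software' option: raise instead of guessing.
    Breaks every caller that never expected an error."""
    prefix = ".".join(ip.split(".")[:3])
    if prefix not in GEO_DB:
        raise LocationNotFound(ip)
    return GEO_DB[prefix]

# The compromise Hill reports MaxMind chose: keep returning a coordinate,
# but put the default in a body of water so it can't point at anyone's
# house. (This particular coordinate is made up for the example.)
WATER_DEFAULT = (37.75, -97.82)

def locate_compat(ip: str) -> tuple[float, float]:
    """Backward-compatible option: unknown addresses go in the lake."""
    prefix = ".".join(ip.split(".")[:3])
    return GEO_DB.get(prefix, WATER_DEFAULT)
```

The second version keeps every existing caller working, which is presumably why it won: with 5,000 companies depending on the old behavior, a new error path would be a breaking change.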
Consider also: Google Maps didn’t exist. There was no obvious way to zoom in on a single house by latitude/longitude and see, like, the shape of the driveway. Google Analytics didn’t exist. Huge ad networks didn’t exist. There were fewer targets for online crime, and thus less of it. And online harassment existed, but it was focused on relatively few targets; it wasn’t epidemic and vicious, and it couldn’t be done so casually, as a hobby.
So while I don’t know who’s right, the Featurists aren’t wrong: When this software was created, the thoughts that would be necessary in order to predict this outcome were science-fiction thoughts. And even then most people imagined that only the government had real spying capabilities (as in the film Enemy of the State).
But the Anti-wontfixers are likely more representative of how software is, and will be, received by the broader, less technology-centric world. This camp empathizes with the people who were harassed, and they don’t care as much why something happened. They ask: Given the problem, were there no chances to identify and fix it?
Watching Congressional hearings on technology issues, like the ones over Healthcare.gov, you can see that, no matter how often software people say “bugs happen!”, the wider world—including the people who run it—is less sympathetic. Malice or not, many Superfund sites were once industrious workplaces. People get suspicious.
As the stakes have gone up to the billions of people and trillions of dollars, and the level of detail has gone down to street view, to the point where we have to blur the faces of humans, things that were once entirely meaningless—like a set of coordinates on a map that pointed presumably nowhere—are now very meaningful. Because there is so much data, and so many humans to incorrectly interpret it.
Hill’s story directly involves a small number of people (those who live on the troubled property and MaxMind), but it also involves hundreds of millions of IP addresses, and it kicked off a larger discussion around technology and its role in our lives. When you read that MetaFilter thread, you can see a world of opinions emerging that will not simply resolve to one simple ethical framework that addresses the needs of the technology industry and the wider populace. People feel this stuff. It’s gonna take a while. In some ways, even though we’re 60 years into a digital world, we’re just starting to figure things out.
One thing that gives me the willies is that there’s tons of code out there like that MaxMind code. When new services come along, suddenly the system you built is an engine of harassment, and you might not even know it, or the people being harassed might not even be able to figure out why it’s happening. It makes me wonder what time bombs I’ve personally created, or built software on top of. The answer is: Probably some. Also, what will the next ten or fifteen years look like? What are we building today that will blow up in our face in 2026? We’re building a lot more software. The opportunities for face-explosion are legion, and the number of ways that things can go horribly wrong approaches infinity.
Let’s have a good week out there.
Story published on Apr 11, 2016.