
Building Kind Social Networks

What I learned creating a private, anonymous social network.


For better or worse (lately, mostly worse), I’m a regular Twitter user. And as Twitter increasingly becomes part of our national dialogue (see prior parenthetical) while publicly grappling with some of its abuse problems, I find myself thinking harder about anonymity—the role it plays in the internet, and the balance of harm it inevitably tilts.

Anonymity is…kind of a thing with me. Before I joined Postlight I spent a couple of years running Airendipity, a sort of twee social network in which everyone was anonymous and all you could do was send little paper airplanes with some text inside. The paper airplanes would bounce around the globe from phone to phone, picking up placenames (your plane just landed in Cairo!), hearts, and comments along the way.

You’ll notice that last sentence was in the past tense: Airendipity is no longer, as I found it difficult, both technically and ethically, to charge money for a service people used to do things like come out as gay in a country where that was illegal. But in running it I learned a lot about how to prevent abuse, how to cultivate kindness in an online community, and—unfortunately—how easy it is for the creators of social networks to absolve themselves of any real emotional work.

Addressing abuse is a switch, not a scale

The proprietors of social networks, especially ones that are totally open and unidirectional like Twitter, like to pretend that corralling abuse is a balancing act. They treat their networks like a free market, provide some paltry regulation in the form of reporting tools—often buried in arcane menus or requiring technical skills like screenshotting to even submit reports—and claim that the system will reach natural equilibrium on its own, like a weighted scale.

But it never will: the cost of sending abuse (create account, write dumb thing) will always be less than the cost of receiving and regulating abuse (see it, feel bad, fill out report forms, block single user, try to fall back to sleep).

In reality, addressing abuse is not a scale, but a switch. The system either favors the abuser or the abused. There are many reasons for this, and we could get real philosophical on the nature of humankind here, but an important reason is that the network’s idea of anonymity is different from our lived experience of it.

My name is Egg

In a place like Twitter, an anonymous account is an egg avatar with a name like ClodiusPulcher96, and a “real person” account is a picture of me in a Halloween unicorn costume with a name like Kevin Barrett, right? And on Facebook I have to use my real name, which means what I post there is a 100% accurate representation of who I am in the real world. Except that I feel totally cool on Twitter wishing the worst kind of plague (boils) on every handle associated with United Airlines I can possibly find when my real non-paper plane spends an extra 30 minutes on the tarmac looking for a gate. And on Facebook I can, without a second thought, end real-life friendships over something dumb left as a comment on an old tagged photo.

Even when writing behind images of my own face and under my real name I still feel somewhat anonymous. Twitter and Facebook disagree. And so the switch of abuse there favors the abuser, because the abuser is shielded by their sense of anonymity. The abused is not. They feel everything.

This can be fixed

I built Airendipity by myself. It was my first time building anything on the server. And in the end I had a working system whose switch was flipped in favor of its users, not its abusers. I’m going to lay out a couple of ideas I used to make this happen, but it’s worth keeping in mind that I was a neophyte Rails dev and that the fixes available to giant tech companies are so much more subtle and sophisticated that it’s unconscionable they have yet to be implemented.

  1. Trust your users. If they say a piece of content was intended to harm them then remove it, at least temporarily. The great (heretofore unrealized) fear that users will abuse the actual systems for reporting abuse is not worth a single person’s fear for their own life. Plus, it’s pretty easy to tell the difference between someone who’s reporting people they dislike and someone who’s reporting intended harm. I know, since I did it.
  2. Do not guarantee a platform. Somewhere along the line social networks stopped billing themselves as fun toys and started acting like basic human rights. Until the UN buys Twitter (swoon) that’s not the case. In Airendipity, if you got reported a lot, you were out, at least for a little while. Again, the same diligent counter-reporting algorithms apply. But it’s your platform—even building it is an opinionated act, so you might as well make that opinion something like trolls are not a necessary part of societal discourse.
  3. Embrace context. Airendipity only allowed you to leave a public comment on another’s airplane after you’d read its full journey—all the hearts and comments it had already accumulated. There was no re[tweet|blog|broadcast] mechanic to remove an airplane from its context. Participating in Airendipity meant seeing the entire community. Airendipity is the only social network in which I’ve ever seen calm, considered discussions about hard topics. It felt good to talk there, never fraught or hopelessly charged.
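To make the second idea concrete: the “reported a lot, you’re out for a little while” behavior can be sketched as a simple report counter with a temporary timeout. This is only an illustration in Ruby (I built Airendipity in Rails, though this isn’t its actual code); the class name, threshold, and timeout length are all hypothetical.

```ruby
# Hypothetical sketch of report-threshold logic: enough reports in good
# standing triggers a temporary suspension, not a permanent ban.
class ReportTracker
  REPORT_LIMIT = 3        # hypothetical: reports before a timeout kicks in
  TIMEOUT = 24 * 60 * 60  # hypothetical: one day, in seconds

  def initialize
    @reports = Hash.new(0)  # user_id => count of reports received
    @suspended_until = {}   # user_id => Time when the timeout ends
  end

  # Trust the reporter: count the report immediately, review later.
  def report(user_id, now: Time.now)
    @reports[user_id] += 1
    if @reports[user_id] >= REPORT_LIMIT
      @suspended_until[user_id] = now + TIMEOUT
      @reports[user_id] = 0  # reset once the timeout starts
    end
  end

  def suspended?(user_id, now: Time.now)
    ends_at = @suspended_until[user_id]
    !ends_at.nil? && now < ends_at
  end
end
```

The design point is that the switch flips in favor of the abused by default: the cost of a timeout falls on the account accumulating reports, and the suspension expiring on its own keeps the penalty proportionate while the “diligent counter-reporting algorithms” mentioned above would handle bad-faith reports separately.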

Airendipity wasn’t perfect and is no longer, but I think it’s important for those who use (and, like me, often really enjoy) social networks to understand that abuse there isn’t the inevitable result of our collective humanity, in the same way that the Wild West wasn’t wild because it was the West. There are (and were) just very few people on the inside trying to tame it. Right now the switch is flipped the wrong way. It doesn’t have to be.

Kevin Barrett is a senior software engineer at Postlight.

Story published on Dec 14, 2016.