a timeline of responsibility

[estimated reading time 8 minutes]

much of the modern internet was created by a simple piece of legislation: the american “communications decency act”, specifically section 230, where it says “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”. if you’re not a lawyer (which i’m certainly not), this has a very simple meaning. facebook, instagram, twitter and their social-media siblings can allow people to put whatever content they like on the internet and they’re not responsible for it. with only narrow exceptions (federal criminal law among them), they can’t be held liable even if what’s posted there is harmful or illegal.

of course, this has built the internet, allowing people to share information without the companies that enable that sharing taking legal responsibility for it. and that’s generally a good thing, because being able to share photographs and videos and messages is exactly what made the internet the single communications platform people rely on for everything from their meetings to their birthdays to their intimate affairs. the internet is all things to all people. but it’s not safe.

the next part of that same section goes on to say that any company that removes information it finds offensive or illegal is protected from the people who put it there in the first place. that’s definitely a start on what we now call “content moderation”, but it’s very weak. it doesn’t mean companies have to remove things that are problematic. it just means they can if they want to and not get in trouble for it.

many people (matt stoller, for example, one of the best-known critics of the law) have written calling for it to be eliminated. sadly, if it is, the internet as a platform for social media will simply stop existing. well, not quite. if the law stops existing and is replaced by another guaranteeing freedom from prosecution for the views of users on public platforms, it will continue. but without those protections, facebook and instagram will open themselves to prosecution with every post shared and simply won’t be able to exist. and people won’t post if they know each thing they submit has to be moderated before it goes live, because that won’t allow instant sharing, the whole reason the internet has become ubiquitous. delays aren’t part of people’s understanding of how technology works.

so we’re at an impasse, right? keep the protection and companies can allow hate speech, racism, discrimination and even calls to violence, not to mention intentional and harmful misinformation (trump, johnson and putin, i’m looking at you!), to expand exponentially as automated writing tools and modification engines flood past the protective walls of the social media empires. remove the protection and suffocate the internet by starving it of its lifeblood: the continuous stream of new material that makes it a place where young people spend every waking moment obsessively consuming its clickbait and conspiracies.

well, no. and this is the case for two completely different reasons, but their combination may paint a way forward if the weak and mindless american leadership can contemplate action for a change to try to improve things. (given their unwillingness to actively engage in protecting ukraine, whether out of disinterest or racism or some combination of the two, i’m not sure, that seems unlikely, and this is quite sad.) the first is that this is an american law and something else is already working in a much larger population base. the second is that the whole concept has ignored the idea of time. the internet used to be a permanent publication platform, and in some ways it still is, but more than ever it’s about what’s happening right now, in this moment. this law comes from a time when the internet was understood as a collection of books. the internet has become a never-ending broadcast stream, and that’s not the same thing.

first, though, let’s take a short trip east. you probably know wechat and weibo, but tencent video and douban may be new to you. there are just as many massive social media platforms in china as there are across the western countries, which isn’t surprising, because china’s population is nearly the size of europe and the americas combined. and where there are people living modern lives, there’s internet, social media and the question of responsibility.

while europe has realistically been able to ignore this problem completely (we know how the uk would answer the question, given that its government is so anti-education and anti-knowledge, though how the divided populations of france, germany and the scandinavian countries would see the issue is potentially more nuanced, if irrelevant, given the complete lack of serious media players in any of them), the american answer has provided the groundwork in the west, because that’s where the companies live and have to be regulated. the approach the chinese government has taken to regulating information is much less black-and-white than its american counterpart. that’s an oddity if you know much about chinese law, where strictness and public responsibility tend to be far more emphasized in the name of protecting the general public.

the simple answer, though, is this: in china, people are much more responsible for the content they create. this has been somewhat overlooked in the mess in the west. the idea has been that either the company publishing the content (facebook, instagram, twitter…) is responsible or nobody is. but that leaves out the person who made it in the first place. there is certainly a tradition in the west of “free speech”. but that free speech is a massive problem. while i don’t support the idea of free speech in general, this is a very specific case of it where it’s even more difficult to justify than elsewhere, because the internet isn’t a private room where everyone should certainly be allowed to talk without restriction. it’s a public stage. and we have laws, not to mention social guidelines, about what you’re allowed to say in front of a massive audience without it being considered reprehensible, dangerous and, in many cases, inciting violence and hate.

the chinese approach has mostly been to consider anything posted on the internet a public statement and to judge things like misinformation and hate speech not on the basis of free individual interaction but by the same laws that would apply to any other distributed publication, like a newspaper or book: spread hate, disinformation or offensive material and you will be prosecuted. this is the first step the west needs to take to deal with misinformation about politics and health crises (antivax people, you can choose between getting a shot and being shot, but you have to choose one and choose it now).

it would also be a significant part of the solution to deepfakes (not just pornography but all deepfakes): make individual creators legally responsible for the existence of artificial and misleading images and text, and make such creation illegal, punishable by fines and jail. if the existence of an artificially-created photograph of a person without their consent were enough to put its creator behind bars for a decade, the proliferation of such images would quickly come to an end.

second, though, is something a little more subtle. while american politics and laws are not known for their subtlety, this might be a case where a new leaf has to be not just turned over but regrown from the roots. we can stop thinking of the internet as a single thing, a mass-distribution platform, and start imagining it as two parallel streams: one a flow of information, the other a permanent publication repository. in other words, the internet is a public forum and a public library, but not both indistinguishably at the same time. yes, as i said, this is subtle and nuanced.

as a public forum, people could continue to interact, whether in private discussions or in the public sphere, with relatively little moderation. they would still have to be held accountable for what they say, in the sense of “if this were written in a newspaper article distributed around the world in print, would it be considered acceptable?”. but that would realistically not be a huge limitation, and it is perfectly reasonable to hold interacting adults accountable for the words they write and speak. if we can’t be responsible for our actions, we cease to be human, don’t we? this is where instant-access social media, for example, would continue to thrive.

as a public library, however, far more moderation would not simply be permitted and protected if companies wished to do it but absolutely required. a law could be introduced to add the publisher to the list of those accountable, in the legal and public sense, for the content on their platforms: not immediately, but after a time. for example, a law stating that social media publishers like facebook were responsible for all content published to public groups after twenty-four hours and to limited groups after seventy-two hours would mean they could continue to allow freely-posted information with only the creator taking responsibility for it for the first day (if public) or three (if to a smaller group) before their moderating team also became liable for it. it would give them time to remove anything that violated regulations about what is safe, true and appropriate without stifling the simultaneity of modern internet life (a small sketch of the timing rule follows below). of course, this wouldn’t remove the individual creator from responsibility, and that brings us to the third piece of this (yes, i know i said there were only two, but this is actually a requirement of both, so whether it’s a third is up to you to decide): identity.
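to make that timing rule concrete, here’s a minimal sketch in python. everything in it is invented for illustration (the function name, the audience labels); only the twenty-four and seventy-two hour windows come from the proposal above:

```python
from datetime import datetime, timedelta

# hypothetical windows taken from the proposal above
MODERATION_WINDOW = {
    "public": timedelta(hours=24),   # posts to public groups
    "limited": timedelta(hours=72),  # posts to limited groups
}

def liable_parties(posted_at: datetime, audience: str, now: datetime) -> list[str]:
    """return who is legally accountable for a post at time `now`.

    the creator is liable from the moment of posting; the platform
    becomes jointly liable once its moderation window has expired.
    """
    parties = ["creator"]
    if now - posted_at >= MODERATION_WINDOW[audience]:
        parties.append("platform")
    return parties

# a public post from thirty hours ago is now the platform's problem too
posted = datetime(2024, 1, 1, 9, 0)
print(liable_parties(posted, "public", posted + timedelta(hours=30)))
# -> ['creator', 'platform']
print(liable_parties(posted, "limited", posted + timedelta(hours=30)))
# -> ['creator']
```

the design point is simply that the creator is on the hook from the first second, while the platform only becomes liable once it has had a fair chance to moderate.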

i have very strong beliefs about personal identity on the internet, but even a softer version would make both of these solutions possible and provide a combined resolution to much of the problem of hate and disinformation online. first, though, the complete and thorough answer, before we look at how the problem could be solved without going that far (though i believe we should eventually continue down that path to the end).

i believe all information shared on the internet should have a name behind it. not a company name, not a pseudonym, not an organization. the name of a real person whose identity has been verified either by a government or by a government-regulated private entity (facebook, for example, could have a department responsible for identity-checking its users). this would mean that each individual would have a single voice and be held accountable, not just legally but in the world of public awareness and opinion, for their actions. this may sound extreme, but think about what the internet is. it’s like the live television of the days before the internet. imagine, in the sixties or seventies, the idea of someone appearing on a live television news broadcast with neither the television network nor the viewers having any idea who the person was. why is this suddenly accepted today? not only does nobody know you’re a dog on the internet, as the old cartoon suggests, they don’t know if you exist at all or how many theoretical people you exist as. are you a bot? do you represent yourself or others? this must end or truth will simply cease to exist as time goes on.

that being said (and i’m certain the survival of our species and society depends on individuals being held accountable for what they say and create), it’s possible to implement both pieces of the solution we’ve been talking about today without, for the moment, walking that road to its inevitable end. don’t misunderstand: i am talking about eliminating anonymity, and that’s fundamentally necessary for the internet to ever be a safe place. but it is possible to do this in a far less public way and still get the first piece of the results we need in terms of social media responsibility.

while i believe it is necessary in the long run for everyone to be responsible, liable and accountable publicly to everyone, perhaps all we need at the moment is for them to be accountable in the governmental sense: legally. what that would mean in practice is that creators would have to be identifiable to the publication platform, and that this information would be continuously accessible by the government of the country where the platform is based and by the government of the country where the individual creator was at the time of creation, but not by the general public. let’s think about this as an example. say someone in germany posts on facebook. facebook would have to know who they are: not who they say they are but their real, passport-validated identity. that information would be attached to each of their posts but not shared on the site. the american government (home of facebook) and the german government (home of the act of creation and posting) would know who the person was. the person would then be held responsible for anything they shared (which should mean everything is ok, because almost all content is perfectly fine; we’re only talking about restricting hate speech, misinformation and other illegal content, not the general free flow of conversation). after a period of days, facebook would also be responsible, but the creator would continue to be held accountable too (a sketch of what such a record might look like follows below). what this would do is pave the way for the elimination of misinformation, artificial content and hate speech without being nearly as damaging to the social media companies as total responsibility for content, or as weak as the current protective legal framework.
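here’s a minimal sketch in python of what such a post record might look like. all the names and fields are invented for illustration; only the idea comes from the proposal above (a verified identity attached to every post, visible to the two governments and nobody else):

```python
from dataclasses import dataclass, asdict

@dataclass
class VerifiedIdentity:
    # passport-validated identity, held by the platform,
    # disclosable only to the two governments described above
    legal_name: str
    passport_country: str  # "DE" for our hypothetical german poster
    passport_number: str

@dataclass
class Post:
    post_id: str
    body: str
    identity: VerifiedIdentity  # attached to every post, never rendered

    def public_view(self) -> dict:
        # what everyone on the site sees: the content, no identity
        return {"post_id": self.post_id, "body": self.body}

    def government_view(self, requester_country: str, platform_home: str = "US") -> dict:
        # full record, released only to the platform's home government
        # or the government of the country where the post was created
        if requester_country in (platform_home, self.identity.passport_country):
            return asdict(self)
        raise PermissionError("identity is not public information")

post = Post("p1", "hello from berlin",
            VerifiedIdentity("erika mustermann", "DE", "C01X00T47"))
print(post.public_view())          # no identity leaked
print(post.government_view("DE"))  # german authorities can see who posted
```

the point of the structure is that the identity travels with the post as a matter of record-keeping, not as something ever rendered to other users.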

what this means is that, despite many voices arguing the opposite, there is a clear path out of this that doesn’t perpetuate the cycle of hate and harm but also doesn’t destroy the basis for the internet and its fundamental free flow of information. i hope you have gotten something out of this path of thoughts today. thank you so much for lending me your eyes and minds for a few minutes.
