Analyzing And Fixing Facebook’s Death Spiral

Facebook is in trouble; they are bleeding advertisers at a rapid rate, with estimates that the number of companies refusing to touch the platform has now climbed into quadruple digits.  Facebook is betting these brands will come crawling back, and they could be right. Still, it is doubtful the brands will return until well after the November election, once things become significantly calmer than they currently are. 

This week let’s explore the nightmare that Facebook finds themselves in and how they might be able to dig out. 


Selective Censorship

I’m a part-time moderator, among other things, and while my load isn’t significant, I get copied on what my peers do, and moderating, even in a calm year, isn’t easy.  You have the fuzzy concept of “free speech” and a changing set of rules regarding acceptable language.  What’s more, any decision you make will likely remain on the web indefinitely, and, given how quickly the rules change, what might seem to be a reasonable decision today may not seem so reasonable months or years into the future.   

Let’s take the term “blacklist”; it was undoubtedly both widely used and acceptable a few months ago, and it does not have a racial origin (the word “black” wasn’t used to describe a race when the term was coined). The term was in extensive use in politics, retail, and even programming (mostly in security).  But now we are working to remove it from most media, including web sites and even commented code, because it is viewed as racially insensitive.  You could waste a ton of time fruitlessly arguing against this change, making yourself appear racist while hanging your future on free speech, or you could recognize that the world isn’t fair and buckle down to eliminate the term.   (Between you and me, the latter will almost always be the better path in terms of cost and reputation.)  
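
In code, the change itself is usually a straightforward rename.  Below is a minimal, hypothetical Python sketch (the domain names and the function are invented purely for illustration) showing the now-common substitution of “blocklist” and “allowlist” for the older terms in a simple security filter:

    # Hypothetical example: replacing "blacklist"/"whitelist" naming with
    # "blocklist"/"allowlist" in a simple domain filter.
    BLOCKLIST = {"badactor.example.com", "spam.example.net"}   # formerly BLACKLIST
    ALLOWLIST = {"partner.example.com"}                        # formerly WHITELIST

    def is_permitted(domain: str) -> bool:
        # Explicitly trusted domains pass, blocked domains are rejected,
        # and everything else is allowed by default.
        if domain in ALLOWLIST:
            return True
        return domain not in BLOCKLIST

The rename costs almost nothing technically; the cost is in the argument you avoid having.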

On top of that, people can be difficult to deal with when you are moderating them.  For instance, in a sister forum to the one I moderate, a user was being abusive to anyone he thought was asking a dumb question.  But this user was also very engaged and very helpful when he wasn’t being abusive.  One of my peer moderators warned him about his language and attitude, to which he responded that he wasn’t going to change, and then he dared the moderator to ban him.  The moderator did.  The result is that, in a forum with declining interest, we removed someone who was driving engagement, which damaged the forum.  

The result is a friendlier forum but also one that is more likely to fail than it was, because we’ll lose engagement, and on a forum, engagement drives interest and eyeballs, which, in turn, drive advertising revenue.    

Now take that individual instance and blow it up to Facebook’s volume, and you should be able to see the problem.  If they automate and correct at machine speeds, they’ll lose users at machine speeds, and, given their volume, they are sure to aggravate groups of people who don’t agree with their decisions.  And if they don’t automate, they will still aggravate the groups who disagree with whatever decisions they do make.  

What Facebook did instead was try to find some screwy middle ground, allowing unacceptable behavior by influential people, which didn’t mitigate the complaints about censorship and still angered the other side.  In short, their path upset both the Republican and Democratic parties, which collectively dominate the population of the country where the company is headquartered.  


Pick A Side

While going down the middle of an issue often sounds reasonable, the more likely result, which we see playing out here, is that you end up upsetting both sides and doubling the damage you take.   Facebook is bleeding both users and advertisers because both sides now think Facebook is unreasonable, and, in a way, both sides are right.  

When it comes to behavior like racism and hate speech, there is no safe middle ground because the result is binary: it either is or it isn’t.  So this middle path upsets both those who want their words to go out unaltered and those who find the result unacceptable.  More importantly, when you review something and then allow it, you create the appearance that you agree with it, which has now become financially damaging to Facebook.  While they might not agree with what was said, by allowing it to pass through their control system as an exception, they are effectively blessing the act.  And because this position is inconsistent with the company’s global policy, Facebook is sending the message that some people are above its rules.  

Twitter, which deployed a much more consistent policy, has, as a result, taken far less heat and isn’t currently losing users and advertisers.  Their position, while also not perfect, appeared fairer, and at no time was the objectionable language attributed to them.  


Wrapping Up:  Getting Out Of This Mess

The fix now goes beyond implementing a consistent policy because that boat has mostly sailed.  Facebook needs to put someone in charge of this effort, and likely of the company, who better understands human behavior and has the political experience to deal properly with governments.   They still need that consistent policy, consistently applied, but the firm now also needs to do damage control and avoid merely appearing to address unacceptable behavior.  

Facebook needs to again become a company that people trust and start behaving strategically, because they are rapidly being buried by a sequence of bad tactical decisions that must be reversed.  If they don’t, the firm is likely to be both broken up and heavily regulated, which will likely, eventually, kill it.  

In the end, with moderation, you either do it right, consistently, and thoroughly or not at all.  Any other choice will be far more damaging and maybe even deadly to the company.