
Facebook now deletes posts that financially endanger/trick people

It’s not just inciting violence, threats and hate speech that will get Facebook to remove posts by you or your least favorite troll. Endangering someone financially — not just physically — or tricking them for profit is now also strictly prohibited.

Facebook today spelled out its policy with more clarity in hopes of establishing a transparent set of rules it can point to when it enforces its policy in the future. That comes after cloudy rules led to waffling decisions and backlash as it dealt with and finally removed four Pages associated with Infowars conspiracy theorist Alex Jones.

The company started by repeatedly stressing that it is not a government — likely to indicate it does not have to abide by the same First Amendment rules.

“We do not, for example, allow content that could physically or financially endanger people, that intimidates people through hateful language, or that aims to profit by tricking people using Facebook,” wrote Facebook VP of policy Richard Allan in a blog post today.

Web searches show this is the first time Facebook has used that language regarding financial attacks. We’ve reached out for comment about exactly how new Facebook considers this policy.

This is important because it means Facebook’s policy now covers threats of ruining someone’s credit, calls for people to burglarize their homes or attempts to block them from employment. While not physical threats, these can do real-world damage to victims.

Similarly, the position against trickery for profit gives Facebook wide latitude to fight spammers, scammers and shady businesses making false claims about products. The question will be how Facebook enforces this rule. Some would say most advertisements are designed to trick people in order for a business to earn a profit. Facebook is more likely to shut down obvious grifts where businesses make impossible assertions about how their products can help people, rather than mere exaggerations about their quality or value.

The added clarity offered today sets an example for the breadth and specificity with which other platforms, notably the wishy-washy Twitter, should lay out their content moderation rules. There have long been fears that transparency will let bad actors game the system by toeing the line without crossing it. But the importance of social platforms to democracy necessitates that they operate with their guidelines out in the open, so they can deflect accusations of biased enforcement.