Much of what we do in social media hinges on the assumption that I can’t be held liable for something you post.
Obvious, right?
In April 2020, the defamation case of Defteros v Google LLC [2020] VSC 219 reached its conclusion in a Melbourne courtroom. The judgment makes it possible in Australia to sue not only the author of a defamatory post but also the platform where it appears – in this case Google, for linking to the defamatory content in its search results.
Then, on June 1, 2020, the NSW Court of Appeal upheld last year’s controversial pre-trial decision in Dylan Voller’s defamation case against Fairfax Media. The ruling holds Fairfax legally responsible for defamatory third-party comments and content posted to its Facebook page.
The decision says that, by setting up public Facebook pages, news outlets “encouraged and facilitated” the comments. Of course, the same could also be said for any other company with a social media presence, which is why the decision could have ramifications across the local social media landscape.
If the sorts of comments you receive on your Facebook page are pretty innocuous, you may not be too concerned. A soft toy retailer is less likely to see troll wars break out under a post about teddy bears.
But I’d hesitate to assume any brand or industry is immune from defamatory and otherwise objectionable comments, as Vegemite discovered in 2015 when its Facebook page came under sustained attack from the Boycott Halal movement. Could you have predicted that? I certainly couldn’t.
Moderators of Facebook pages can already hide or delete defamatory and other unwelcome comments. However, it currently isn’t possible to switch off comments entirely. As a result, moderators can only operate reactively – taking down inappropriate comments after they’re posted, sometimes hours after the damage is done or offence taken. Not every company can afford a 24/7/365 social media team just in case Trollmeister3000 decides to pick a fight at 2am.
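For pages with developer resources, that reactive takedown can at least be scripted. Here’s a minimal, illustrative sketch of hiding a comment through the Facebook Graph API – the access token, API version and comment ID are placeholders, and in practice you’d need the right page permissions and error handling:

```python
import requests

PAGE_ACCESS_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"  # placeholder credential

def hide_comment(comment_id: str) -> dict:
    """Hide a single comment on your page via the Graph API."""
    url = f"https://graph.facebook.com/v7.0/{comment_id}"
    response = requests.post(
        url,
        data={"is_hidden": "true", "access_token": PAGE_ACCESS_TOKEN},
    )
    response.raise_for_status()
    return response.json()  # e.g. {"success": true}

# hide_comment("123456789_987654321")  # hypothetical comment ID
```

One design point worth knowing: a hidden comment remains visible to its author (and the author’s friends), which tends to be less inflammatory than outright deletion.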
Of course, Facebook pages already have a profanity filter to catch offensive words. But you can also specify and block other words that might indicate a comment needs to be checked before appearing on the page. Both tools automatically hide matching comments until you have a chance to review them; you can then unhide any you decide are acceptable.
The downside to relying on filters is that they can put a brake on engagement and discussion. Users can also become irritated when they see comments from other users appear unimpeded while theirs languish in purgatory thanks to the Scunthorpe problem – where a filter flags an innocent word because it happens to contain a blocked string.
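If you’re wondering how an innocent comment falls foul of a filter, here’s a quick illustrative sketch in Python. The blocklist and comments are made up, and Facebook’s actual filter is a black box – but plain substring matching behaves like this:

```python
import re

# Hypothetical blocklist – imagine "ass" is one of your specified words.
BLOCKLIST = ["ass"]

def naive_filter(comment: str) -> bool:
    """Flag a comment if a blocked string appears anywhere inside it."""
    text = comment.lower()
    return any(word in text for word in BLOCKLIST)

def word_boundary_filter(comment: str) -> bool:
    """Flag only whole-word matches, so innocent words pass."""
    text = comment.lower()
    return any(re.search(rf"\b{re.escape(word)}\b", text) for word in BLOCKLIST)

comment = "What a classic - I passed this on to my whole team!"
print(naive_filter(comment))          # True: "classic" and "passed" both trip it
print(word_boundary_filter(comment))  # False: no blocked word stands alone
```

The stricter the matching, the more evasions slip through; the looser, the more innocent users end up in purgatory.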
And, of course, filters are an imperfect solution when there’s no guarantee every potentially defamatory or legally sensitive comment will include one of your specified keywords. Plus, trolls and other troublemakers soon learn which words to avoid in order to beat the filters.
So, how should companies react?
No doubt some marketers will re-evaluate whether maintaining a Facebook page is worth the increased risk – particularly when taken alongside algorithm changes that have gradually eroded the reach of content shared from brand pages. And then there are the controversies that have dogged the platform for the last couple of years, causing some companies to leave for ethical reasons.
Even so, Facebook remains incredibly popular with both users and brands. And 33.3% of marketers rank Facebook as providing the highest ROI on paid ads – more than any other channel. Killing your page feels more like a last resort than a first response.
So, it comes down to your objectives for having a Facebook page. It’s not enough to run one just because everyone else does. What problem does it solve, or what benefit does it bring, for your company? And how does Facebook help you achieve that objective more effectively or efficiently than the other channels available to you?
Then, consider whether you may need to change or adapt how you run the page in light of the recent Fairfax ruling. New processes? More experienced staff? Less controversial content themes?
The ramifications of the Appeal Court judgment are unlikely to end with Facebook, which is why you should also review your other social media channels and the tools or features available for moderating them. The presence of these tools could make it much harder for a company to fend off a defamation case if it fails to use them to defuse the post or comment in question.
For example, since last year Twitter users have been able to hide individual replies to a tweet – effectively moderating their own threads. And Twitter is now testing a feature that lets users restrict who can reply to a tweet. Brands could use it when sharing content or tweeting a message likely to attract the attention of the ALL-CAPS people and other controversy-chasers.
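If your team wants to automate that housekeeping, Twitter exposes the hide action through the hidden-replies endpoint in its v2 API. This is a hedged sketch rather than a drop-in tool – the credentials and tweet ID are placeholders, and you’ll need a developer account with user-context authentication:

```python
import requests
from requests_oauthlib import OAuth1

# Placeholder credentials – supply your own app and user tokens.
auth = OAuth1(
    "CONSUMER_KEY", "CONSUMER_SECRET",
    "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET",
)

def hide_reply(tweet_id: str, hidden: bool = True) -> dict:
    """Hide (or unhide) a single reply to one of your own tweets."""
    url = f"https://api.twitter.com/2/tweets/{tweet_id}/hidden"
    response = requests.put(url, json={"hidden": hidden}, auth=auth)
    response.raise_for_status()
    return response.json()  # e.g. {"data": {"hidden": true}}

# hide_reply("1234567890123456789")  # hypothetical reply ID
```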
Nine and News Corp are already planning a High Court challenge against the latest ruling, so it is possible the law could change further – in either direction. However, if there’s one lesson to take from this, it’s that Australian companies need to take greater responsibility for everything that happens within their social media spaces.