By guest blogger Nicola Michel
The recent ruling by the Advertising Standards Board (ASB) that advertisers are responsible for third-party posts on their Facebook pages has, not surprisingly, been labelled "a challenge" by the body that represents the $30 billion a year marketing industry. "Challenging" barely begins to address the ramifications of the decision, which goes to the heart of whether social media campaigns are even viable in a world where marketers are held responsible for the ill-judged comments of followers.
A number of examples ignited the fire under the ASB, but chief among them were some of the comments posted on the Facebook page of Carlton & United Breweries' (CUB) 'VB' beer brand, when followers were asked a supposedly innocuous question: "Besides VB, what's the next essential needed for a great Australia Day BBQ?"
Whether or not you consider the question innocuous (the phrase 'asking for trouble' springs to mind), the majority of the answers certainly weren't. Those that actually made sense ranged from the vaguely moronic to the downright distasteful, spanning the gamut of sexist, racist and homophobic.
The North Korean Solution?
The basis of the ASB decision was that it deemed the Facebook page of an advertiser to be a marketing communication tool over which the advertiser 'has a reasonable degree of control'. As such, the Code that applies to all advertising also applies to the page in its entirety, including comments posted by third parties.
Outrage over the decision has been fast and furious, with some calling it the "North Korean version of social media" and others asking how long it will be, if brands are required to censor offensive or misleading comments, before they censor negative comments about their brand (which is not to say that the latter has not already been happening...)
Most comments centred on the fact that the decision changes the very essence of social media as a two-way conversation that reflects the rough and tumble of real-time, spontaneous social interaction itself.
Some of the comments on the CUB VB Facebook page were described as akin to those you might hear in a dodgy pub late on a Friday night. This may hardly make them what you yearn to hear, but does that also mean they shouldn't be posted online? If social media is an extension of the conversations we have in life, is there a place for censorship?
One essential difference between Facebook and the pub is, of course, that a hazy conversation on a Friday night is said and done, whereas what goes online, stays online. And in the case of the VB Facebook page it really did stay online, with some comments left posted for over a year. It's likely that the situation would have been very different if the comments were removed quickly.
Regulation: guaranteed to fail?
Although regulation seems to fly in the face of the nature of social media, this and numerous other examples lead some to the conclusion that some form of legislated censorship or control (or at least an attempt at it) is inevitable.
Many major social innovations have required evolving regulation. Few would argue, for example, that the driving laws enacted after the advent of the car should never have been introduced.
However, I would argue that increased and, in particular, blanket regulation usually ends in unintended consequences - which often defeat or make a mockery of the purpose of the regulation in the first place.
And even if you accept that most people would find at least some of the comments on the VB Facebook page offensive (and that therefore some regulation is required), who decides which of the other comments are offensive and which aren't - or which cross from the grey realm of the offensive into discrimination and incitement of hatred? And how on earth do you police it? Some brands get thousands of comments per week on their Facebook pages. Do they need to employ an army to monitor them?
An army of social media monitors?
The answer regarding monitoring is probably "yes". But you probably won't need an army.
So what is required to manage and moderate a company's social media activity?
The conventional wisdom has been that the essence of social media is that it comes straight from the horse's mouth, and that employing a PR company or other 'mouthpiece' to communicate for you on social media is somehow cheating and depriving the medium of its immediacy and relevance. That's absolutely fair enough. But equally, the evolution of social media has been so rapid that a company now demonstrably needs to address the issue of protection and reputation and, bottom line, staying out of court.
The good news is that there is middle ground: the space between having your social media so stage-managed as to lose its meaning (think London 2012), and stepping into the abyss and subjecting your brand and business to a damaging free-fall.
That middle ground involves a combination of using the new and evolving tools available, and having a solid, workable and well understood social media policy in place.
Facebook, for example, provides some ready-made tools. The recently updated Timeline for brands gives page administrators the ability to pre-moderate comments, to restrict access to underage Facebook users, to restrict the kinds of posts users can share and to set "page visibility", so administrators are required to approve all posts that appear. Critics say that not only does this pre-moderation substantially increase the workload for page administrators, it seriously affects the brand's ability to have the types of real-time conversations with followers that are what Facebook is all about. Nonetheless, it does exist and is an option.
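To make the pre-moderation idea concrete, here is a minimal sketch of how an automated first pass might work: comments matching a blocklist are held for human review, while the rest go live immediately. The blocklist terms and comments below are purely illustrative placeholders, and a real setup would sit behind whatever moderation hooks the platform actually provides.

```python
# Minimal pre-moderation triage sketch. Blocklist entries are
# illustrative placeholders, not a recommended moderation policy.

BLOCKLIST = {"slurword", "scam-link.example"}


def needs_review(comment: str, blocklist=BLOCKLIST) -> bool:
    """Return True if the comment contains any blocklisted term."""
    words = comment.lower().split()
    return any(term in words for term in blocklist)


def triage(comments):
    """Split incoming comments into auto-approved and held-for-review."""
    approved, held = [], []
    for comment in comments:
        (held if needs_review(comment) else approved).append(comment)
    return approved, held


# Example: one harmless comment goes live, one is held for a moderator.
approved, held = triage(["Great beer!", "Check out scam-link.example now"])
```

Even a crude filter like this shifts the workload from reviewing every comment to reviewing only the flagged ones - the human moderator stays in the loop, but the volume becomes manageable.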
Social media policy: the latest must-have accessory for corporates
When it comes to social media policy, a good start is to review the excellent McKinsey framework for companies engaged in social media. According to that framework, the very first step is to monitor. The next is to respond to consumers' comments. Few would believe, for example, that CUB really wanted to encourage racist and sexist comments or to have those comments associated with their brand. If they had been monitoring, they would have been able to respond, potentially by taking the comments down. So, while some marketing executives are screaming about the difficulties and costs associated with monitoring, surely monitoring is a necessary cost associated with using social media and needs to be weighed against the benefits it provides as well as, significantly, the risks of not engaging in the conversation at all?
The fact is that, if done well, it really isn't that hard. Not only are the tools already out there (and improving all the time), most brands using social media effectively are monitoring their social media presence already (and if they're not, they should be).
Ultimately, the whole debate over the ASB decision highlights the fact that companies need to engage with their social media presence in the way they hope consumers will engage with their brand in the offline world.
That means, just as a company has guidelines around what it says and does in real life, it needs guidelines around social media that, among other things, remove the doubt and grey areas around what's offensive or illegal (and should be removed) and what constitutes robust, even vitriolic, hard-to-hear criticism of the brand - which should stay online and be responded to.
If you don't have the skills or the resources to effectively monitor your social media presence, or don't know where to start, seeking expert help to get set up and potentially monitor responses down the track can be a good move.
The latter seems to be the move that CUB has taken: it was 'managing' its Facebook page itself and has now given the responsibility to an external agency.
While it's easy to say in hindsight, it looks like CUB could have saved itself a lot of pain by getting some help setting up a policy that involved monitoring and response in the first instance - and, in the absence of the in-house resources or skills to continue to do so, engaging a social media partner to do it for them.
It really is Reputation Management 101.