In the News
The difficulties of digital marketing in 2018
Digital marketing is, by its very nature, a dynamic environment. Many businesses have come to base a huge proportion of their marketing communications on digital and social media, which are fantastically useful as a means of gathering and analysing data for marketing decision making. There is a strong duopoly at play, with Facebook (which also owns WhatsApp) and Google (which also owns YouTube) being the biggest platforms through which businesses communicate with their markets.
However, that environment is becoming more difficult. A business's corporate social responsibilities mean that it cannot allow its brand values to be damaged by the social media platforms it uses. Back in February, Keith Weed, Unilever's chief marketing officer, gave a speech at the annual Interactive Advertising Bureau conference. Unilever owns brands such as Lynx, Marmite, Dove, Persil and PG Tips, and is the world's second-biggest spender on advertising. The central theme of his speech was that "Consumers don't trust what they see online."
Coming from a company which spends over two billion dollars a year on online advertising alone, this is a serious message. Weed referred to fake news, racism, sexism, terrorists spreading messages of hate, and toxic content directed at children, and threatened to pull Unilever's ads from major platforms such as Facebook and Google unless the digital giants took steps to filter out misinformation and abusive content.
This strong message came nearly a year after dozens of big advertisers suspended their advertising on YouTube, when the Wall Street Journal found that Google's automated programs had placed their brands on videos containing racist content, but before the Cambridge Analytica story hit the news in March this year.
It is not surprising, then, that Facebook is trying to win back the trust of the advertisers. In the face of growing criticism it is publishing, for the first time, its specific rules for taking down content which has been reported to the social network’s moderators. The 27-page document gives Facebook’s definitions of hate speech, violent threats, sexual exploitation and more. The company will also, for the first time, give people a right to appeal its decisions.
Bloomberg's report says that the company has 7,500 content reviewers, up 40 percent from a year earlier, working in 40 languages. Facebook has also said it is working to increase the number of workers who speak the various languages that require more attention. But with 2.2 billion users around the world, each reviewer is responsible on average for nearly 300,000 users. In a blog post to accompany the publishing of the new code, Monika Bickert, vice president of global policy management, said that "Our policies are only as good as the strength and accuracy of our enforcement – and our enforcement isn't perfect. In some cases, we make mistakes because our policies are not sufficiently clear to our content reviewers. More often than not, however, we make mistakes because our processes involve people, and people are fallible."
How to address that problem?
Now that Facebook has published the policies, it is asking for feedback on how to refine them. The company will host a series of events around the world to solicit advice, starting in May. "Our efforts to improve and refine our Community Standards depend on participation and input from people around the world," Bickert said.
Meanwhile, Facebook subsidiary WhatsApp is banning under-16s from using its platform in the European Union. Users must currently be at least 13, but the firm is changing the rules ahead of the introduction of the EU's new data privacy regulation, the GDPR, in May.
The app will ask users to confirm their age when prompted to agree to new terms of service in the next few weeks. It is not clear, though, how the age limit will be verified. Perhaps some of the 16- and 17-year-old Business students who are also WhatsApp users can come up with some suggestions?