Recently, the Court of Justice of the European Union (CJEU) issued a ruling against Facebook holding that if a piece of content is deemed illegal in the European Union, it can be ordered taken down worldwide, not just inside Europe.
The ruling stems from a case brought by Austrian politician Eva Glawischnig-Piesczek, who demanded the removal of a Facebook post concerning her, visible to any Facebook user, which the court found insulted and defamed her.
According to the Court, Facebook, as a host provider, may be ordered to remove or block access to information it stores whose content is identical or equivalent to content previously declared unlawful. In addition, it may be required to remove or block access to such information on a worldwide basis “within the framework of the relevant international law.”
The ruling arguably sets a significant precedent: courts in the European Union can order Facebook to remove, worldwide, comments by users of its service that have been declared illegal. The court ruled that EU law “does not preclude [Facebook] from being ordered to remove worldwide information disseminated via a social network platform.”
Today, Facebook is regarded as the largest social media platform in the world, with an estimated 2.4 billion users. Other platforms, including YouTube and WhatsApp, also have more than one billion users each. WhatsApp Messenger, a freeware, cross-platform messaging and Voice over IP service, is itself owned by Facebook, Inc.
As a matter of principle, EU law does not preclude a host provider like Facebook from being ordered to remove identical and, in certain circumstances, equivalent comments previously declared to be illegal. The ruling has drawn a new regulatory line in the sand for Facebook, which has struggled to moderate the hordes of content uploaded by its 2.4 billion monthly active users worldwide.
In response to the ruling, which it criticized, Facebook pledged to build a supreme court-like body to oversee content moderation on its social network. The nominally independent body will have the power to make binding decisions about what is allowed on Facebook.
How is such a decision relevant to Africa?
According to global digital agencies, Africa has seen the fastest growth rates in internet penetration. As of 2018, Facebook had an estimated 139 million monthly users in Africa, 98% of whom connected via mobile. The figures released by the world’s largest social media network confirm the trend that Africa is a mobile-first continent. Facebook has not released updated figures for a number of years, and has not broken its figures down by country or any other demographic.
According to UNICEF’s Generation 2030/Africa report, Africa has a population of 1.1 billion, which is expected to rise to 4 billion by the end of this century, meaning roughly four in 10 of the world’s people will be African. Facebook’s user base on the continent is growing accordingly.
So far, Facebook has launched Marketplace in four African countries: South Africa, Egypt, Algeria and Morocco.
Currently, information technology innovation in Africa is advancing at an unprecedented pace. This innovation is seen as a springboard for solving many of the problems the continent faces.
Like the West, African countries are committed to regulating data privacy, and any privacy breach can result in punitive sanctions. For example, in Rwanda, Law n°24/2016 of 18/06/2016 governs information and communication technologies (ICT).
Particularly relevant in the case of Facebook is Article 190, paragraph 5 of that ICT law, under which Internet Service Providers (ISPs), including social media and phone companies, are obliged to remove or disable access to potentially unlawful electronic records they store upon receiving a ‘take-down notice’.
What is a ‘take-down notice’? It is a process operated by online hosts in response to court orders or allegations that content is illegal: the host removes the content following notice. Notice and take-down is widely used in relation to defamation, libel and other illegal content.
When the hosting company or platform receives a notice, it usually removes or blocks access to the infringing material to avoid incurring liability for it.
Major platforms such as eBay, Facebook, Twitter, Instagram, Pinterest and YouTube provide standard web-based forms for submitting such notices.
These forms are straightforward to complete, but it is important that all the requested information is provided to reduce the chances of having your notice rejected.
As such, a social media platform like Facebook can be notified of objectionable or harmful content and required to remove or disable access to it as expeditiously as possible. However, as in many jurisdictions, Rwandan ICT law does not impose a general obligation on the host provider, or ISP, to monitor the information it stores for illegality.
Given that data privacy is an integral part of fundamental rights, the regulatory and legal landscape surrounding the use of data, and data privacy, is rapidly becoming more complex.
It is therefore imperative to regulate the use of social media, especially Facebook.
The writer is a law expert.
The views expressed in this article are those of the author.