Under international human rights standards, temporary derogations from the rights and freedoms enshrined in the relevant international instruments are, broadly speaking, permissible. Yet such derogations may be ‘justified in exceptional and most serious circumstances only’, for instance, during a state of emergency. Derogation provisions appear in the following international treaties: the International Covenant on Civil and Political Rights (ICCPR, article 4) and the European Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR, article 15). Thus, article 15(1) of the ECHR holds that “in time of war or other public emergency threatening the life of the nation”, generally established powers may be supplemented by a number of specific limitations permissible for the purposes of national security and public order.
Internet shutdowns are not unknown in consolidated democracies. For example, in 2019, British police shut down the public’s access to the Internet on the London Underground to counter planned protests by climate activists.
According to Giovanni De Gregorio and Nicole Stremlau, the authors of the research paper ‘Internet Shutdowns and the Limits of Law’, shutdowns, when they occur, are usually met with justified criticism from free speech advocates and Internet freedom groups such as Access Now. Responses have, however, recently become more nuanced, reflecting growing frustration with how slowly social media companies respond to online hate speech. There is also a growing debate around the responsibilities of these actors. For instance, in May 2019, seventeen states and representatives of the Internet giants joined the Christchurch Call, pledging cooperation to combat extremism and ensure online security. The pledge was adopted two months after the tragedy in Christchurch, New Zealand, where in March 2019 a gunman attacked two mosques, killing 51 people. The attacker live-streamed the assault on social networks, and despite all efforts to block and delete the footage, millions of people watched the video.
After the terrorist attack in Christchurch, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron proposed initiatives to “eliminate” (if such a thing is possible) terrorist and violent extremist content online. The International Initiative to Counter Terrorism Online prompted the largest technology companies and social networks to take notice and start fighting extremist content.
Another example of Internet restriction, and of the consequences of disseminating fake news, is associated with the military takeover in Myanmar, where the army ordered a temporary block on citizens’ access to Facebook. The website of the Ministry of Communications and Information of Myanmar announced that Facebook had been banned for ‘the purposes of stability’ because ‘people were disseminating fakes and disinformation, which leads to misunderstanding among other people’.
NetBlocks, a global Internet monitor, confirmed that MPT, the state telecom company, and many other private providers had blocked access to both Facebook and its services such as Messenger, Instagram, and WhatsApp, having been ordered to do so by the army. In response, Facebook stated that it considered the situation ‘extraordinary’ and took measures ‘to protect against harm, including, among others, deleting any content that praised or supported the takeover’.
The methods by which data and information are disseminated shape how conflicts and other violent acts unfold, as happened in Myanmar. Technology companies that actively support indirect parties to hostilities through the technologies they themselves have developed can cause serious damage and catastrophic humanitarian consequences. Digital technologies, then, do not always serve to secure freedom of speech; they can also be used in armed conflicts and contribute to the development of unprecedented means and methods of warfare.
Against the background of the events in Ukraine in 2022 and the ensuing sanctions against Russia, the Prosecutor General’s Office of the Russian Federation submitted a demand to Roskomnadzor (the Federal Service for Supervision of Communications, Information Technology, and Mass Media) to block Instagram. The demand was directly related to a statement by Meta Platforms’ spokesman Andy Stone, who said that the company would lift the ban on posting calls for violence against Russian soldiers on its social networks (Facebook and Instagram) for residents of a number of countries. At the same time, he stressed that calls for violence against Russian civilians remained prohibited. Nonetheless, the Tverskoi District Court of Moscow declared Meta an extremist organization and banned its activities on the territory of Russia. Later, Senator Andrei Klishas clarified that Russian users of Instagram and other Meta products would not be considered extremists. Notably, at the end of 2021 the Russian office of Meta had been fined 1.9 billion rubles at the initiative of Roskomnadzor, after the social network’s administration failed to comply with its demands to delete prohibited content.
Turning to practical experience in combating false information, attention should be drawn to the European approach to addressing disinformation. That approach focuses on coordination among the numerous parties engaged in countering disinformation, as well as on interagency and international collaboration, strengthening the role of private non-governmental content monitoring, the responsibility of technology companies, and improving media literacy.