On 29 September, following political pressure and in the footsteps of other social media platforms, YouTube announced that it will remove content spreading misinformation about all approved vaccines, expanding its existing ban on false claims about Covid-19 jabs through a change to its content guidelines. Banned content includes videos falsely claiming that vaccines are dangerous, cause autism, cancer or infertility, or contain a tracking ingredient.
The company considers these videos inaccurate as a matter of science, setting the bar of acceptability at the consensus of the medical community. It has also announced the removal of members of the “Disinformation Dozen”, the influencers identified by the Center for Countering Digital Hate as most responsible for proliferating such misinformation, including Robert F. Kennedy, Jr., Sherri Tenpenny, and Joseph Mercola.
However, the following topics are exempt from the ban and will be allowed to remain on the site as long as they adhere to the site’s community guidelines and the channel in question doesn’t routinely encourage “vaccine hesitancy”: personal testimonies relating to vaccines; content about vaccine policies; new vaccine trials; and historical videos about vaccine successes or failures. Some see these topics as necessary for the circulation of health information gathered by the public, journalists and local health officials, which may reach policymakers and other key stakeholders.
The Reasons Behind the Ban
The UK Parliament published a report identifying social media as the main source of vaccine misinformation. This effect has been exacerbated by social media algorithms that rank the content we see based on engagement (and controversial vaccine videos tend to attract a great deal of attention), our existing beliefs, and advertising revenue. The ban is part of a wider policy undertaken by YouTube in the name of truth, alongside the demonetisation of channels promoting conspiracy theories about climate change. Even though conspiracy theorists have begun migrating to other, less-regulated platforms, these sites are less popular and unlikely to spread misinformation to those not specifically looking for it.
Social media is much less regulated than traditional media, allowing misinformation to flourish there. In the UK, the press is largely self-regulated on a voluntary basis by IPSO, which adjudicates complaints from victims under the Editors’ Code of Practice. Social media has been exempt from these regulations, and IPSO has few web-only members. Under EU regulation, audio-visual services such as the BBC are obliged to take editorial responsibility, while social media is excluded from this obligation. As far as accuracy is concerned, the relevant directive only requires online platforms to behave responsibly, but this is a provision of principle rather than a practical one targeting specific content.
The Role of Governments
It is much easier for governments to put political pressure on social media platforms to act on the issue than to intervene directly, as the political and legal optics of government action in this sphere would be far worse. Many have protested the recent action by the major social media platforms on the grounds of freedom of speech. However, the First Amendment and the Human Rights Act protect free speech against government censorship; they do not bind private companies such as YouTube, Facebook, or Twitter.
Moreover, even in countries with disinformation laws, freedom of speech is not absolute and can be restricted to protect legitimate state interests, including mitigating the adverse public health effects of people remaining unvaccinated, provided the restriction is necessary and proportionate. There is also no ‘right’ to be on a platform. An example of direct state intervention is Germany’s Network Enforcement Act, which obliges platforms with more than two million users in Germany to provide a system for handling reports of illegal content.
Furthermore, it would be hard to formulate laws relating to misinformation, as there is a lack of conceptual clarity in defining the problem and a lack of agreement on exactly what kinds of behaviour and content are problematic. This makes misinformation hard to tackle effectively. It also underlines the inherently political nature of determining what does and does not constitute misinformation. Misinformation laws that are too broad and vague risk stifling legitimate speech and can be used selectively by governments to encourage or oblige private companies to police speech in ways that harm free expression and public debate.
Equally relevant is the right to health, which is enshrined in the Universal Declaration of Human Rights (UDHR) and given legal force through Article 12 of the International Covenant on Economic, Social and Cultural Rights (ICESCR), which likewise binds states. From Article 12 follows the right to seek, receive and share information and ideas about health issues. When speech relating to health issues is restricted or access to health-related information is blocked, populations suffer adverse health impacts and cannot fully enjoy the right to health.