
Free Speech And Social Media: Is There A Need For Government Regulation?


Scrolling through social media posts can be as entertaining as people-watching in a crowded mall. If the content isn’t always riveting, the passionate conviction and absolute certitude of some of the views found there can be nothing short of startling.

In the online world, you can confidently assert that the earth is flat, the moon landing was staged, deodorant causes cancer, there was no Holocaust, and Hillary Clinton is the ringleader of a pedophile cult headquartered in a pizzeria in Washington, and you will still have followers endorsing your content.

Harmless craziness? Live and let live — it’s just part of the madcap world of modern technology, right? But what if it’s not?

One of the predominant rationales for free speech has always been that censorship not only breeds distrust of government, but can also backfire, cementing absurd or wicked content into stubbornly durable narratives simply by virtue of its ever having been "forbidden".

Better, the theory goes, that all ideas — however bizarre, erroneous or even contemptible — be allowed into the bright light of public exposure so as to allow the citizenry to hold up to ridicule and scorn those that are despicable or otherwise unworthy.

Of course, the exception to that theory has always been speech that creates a "clear and present danger" of real harm, epitomized by Oliver Wendell Holmes' example of falsely shouting "Fire" in a crowded theatre.

On this basis, Western governments have enacted laws banning certain forms of expression, such as hate speech, defamation, blackmail, child pornography and incitement to violence, to name a few.

Don't we need to re-examine some of these laws in the age of social media, where "chatbots" and artificial intelligence (AI) are used in sophisticated, automated disinformation campaigns?

Isn't it troublesome to allow false information to be weaponized to influence public policy on climate change, toxic pollution, human rights abuses, reproductive issues and political discourse, and even to irrationally dispute certified national election results?

What about affecting public health policy in a national emergency by pushing false claims of an orchestrated "plandemic" or creating (or compounding) vaccine hesitancy?

In the modern world, how near in time or space must a threat be before it becomes Holmes' "clear and present danger" requiring regulatory oversight?

For instance, although formal, court-worthy proof of causation would be impossible, does anyone seriously doubt that statements made by the former President of the United States and a few southern Governors last year, accepted (and championed) by many Americans, actually cost some of them their lives during this pandemic?

Regrettably, research tells us that false news spreads more widely than the truth, and that although chatbots and AI systems create false news (59% of fake news is not entirely fabricated, but contains pieces of misinformation or factually accurate information taken out of context), it is humans, not robots, who spread it.

An MIT study in 2018 found that "falsehood diffuses significantly farther, faster, deeper, and more broadly than the truth, in all categories of information, and in many cases by an order of magnitude."

How serious is this problem?

An Ipsos poll conducted by CBC in 2019 surveyed over 25,000 people and found that 93% of Canadians had fallen for fake news in the previous year.

A more recent study (Statistics Canada, 2021) found that 50% of respondents had shared news about COVID-19 on social media without knowing whether it was accurate.

Another recent global survey of 100 countries (Anti-Defamation League report) found “that 32% of people who have heard of the Holocaust think that it is a myth or greatly exaggerated, including 63% in the Middle East and North Africa and 64% of Muslims in the region.”

Surprisingly, despite the mechanization of false news and its automated dissemination, social scientists tell us that “innocent dissemination,” by individuals who don’t know any better, is the single biggest contributor to the problem.

Apparently, people often share fake news because of its novelty; real news is evidently more common, but less interesting.

The algorithms used on social media sites, designed to maximize engagement, recirculate these pseudo-news stories to people who have clicked through to similar material, creating communities of like-minded people that reinforce their members' beliefs and simultaneously cause them to overestimate the actual relative size of their community.
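To make that feedback loop concrete, here is a purely illustrative toy sketch in Python; it is not any platform's actual code, and every name and number in it is hypothetical. It shows how a feed ranked on predicted engagement, plus a boost for topics the user has already clicked, keeps re-surfacing similar material:

# Toy model of an engagement-maximizing feed (illustrative only).
from collections import Counter

def rank_feed(posts, click_history):
    # Count how often the user has clicked each topic before.
    clicked_topics = Counter(p["topic"] for p in click_history)

    def score(post):
        # Baseline predicted engagement, boosted by past clicks on the same topic.
        return post["predicted_engagement"] + clicked_topics[post["topic"]]

    # Serve the highest-scoring posts first.
    return sorted(posts, key=score, reverse=True)

# One click on a conspiracy story is enough to push similar stories up the feed.
history = [{"topic": "conspiracy"}]
posts = [
    {"topic": "mainstream news", "predicted_engagement": 1.2},
    {"topic": "conspiracy", "predicted_engagement": 1.0},
]
print([p["topic"] for p in rank_feed(posts, history)])  # ['conspiracy', 'mainstream news']

Each further click enlarges the boost, which is the self-reinforcing loop described above.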

Former Facebook employee and whistleblower Frances Haugen testified before a congressional committee in October of last year, with documentary evidence, that the company’s algorithms intentionally and strategically “rewarded content that provokes strong emotion in people — especially anger, because it prompts more engagement than any other emotion.” The company “. . . conducted research that found its products can cause mental health issues, allow violent content to flourish, and promote polarizing reactions — and then largely ignored that research.”

So, how important is the “right” to create, publish, or share erroneous “information”?

Should it matter whether it is misinformation or disinformation (the latter being material known to be false)? Or is it only the tired and overplayed slippery-slope argument that prevents us from regulating it?

Are there potentially effective measures short of outright censorship? Or are we stuck with the developing societal mentality that there is no distinction between correlation and causation, that all ideas should be decided democratically and all opinions carry equal weight?
