Less Moderation is the New Meta, Not the New Law

As society is becoming more polarized, social media is becoming less moderated. The result could be a wave of discord in workplaces and other organizations, as changes online push the boundaries of what some feel is acceptable expression offline as well. Meta CEO Mark Zuckerberg recently announced that the company would take steps to “restore free expression” starting in the United States. Among the changes Meta announced were:

  • eliminating its third-party fact-checking program, on the grounds that “fact-checkers have been too politically biased”, and
  • removing restrictions on “topics like immigration and gender” because its inclusion efforts have “shut down opinions and shut out people with different ideas.”

The changes have prompted concern online. Critics fear that hateful conduct will find a home on Meta-owned Facebook and Instagram.

Twitter has gone through similar changes. Elon Musk bought Twitter (now X) saying he wanted it to become a platform for free speech, then fired much of the moderation team. Users have since reported that discourse on the platform has shifted to include more hateful content.

Meta’s changes coincide with a time of increased tension and potential polarization in Canada, with an upcoming federal election and an incoming American administration increasingly focusing its rhetoric on Canada. Amid these challenges, more lax moderation on mainstream social media could platform and elevate fringe voices and ideologies, making them more commonplace.

Organizations should be prepared for this online discourse to impact them internally. More conversations are happening online, and online culture’s influence is increasingly prevalent offline.

Meta is just one of several Fortune 500 companies that have announced their intention to dial back their inclusion efforts in recent months. But the changes made by these companies do not change the law. Free expression is the cornerstone of many democracies, but the law has always recognized the need for limits. If discriminatory expression spreads on mainstream social media platforms, it will find its way into schools, workplaces, and other organizations, which will have to respond. And if they fail to respond appropriately, they may face liability for discrimination or harassment.

Indeed, employers may face liability for the online activities of employees. Last year, Ontario amended its definition of workplace harassment to include online conduct. This effectively requires employers to investigate and address allegations of workplace-related harassment that occurs virtually, including through social media. And under the Ontario Human Rights Code, employers are deemed liable for discrimination employees face in their employment, even if the discrimination happens on social media or comes from someone who is not a “directing mind” of the organization.

Regardless of the legal implications, employers and other organizations have been grappling with the challenge of divisive discourse that originates online but has consequences offline. Workplaces, campuses, governments at all levels, and professional associations have seemingly become battlegrounds in what is sometimes called the culture war. The result is dysfunction within organizations that depend on individuals' ability to get along respectfully with others from different tribes.

Organizations can prepare for these challenges by setting standards, making them known to members, and upholding them.

They can begin by updating their policies to ensure they reflect their intentions and are consistent with their legal obligations. Employers in Ontario should update their legally required workplace harassment policies to reflect the changes recognizing online harassment. Social media policies, while not required by law, are essential for guarding against the risk of being associated with content that organizations do not want reflecting on them, especially given the platforms' relaxed moderation.

Training allows individuals to learn about the practical implications of policies and provides an opportunity to educate against ignorance or prejudice that is often spread online.

When discriminatory or harassing behaviour makes its way into the workplace, organizations should respond swiftly, using investigations, individual sensitivity training, or alternative dispute resolution methods to enforce the organization’s standards of behaviour. Ideally, employees and community members can be held accountable in ways that are educational and restorative rather than punitive, but employers should not be afraid to use progressive discipline where appropriate. Having the right policies and conducting appropriate training both reduce the risk that these incidents occur and help guide and justify an organization’s response.

Social media has never been the arbiter of acceptable or legal conduct, but its influence online and offline is undeniable. As social media platforms change their rules to allow more content that promotes discrimination and harassment, organizations will need to take steps to limit the impact of that content, or face the dysfunctional consequences of a divided workplace or community.