Social Media Regulation After the Christchurch Shooting

On April 3, 2019, Australia passed legislation that “threatens huge fines for social media companies and jail for their executives if they fail to rapidly remove ‘abhorrent violent material’ from their platforms” (Cave, 2019). The law came after a gunman distributed a hate-filled manifesto online and then used Facebook to livestream the massacre of fifty people at two mosques in Christchurch, New Zealand, on March 15, 2019. The tech and media industry opposes the legislation, arguing that (1) it could lead to the censorship of legitimate speech; (2) it could damage Australia’s relations with other countries, since compliance would require surveillance of users around the world; and (3) it does not address the Islamophobic motivations of the attack.

Social media companies are currently at the forefront of the debate over the spread of hate and misinformation on the internet. They present an interesting challenge with regard to culpability when people use their platforms to spread hate and violence. Are social media companies responsible for how people use their platforms? To what extent? Who polices the internet? How does one effectively police the internet when it transcends national borders? How are states involved in all of this? In an increasingly globalized world, these are questions that must be addressed.

Sometimes I like to think of social media platforms as knives: they were not designed to cause harm, but there is no avoiding the fact that they can be used that way. Knife makers are not held responsible for murders committed with their products, so by the same logic social media companies should not be held responsible by states for harm caused by users who choose to use their platforms maliciously. The onus to use social media responsibly is on its users.

However, the nature of social media means it can facilitate a great deal of harm with very little effort. With one click of a button, a user can spread hate to an enormous audience in a short amount of time. Here the knife metaphor breaks down: it is hard to take away an idea once it is in someone’s head, and the spread of ideas is social media’s bread and butter. In this case, some sort of regulation is probably needed.

Social media companies certainly need to be involved in that regulation, but what about states? Social media transcends national boundaries, so how is it to be regulated between states? What happens when one state passes a law whose effects reach beyond its borders, as Australia has done? These concerns sound a lot like the debates over multinational corporations (MNCs) and foreign direct investment (FDI). Here, I think IGOs and NGOs would be useful. An organization could draft a set of principles or guidelines codifying the responsibilities of a social media company and invite companies to sign on. That would give them something to strive for and something to improve on. It may not solve every problem, but it gives companies a direction to move in.

References

Cave, D. (2019, April 3). Australia passes law to punish social media companies for violent posts. The New York Times. Retrieved from https://www.nytimes.com/2019/04/03/world/australia/social-media-law.html
