There is no doubt that humans are a social species—we long to relate with others and feel included, and always have. In recent years, we have turned to social media to forge these connections. However, the increased access to others' lives that social media provides has also revealed a darker side of human nature: our capacity for hatred. It should be no surprise that the largest of the social media giants hosts both unifying and polarizing forces among its users.
Facebook continues to be an outlet through which hatred is displayed, most commonly in the form of political hostility and provocation. Just yesterday, multiple news sources reported that the platform had been abused in Myanmar to sow division and spark violence against the Rohingya, a Muslim minority group targeted by military leaders (CNN). After recognizing its own involvement, the company commissioned a human rights report from Business for Social Responsibility (BSR), which identified Facebook’s errors, provided cultural and political context for the misuse of the platform, and offered solutions to prevent similar incidents in the future (NYTimes).
One of the major oversights Facebook made was failing to acknowledge Myanmar’s unfamiliarity with new digital media and uncensored speech. According to the BSR report, many citizens do not know how to use basic functions of the internet, let alone distinguish real news from fake news (NYTimes). In fact, Facebook practically serves as the internet in Myanmar, since it comes preinstalled on mobile phones when they are sold (NYTimes). This renders audiences more susceptible to propaganda and false information. The 2020 parliamentary elections also pose a threat, since Facebook could again be manipulated to incite violence and spread hateful media (CNN).
BSR has suggested that Facebook more closely monitor and regulate content posted by users, publish data on the progress it makes, and increase its engagement with Myanmar’s society and government officials (NYTimes). Facebook had already terminated the accounts of twenty organizations and individuals as of August 2018, and CEO Mark Zuckerberg promised that the company would strive to eliminate hate speech, add content filters, and hire additional Burmese-language moderators to monitor the issue (CNN). Despite these efforts to improve its response to online abuse, Facebook is still being criticized for not taking a direct enough approach (CNN).
Like all social media sites, Facebook should not be considered the source of hatred. Rather, it facilitates the circulation of preexisting hateful ideas, which are generated by people, not technology. The phenomenon of online attacks is not specific to Facebook by any means—another social media platform could have been used instead. Here, it was chiefly Facebook’s widespread popularity that made it the vehicle for this abuse.
Works Cited
McKirdy, Euan. “Facebook: We didn’t do enough to prevent Myanmar violence.” CNN, 6 Nov. 2018, https://www.cnn.com/2018/11/06/tech/facebook-myanmar-report/index.html.
Stevenson, Alexandra. “Facebook Admits It Was Used to Incite Violence in Myanmar.” The New York Times, 6 Nov. 2018, https://www.nytimes.com/2018/11/06/technology/myanmar-facebook.html.
I totally agree with your point that “Facebook should not be considered the source of hatred.” I said something similar in my own blog post: the internet has become a space for people to express hatred and bigotry, and discussions now have only two extreme sides—one can either agree or disagree, and the middle ground is gradually disappearing. Those mainly responsible for this phenomenon are us, the social media users, and to improve the online environment, social media companies should seriously consider monitoring the content their users publish.