South Korea, despite extensive online censorship, has struggled to combat misinformation while protecting free speech. In 2021, the ruling Democratic Party (DP) passed several laws to curb the circulation of inaccurate information, including setting a five-year sentence for spreading false information about the Gwangju Uprising. Opponents claim these efforts limit free speech and target conservative viewpoints. Supporters, meanwhile, advanced a proposed bill that would have provided a means to seek compensation from news organizations for inaccurate reporting. Scholars have already called on President-Elect Yoon Suk-yeol to strengthen free speech protections, including by amending an antiquated National Security Law that restricts information and punishes speech deemed harmful.

Meanwhile, social media companies themselves have increasingly acted against misinformation, despite some previously emphasizing a standard of content neutrality. Social media is uniquely vulnerable to misinformation because of many users' selective attention: they often read only the headlines of posts and engage with posts that conform to their ideological beliefs, regardless of accuracy. Evidence from South Korea finds that such inaccurate information can have a long afterlife. Yet while social media companies have instituted policies to address misinformation, whether the public supports such efforts has received little attention, and it is worth studying. The public may, for example, see efforts such as the removal of posts as unacceptable, even though companies, as private entities, have the right to moderate content. And without public buy-in, efforts to limit misinformation are likely to fail, especially if users shift to other platforms with less moderation.

To address South Korean public perceptions, I ran a national web survey with 1,170 respondents via Macromill Embrain on March 11-16, 2022, using gender, age, and regional quota sampling.

First, I asked respondents to evaluate the statement “Social media companies have a right to remove posts on their platform” on a five-point Likert scale (strongly disagree to strongly agree). Roughly a third of respondents agreed with the statement (34.33%), while the most common answer was neither agree nor disagree (42.73%). Similar patterns emerge among supporters of the two largest parties, the liberal Democratic Party (DP) and the conservative People Power Party (PPP). This contrasts with similar survey work conducted in the U.S. in 2021, in which 74.34% of Democrats agreed with the statement, compared with only 43.47% of Republicans.

Second, the survey shows clear support for social media companies responding to false information, higher than support for the government responding. Overall, 81.41% of respondents agreed with the statement “Social media companies should take steps to restrict false information online, even if it limits freedom of information,” 5.64 percentage points higher than support for the government taking similar steps. While DP supporters were more likely than PPP supporters to agree with both statements, the same general pattern endures. I also asked, “Have you sought out sources that fact-check social media posts?” since broad affirmative responses would suggest a belief that such efforts are not ideologically driven and a willingness to critique information received. The survey shows 56.91% of respondents had sought out a fact-check, with similar rates among PPP supporters (57.3%) and slightly higher rates among DP supporters (61.83%).


The findings so far suggest that the public understands the threat posed by misinformation and is amenable to social media companies’ efforts to curb its influence. But a closer look suggests that this support may only be abstract, as the survey finds little consensus on the specific policies that social media companies have adopted or that have been proposed as means to curtail misinformation. I asked about five proposed measures and found that none received majority support, with the highest support going to providing factual information directly under posts labeled as misinformation, backed by only 40.2% of respondents. Interestingly, the findings show somewhat lower support compared to results from the U.S. survey, although we see less divergence between the two main parties.


The results suggest a continued challenge in finding broadly accepted efforts at combating misinformation, despite a general consensus among an otherwise polarized electorate on the value of responding. This is likely due to underlying concerns about how social media companies will enforce policies and the potential for misuse, from algorithms that label or remove posts with no means of recourse to beliefs that such efforts are not value-neutral but ideologically driven.

To win broader support, policies need to be transparent and avoid the appearance of targeting misinformation on only one side of the ideological spectrum, a concern that has undermined efforts in the U.S., where Republicans frequently view such efforts as targeting only conservative posts. Social media companies may also wish to pair moderating efforts, such as labeling posts as potential misinformation, with nudges that direct users to evaluate sources on their own. For example, Twitter and Facebook have included prompts asking users whether they want to read an article before sharing it, which may give users greater agency. Social media outlets may also wish to coordinate with mainstream newspaper outlets in South Korea to provide fact-checking that crosses ideological divides and reduces hesitancy about the use of the label “misinformation.”

Funding for this survey was provided by the Institute for Humane Studies.

TNL Editor: Nicholas Haggerty (@thenewslensintl)