Users’ perception of media content and content moderation: Exploring antecedents of reporting harmful comments from a dual perspective

Yaoying Zhu 1, Zhuo Song 2 *
1 Teaching Center for Writing and Communication, School of Humanities, Tsinghua University, Beijing, CHINA
2 School of Journalism and Communication, Nanjing Normal University, Nanjing, CHINA
* Corresponding Author
Online Journal of Communication and Media Technologies, Volume 15, Issue 4, Article No: e202542. https://doi.org/10.30935/ojcmt/17619
Published online: 22 Dec 2025

ABSTRACT

Extensive user participation is essential for effective content moderation. It is therefore pivotal to understand which factors shape users’ willingness to report harmful comments to social media platforms. Drawing on the literature on the third-person effect and human-machine interaction, this study explores the antecedents of reporting harmful comments in terms of perceptions of both media content and content moderation, that is, from a dual “content-moderation” perspective. Through a survey of Weibo users in China (N = 500), we examined how perceived media effects, perceived human agency, and perceived justice of the reporting mechanism influence behavioral responses. The results revealed that perceived adverse media effects on others, perceived fairness, and perceived transparency increased users’ engagement in content moderation. Moreover, perceived human agency attenuated the relationship between perceived adverse media effects on others and reporting behavior. These insights contribute to the burgeoning research on how users perceive and interact with sociotechnical systems in the domain of user reporting. By integrating perceptions of content with perceptions of moderation, the study also yields a more comprehensive understanding of reporting behavior. The findings have practical implications for platform operators seeking to design moderation tools that foster constructive discourse.
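The attenuation finding summarized above is, in analytic terms, a moderation (interaction) effect. The following is a minimal sketch of how such a test is typically structured; the data file, column names, and scale construction are hypothetical illustrations, not the authors’ actual materials or analysis code.

```python
# Sketch of a moderated regression: does perceived human agency attenuate
# the link between perceived adverse effects on others and reporting?
# All names below (weibo_survey.csv, column names) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("weibo_survey.csv")  # hypothetical survey data, N = 500

# Mean-center the predictors so their main effects remain interpretable
# in the presence of the interaction term.
for col in ["effects_on_others", "human_agency"]:
    df[col + "_c"] = df[col] - df[col].mean()

# OLS with an interaction term; in patsy formulas, `a * b` expands to
# a + b + a:b. A negative coefficient on the product term would be
# consistent with the attenuation pattern reported in the abstract.
model = smf.ols(
    "reporting ~ effects_on_others_c * human_agency_c"
    " + fairness + transparency",
    data=df,
).fit()
print(model.summary())
```

A full analysis would additionally probe the interaction (e.g., simple slopes at low and high levels of the moderator) or use a conditional process tool such as Hayes’s PROCESS; the sketch only illustrates the structure of the claim.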

CITATION

Zhu, Y., & Song, Z. (2025). Users’ perception of media content and content moderation: Exploring antecedents of reporting harmful comments from a dual perspective. Online Journal of Communication and Media Technologies, 15(4), e202542. https://doi.org/10.30935/ojcmt/17619
