Volume 4, Issue 1
  • ISSN: 2665-9085
  • E-ISSN: 2665-9085

Abstract

As audiences have moved to digital media, so too have governments around the world. While previous research has focused on how authoritarian regimes employ strategies such as fabricated accounts and content to boost their reach, this paper reveals two different tactics the Chinese government uses on Douyin, the Chinese version of the video-sharing platform TikTok, to compete for audience attention. We use a multi-modal approach that combines analysis of video, text, and metadata to examine a novel dataset of Douyin videos. We find that a large share of trending videos are produced by accounts affiliated with the Chinese government. These videos contain visual characteristics designed to maximize attention, such as high levels of brightness and entropy and very short duration, and are more visually similar to content produced by celebrities and ordinary users than to content from non-official media accounts. We also find that the majority of videos produced by regime-affiliated accounts do not fit traditional definitions of propaganda; instead, they contain stories and topics unrelated to any aspect of the government, the Chinese Communist Party, policies, or politics.
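The abstract names frame-level visual measures (brightness, entropy) without specifying the paper's feature pipeline. As an illustration only, not the authors' method, the following sketch computes the two measures in their most common forms: brightness as mean grayscale intensity, and entropy as the Shannon entropy of the intensity histogram. The flat list of pixel values is an assumed input format.

```python
import math
from collections import Counter

def frame_brightness(pixels):
    """Mean intensity of a flat list of grayscale pixel values (0-255)."""
    return sum(pixels) / len(pixels)

def frame_entropy(pixels):
    """Shannon entropy (in bits) of the intensity histogram of a frame."""
    counts = Counter(pixels)
    n = len(pixels)
    # Sum -p * log2(p) over the intensities that actually occur.
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

# Synthetic frames: a uniform gray frame vs. a half-black, half-white one.
flat = [128] * 4096
two_tone = [0] * 2048 + [255] * 2048

print(frame_entropy(flat))      # 0.0 — a single intensity carries no entropy
print(frame_entropy(two_tone))  # 1.0 — two equally likely intensities
```

Higher-entropy frames have more varied intensity distributions, which is one plausible operationalization of the "visual complexity" that attention-maximizing content exploits.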

DOI: 10.5117/CCR2022.2.002.LU
2022-02-01
2022-05-21
