Visual Framing of Science Conspiracy Videos
Computational Communication Research, Volume 4, Number 1 (2022)
E-ISSN: 2665-9085
DOI: https://doi.org/10.5117/CCR2022.1.003.CHEN
Published: 2022-02-01

Abstract

Recent years have witnessed an explosion of science conspiracy videos on the Internet, challenging science epistemology and public understanding of science. Scholars have started to examine the persuasion techniques used in conspiracy messages, such as appeals to uncertainty and fear, yet little is understood about their visual narratives, especially how visual narratives differ between videos that debunk conspiracies and those that propagate them. This paper addresses this gap in understanding visual framing in conspiracy videos by analyzing millions of frames from conspiracy and counter-conspiracy YouTube videos using computational methods. We found that conspiracy videos tended to use lower color variance and brightness, especially in thumbnails and earlier parts of the videos. The paper also demonstrates how researchers can integrate textual and visual features in machine learning models to study conspiracies on social media, and discusses the implications of computational modeling for scholars interested in studying visual manipulation in the digital era. The analysis of visual and textual features presented in this paper could be useful for future studies focused on designing systems to identify conspiracy content on the Internet.
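The visual features named in the abstract lend themselves to a compact illustration. Below is a minimal sketch, not the authors' published pipeline, of how per-frame brightness and colorfulness can be computed with OpenCV and NumPy; the one-frame-per-second sampling rate and the pixel-variance reading of "color variance" are illustrative assumptions, and the colorfulness formula follows the standard Hasler and Suesstrunk (2003) metric.

```python
# Illustrative sketch only: one plausible operationalization of the
# frame-level brightness / color features described in the abstract.
import cv2
import numpy as np

def frame_features(frame_bgr):
    """Compute brightness, colorfulness, and color variance for one BGR frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    brightness = hsv[:, :, 2].mean()          # mean of the HSV value channel
    b, g, r = cv2.split(frame_bgr.astype("float32"))
    # Hasler & Suesstrunk (2003) colorfulness metric
    rg, yb = r - g, 0.5 * (r + g) - b
    colorfulness = (np.sqrt(rg.std() ** 2 + yb.std() ** 2)
                    + 0.3 * np.sqrt(rg.mean() ** 2 + yb.mean() ** 2))
    # assumption: "color variance" read as pixel variance over all channels
    color_variance = frame_bgr.astype("float32").var()
    return brightness, colorfulness, color_variance

def video_features(path, every_n_seconds=1.0):
    """Sample one frame per interval and average the per-frame features."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS metadata is missing
    step = max(int(fps * every_n_seconds), 1)
    feats, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            feats.append(frame_features(frame))
        i += 1
    cap.release()
    return np.mean(feats, axis=0) if feats else None
```

Frame-level features averaged this way could then be concatenated with text embeddings of titles or transcripts and fed to a standard classifier, in the spirit of the multimodal modeling the abstract describes.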

