Volume 4, Issue 2

Abstract

Ongoing research into how states coordinate foreign disinformation campaigns has raised concerns over social media’s influence on democracies. One prominent example is the spread of Russian disinformation in the 2016 US presidential election. Twitter accounts operated by Russia’s Internet Research Agency (IRA) are known to have delivered messages with strategic intent and political goals. We use publicly available IRA Twitter data created during and after the 2016 US election campaign (2016 and 2017) to examine the strategic message features of foreign-sponsored online disinformation and how those features relate to sharing on social media. Using computational approaches, we identify syntactic features that distinguish IRA disinformation tweets from American Twitter corpora, reflecting their functional and situational differences. More importantly, we examine which message features of IRA tweets, across syntax, topic, and sentiment, were associated with more sharing (retweets). Implications are discussed.
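
The abstract describes two analytic steps: comparing syntactic profiles across corpora, and relating message features to retweet counts. The sketch below illustrates how such an analysis might look; it is a minimal illustration, not the paper's documented pipeline. It assumes spaCy for part-of-speech and dependency parsing, a hypothetical ira_tweets.csv with text and retweet_count columns, and a negative binomial regression via statsmodels (a common choice for overdispersed count outcomes such as retweets).

```python
# Minimal sketch of the two analysis steps described in the abstract:
# (1) extract coarse syntactic features per tweet, (2) relate features to
# retweet counts. Feature choices, file names, and the model family are
# illustrative assumptions, not the paper's documented pipeline.
import pandas as pd
import spacy
import statsmodels.api as sm

nlp = spacy.load("en_core_web_sm")

def syntactic_features(text: str) -> dict:
    """Return a few per-token syntactic rates for one tweet."""
    doc = nlp(text)
    n = max(len(doc), 1)
    return {
        "pronoun_rate": sum(t.pos_ == "PRON" for t in doc) / n,
        "verb_rate": sum(t.pos_ == "VERB" for t in doc) / n,
        # Mean linear distance between a token and its syntactic head,
        # a rough proxy for syntactic complexity.
        "mean_dep_dist": sum(abs(t.i - t.head.i) for t in doc) / n,
    }

# Hypothetical input: one row per tweet with 'text' and 'retweet_count'.
df = pd.read_csv("ira_tweets.csv")
features = pd.DataFrame([syntactic_features(t) for t in df["text"]])

# Negative binomial GLM: retweet counts are typically overdispersed,
# so a plain Poisson model would understate the standard errors.
X = sm.add_constant(features)
model = sm.GLM(df["retweet_count"], X,
               family=sm.families.NegativeBinomial()).fit()
print(model.summary())
```

The same feature extractor could be run over an American comparison corpus to contrast syntactic profiles between the two sources before modeling sharing.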

https://doi.org/10.5117/CCR2022.2.008.SUK

Keyword(s): computational social science; corpus linguistics; disinformation; Internet Research Agency; Twitter
