Computational Communication Research - Volume 8, Issue 2, 2026
transforEmotion: An Open-Source R Package for Emotion Analysis Using Transformer-Based Generative AI Models
Authors: Aleksandar Tomašević, Hudson Golino & Alexander Christensen

This software demonstration article introduces transforEmotion, an open-source R package that addresses critical bottlenecks in emotion analysis for communication research. Communication researchers currently face three key barriers: (1) modal fragmentation requiring separate tools for text, image, and video analysis; (2) rigid emotion taxonomies that do not match theoretical frameworks; and (3) irreproducible workflows dependent on commercial APIs. transforEmotion removes these bottlenecks through unified multimodal processing, zero-shot classification with arbitrary labels, local inference capabilities, and seamless R integration. This enables systematic investigation of emotion dynamics across communication contexts and modalities that were previously technically difficult to analyze.
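The package itself runs in R, so the following is only a minimal Python sketch of the underlying zero-shot technique the abstract describes: classifying text against arbitrary, researcher-chosen labels with a Hugging Face transformers pipeline. The model and label set below are illustrative choices, not the package's defaults.

```python
# Minimal sketch of zero-shot emotion classification with arbitrary labels.
# Illustrates the technique, not transforEmotion's own API; the model name
# and label set are illustrative choices.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

text = "I can't believe they cancelled the event again."
# Labels need not come from a fixed taxonomy; they can follow any framework.
labels = ["anger", "disappointment", "relief", "hope"]

result = classifier(text, candidate_labels=labels, multi_label=True)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.3f}")
```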
Visual Framing at Scale: A Theory-Driven Computational Framework for Analyzing Protest Imagery with Generative AI
Authors: Sang Jung Kim & Lei Chen

This study presents a theory-driven, three-stage computational framework for analyzing visual framing in protest imagery. Focusing on the Black Lives Matter movement, we examine how visual elements contribute to two well-established frames in protest media coverage: the protest paradigm and solidarity framing. Leveraging GPT-4o and OpenCV, our framework extracts denotative and semiotic features, such as police presence, contestation, solidarity actions, and color contrast, and links these features to higher-order frame classifications using interpretable logistic regression models. The framework includes: (1) feature definition and validation through generative AI and a feature extraction tool, supported by human coders; (2) model training; and (3) predictive application to unseen images. Results show strong alignment between human and machine annotations, as well as high predictive accuracy in identifying the protest paradigm or solidarity frame in BLM images. We also introduce an intra-prompt stability score for the generative AI model to help mitigate hallucination and enhance the reliability of its outputs. This study offers a scalable, replicable, and interpretable approach to visual framing analysis, bridging communication theory with advanced computational tools in the study of visual political communication.
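The abstract describes the intra-prompt stability score only at a high level. One plausible reading, sketched below in Python, is a modal-agreement rate: the same prompt is run several times and the score is the share of responses matching the most common response. The function and example labels are hypothetical; the paper's exact definition may differ.

```python
# Hypothetical intra-prompt stability score: the fraction of k repeated
# classifications of the same image and prompt that agree with the modal
# label. One plausible reading of the abstract, not the paper's formula.
from collections import Counter

def intra_prompt_stability(labels: list[str]) -> float:
    """Share of repeated model outputs matching the most common output."""
    counts = Counter(labels)
    modal_count = counts.most_common(1)[0][1]
    return modal_count / len(labels)

# e.g., five repeated GPT-4o calls on the same image with the same prompt
runs = ["police_presence", "police_presence", "solidarity",
        "police_presence", "police_presence"]
print(intra_prompt_stability(runs))  # 0.8
```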
Positivity Bias in AI-Generated Summaries of User-Generated Content: Exploring Its Sources and Impact on Public Sentiment
Authors: Anna Yan Liu & Maggie Mengqing Zhang

Recently, platforms have increasingly deployed generative AI (GenAI) to summarize user-generated content (UGC) into AI-generated summaries (AIGS). However, the potential bias in AIGS and its impact on the public remain inadequately examined. We used Weibo, a leading social media platform in China, as a case to investigate these questions, focusing on public sentiment. Specifically, we explored whether AIGS are biased in representing emotions in UGC and whether such representation influences subsequent public sentiment. We empirically identify two sources of bias in the algorithmic processes underlying the production of AIGS from UGC: the sampling process, in which GenAI selects a subset of UGC, and the summarizing process, in which the summary is generated from the sampled content. Comparing emotions in AIGS, sampled UGC, and all UGC, we found evidence of bias in both processes. In our case, GenAI tends to favor positive UGC during the sampling process and produces summaries that further amplify this positivity, leading to an over-representation of positive sentiment in AIGS. Additionally, we utilized a Difference-in-Differences (DiD) design to explore the role of AIGS in public sentiment dynamics. Findings suggest that AIGS alone are insufficient to significantly influence public sentiment. Overall, this study provides important implications for deploying GenAI in public online discussions.
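As a rough illustration of the Difference-in-Differences logic mentioned in the abstract, the sketch below estimates a two-group, two-period DiD with statsmodels. The column names, data layout, and toy values are assumptions; the study's actual specification is not reproduced here.

```python
# Sketch of a two-group, two-period Difference-in-Differences estimate of
# the effect of AIGS exposure on post sentiment. Column names and toy data
# are assumptions; the study's specification likely includes controls.
import pandas as pd
import statsmodels.formula.api as smf

# One row per post: sentiment score, treatment flag (topic received an
# AI summary), and post-period flag (observed after the AIGS appeared).
df = pd.DataFrame({
    "sentiment": [0.20, 0.30, 0.25, 0.50, 0.10, 0.15, 0.20, 0.22],
    "treated":   [1, 1, 1, 1, 0, 0, 0, 0],
    "post":      [0, 0, 1, 1, 0, 0, 1, 1],
})

# The coefficient on treated:post is the DiD estimate.
model = smf.ols("sentiment ~ treated * post", data=df).fit()
print(model.params)
```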
Seeing the Surreal: Mapping Surrealism in Photorealistic AI-Generated Images Using Large Language Models
Authors: Xinyi Liu, Yingdan Lu, Qiyao Peng, Sijia Qian, Yilang Peng & Cuihua Shen

Photorealistic AI-generated images (AIGIs) are increasingly indistinguishable from real photographs, raising significant social concerns. While prior research focuses on the production quality and detection of photorealistic AIGIs, such research often overlooks their expressive features. This study focuses on surrealism as a key feature of AIGIs and introduces the concept of algorithmic surrealism to capture AIGIs' algorithmically driven and publicly accessible generative processes and consequences. Using 28,290 AIGIs collected from Instagram creators and a mixed-methods, Large Language Model (LLM)-assisted framework, we categorized physical, behavioral, and contextual surrealism at scale and found a pervasive presence of surrealism in AIGIs. Topic network and qualitative analyses show that algorithmic surrealism often appears in hybrid forms, indicates patterns of visual excess, reinforces stereotypes, transforms technical flaws into surreal aesthetic features, and exhibits visual homogenization tendencies. This study advances the theoretical understanding of surrealism and photorealism in the age of generative AI. Methodologically, it contributes to computational social science by demonstrating an LLM-based framework that integrates computational, qualitative, and network analyses to examine complex visual concepts.
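To make the LLM-assisted categorization step concrete, here is a hypothetical Python sketch that asks a vision-capable model to label the three surrealism types in one image via the OpenAI API. The prompt wording, category set, and model choice are placeholders, not the study's materials.

```python
# Hypothetical sketch of LLM-assisted surrealism coding for a single image.
# Prompt, categories, and model are illustrative; not the study's materials.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Classify any surreal elements in this image into one or more of: "
    "physical, behavioral, contextual. Answer with the labels only."
)

def categorize(image_url: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
        temperature=0,  # keep coding output as deterministic as possible
    )
    return response.choices[0].message.content

print(categorize("https://example.com/aigi.jpg"))  # placeholder URL
```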
Large Language Model-driven Multi-Agent Simulation for Fake News Diffusion Under Different Network Structures
Authors: Xinyi Li, Yu Xu, Yongfeng Zhang & Edward Malthouse

The spread of misinformation threatens societal trust and democratic processes, motivating extensive communication research on how misinformation diffuses through social networks. While simulation has long been a central tool for studying such processes, existing agent-based models typically rely on hand-crafted contagion rules that limit behavioral expressiveness. In this work, we propose a novel approach that uses generative AI (GenAI) as a methodological tool for simulating complex dynamics in information ecosystems. Specifically, we introduce a large language model (LLM)-driven multi-agent simulation framework in which LLM-driven agents make forwarding decisions conditioned on psychological traits, while misinformation propagates through given network structures. Simulations conducted across network topologies indicate that LLM-driven agents generate diffusion patterns that are both internally coherent and sensitive to structural properties of the network. Moreover, they exhibit emergent behavioral phenomena that are not replicated by conventional rule-based models. To assess external plausibility, we further conduct simulations on an empirical community derived from the Higgs dataset and show that the resulting diffusion patterns exhibit key qualitative regularities observed in real-world rumor propagation. Finally, we evaluate several intervention strategies and find that their effectiveness varies across network structures. Taken together, our results suggest that LLMs can serve as a flexible and expressive simulation component for studying information diffusion, enabling network-aware and behaviorally grounded analysis of misinformation dynamics beyond traditional modeling approaches.
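A schematic Python sketch of this kind of simulation loop follows: agents on a network decide whether to forward an item, with the LLM call replaced by a trait-conditioned stand-in. The traits, probabilities, and network parameters are placeholders, not the authors' implementation.

```python
# Schematic diffusion loop: exposed agents decide once whether to forward.
# llm_forward_decision stands in for an LLM call; a real run would prompt
# the model with the agent's persona and the message text.
import random
import networkx as nx

def llm_forward_decision(trait: str, message: str) -> bool:
    """Placeholder for the LLM-driven forwarding decision."""
    base_rate = {"credulous": 0.6, "skeptical": 0.2}[trait]
    return random.random() < base_rate

random.seed(42)
G = nx.watts_strogatz_graph(n=100, k=4, p=0.1, seed=42)  # toy topology
traits = {v: random.choice(["credulous", "skeptical"]) for v in G}

exposed, decided, forwarded = {0}, set(), set()  # node 0 seeds the item
for _ in range(10):  # simulation rounds
    for v in list(exposed - decided):
        decided.add(v)
        if llm_forward_decision(traits[v], "fake news item"):
            forwarded.add(v)
            exposed.update(G.neighbors(v))
print(f"exposed: {len(exposed)}, forwarded: {len(forwarded)}")
```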
Influence in Motion: Tracing Persuasive Dynamics via Multi‐Agent Networks
Authors: Ming Huang & Zi-Ke Zhang

A bottom-up, reproducible multi-agent simulation framework is presented for investigating opinion dynamics via LLM-based agents embedded in an endogenously evolving small-world network. The study simulates cognitive-affective agents deliberating a real-world controversy over multiple rounds, generating natural-language exchanges, instantaneous appraisals, and systemic reflections. Network topology co-evolves through agent-driven edge rewiring. Community detection and intra-community stance-variance analyses reveal three temporal phases (turbulence, coalescence, and consolidation), with opinion variance declining. Correlations between closeness centrality and Influence Score uncover heterogeneous influence patterns, exemplified by positively, negatively, and negligibly correlated agent archetypes. These results demonstrate that LLM-based generative agents can (1) reproduce key opinion and network structural dynamics (H1), (2) self-organize into stable communities with constructive opinion aggregation via endogenous rewiring (H2), and (3) convert network structure into persuasive influence through adaptive discourse strategies (H3). This framework bridges micro-level cognitive-affective processes and macro-level network phenomena, offering a versatile platform for computational communication research.
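The endogenous rewiring step can be illustrated with a small Python sketch: an agent drops its most opinion-distant tie and reconnects to a closer non-neighbor. The rewiring criterion and opinion scale here are assumptions for illustration; the paper's rule may differ.

```python
# Assumed rewiring rule for illustration: drop the most opinion-distant
# neighbor, reconnect to the closest non-neighbor. Not the paper's rule.
import random
import networkx as nx

random.seed(1)
G = nx.watts_strogatz_graph(n=50, k=4, p=0.1, seed=1)  # small-world start
opinion = {v: random.uniform(-1, 1) for v in G}        # stance in [-1, 1]

def rewire(G, v):
    neighbors = list(G.neighbors(v))
    candidates = [u for u in G if u != v and not G.has_edge(v, u)]
    if not neighbors or not candidates:
        return
    far = max(neighbors, key=lambda u: abs(opinion[u] - opinion[v]))
    near = min(candidates, key=lambda u: abs(opinion[u] - opinion[v]))
    G.remove_edge(v, far)
    G.add_edge(v, near)

for v in list(G):  # one rewiring pass over all agents
    rewire(G, v)
print(nx.average_clustering(G))
```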
Gender Representation in Large Language Models: A Cross-Linguistic and Cross-Model Analysis
The representation of gender in large language models (LLMs) can reflect and reinforce existing sociocultural inequalities. However, the nature of such gender biases can differ significantly across languages, influenced by linguistic features and a model's training data. In this study, we investigate gender representation in 24 open-weight LLMs across six linguistically distinct languages (English, German, Russian, Czech, Albanian, and Serbian). Extending beyond binary frameworks, we incorporate nonbinary individuals as response options and examine associations across psychometrically validated stereotype dimensions (agency, communality, dominance, weakness, and giftedness). Our analysis accounts for variations between and within model families and differences in sampling parameters. The results reveal that traditional gender stereotypes persist with varying degrees of strength, while nonbinary associations show substantial cross-linguistic variations. Temperature analysis demonstrates that such associations are deeply embedded in model parameters rather than being artifacts of sampling procedures. These findings suggest that gender bias identification and potential mitigation in LLMs are shaped by both contextual and technical factors. Overall, our findings challenge the notion that gender bias is a simple, measurable construct, highlighting its complex, context-dependent nature across languages, models, and stereotype dimensions. Effective bias mitigation requires interventions at the level of training data, model architecture, or alignment procedures.
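As an illustration of the temperature analysis the abstract mentions, the sketch below repeats a single stereotype probe across sampling temperatures with a small open-weight model and tallies the gendered answers. The model, prompt, and answer options are illustrative and are not the study's validated items.

```python
# Illustrative temperature sweep: if answer distributions barely move with
# temperature, the association is unlikely to be a sampling artifact.
# Model, prompt, and options are placeholders, not the study's materials.
from collections import Counter
from transformers import pipeline

gen = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")
PROMPT = ("A person is described as highly dominant. Are they a man, "
          "a woman, or a nonbinary person? Answer with one word: ")

for temp in (0.2, 0.7, 1.2):
    answers = Counter()
    for _ in range(20):  # repeated samples at this temperature
        out = gen(PROMPT, do_sample=True, temperature=temp,
                  max_new_tokens=4, return_full_text=False)
        answers[out[0]["generated_text"].strip().lower()] += 1
    print(temp, dict(answers))
```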
Reddit Conversation Laboratory: Field experiments with conversational AI agents
Authors: Jeremy Foote, Loizos Bitsikokos, Hitesh Goel & Deepak Kumar

LLM-based generative AI agents are the first autonomous technologies that can act as true conversational partners. Communication researchers and others have already begun to explore the influence of AI conversations on their interlocutors. We present a methodological framework and software tool for conducting field experiments with AI agents on Reddit. The Reddit Conversation Laboratory is Python-based software that identifies potential participants, messages them to obtain consent, and conducts conversational experiments using researcher-designed AI chatbots. In addition to storing all conversations, the software can record participant behavior before and after conversations. In this paper, we outline design principles, best practices for using the tool, and possibilities for future extensions.
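To make the recruit-and-consent step concrete, here is a hypothetical Python sketch using PRAW, the standard Reddit API wrapper. This is not the Reddit Conversation Laboratory's actual API; credentials, subreddit, and consent text are placeholders, and any real deployment requires ethics approval and compliance with Reddit's terms.

```python
# Hypothetical recruit-and-consent sketch with PRAW; not the tool's real API.
# Credentials, subreddit, and consent wording are placeholders. Real use
# requires IRB approval and compliance with Reddit's platform rules.
import praw

reddit = praw.Reddit(client_id="...", client_secret="...",
                     username="...", password="...",
                     user_agent="research-bot/0.1 (hypothetical)")

CONSENT = ("Hi! We are researchers studying online conversation. "
           "Reply YES if you consent to take part in a chat study.")

# Identify potential participants: recent posters in a target subreddit.
candidates = set()
for submission in reddit.subreddit("CasualConversation").new(limit=50):
    if submission.author:
        candidates.add(submission.author.name)

# Message a small batch with the consent request.
for name in sorted(candidates)[:10]:
    reddit.redditor(name).message(subject="Research invitation",
                                  message=CONSENT)
```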