Volume 8, Issue 1
  • E-ISSN: 2665-9085

Abstract

Computational methods can minimize the time and resources needed to manually code thousands of images. Yet they also come with challenges, including validation, algorithmic bias, and privacy concerns. Acknowledging that the pictorial turn has now entered a computational phase, this article reports on the manual and automated coding of 7,000+ images to better understand online extremist content. Using Rodriguez and Dimitrova's (2011) four-tiered model of visual framing, the study compares manual coding with OpenAI's ChatGPT-4o coding of Al-Qaeda and ISIS images across the denotative, semiotic, connotative, and ideological levels. AI coding exhibited moderate to strong performance on denotative variables but was weaker in the semiotic and connotative tiers. The study concludes with a discussion of the advantages of humans and AI working together to better understand visual framing.
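
To make the workflow concrete, the sketch below shows one way an image could be coded by ChatGPT-4o and the resulting labels compared with manual codes using Cohen's kappa. This is a minimal illustration under stated assumptions, not the study's actual instrument: the prompt wording, variable, file names, and human codes are hypothetical, and it assumes the OpenAI Python SDK and scikit-learn are installed.

```python
# Hypothetical sketch: have GPT-4o answer one closed-ended denotative coding
# question per image, then compare AI labels with manual codes via Cohen's
# kappa. Prompt wording, file names, and human codes are placeholders only.
import base64
from pathlib import Path

from openai import OpenAI                      # OpenAI Python SDK
from sklearn.metrics import cohen_kappa_score  # intercoder agreement measure

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def code_image(path: str, question: str) -> str:
    """Ask GPT-4o a single closed-ended coding question about one image."""
    data_url = "data:image/jpeg;base64," + base64.b64encode(
        Path(path).read_bytes()
    ).decode()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }],
        temperature=0,  # reduce variability for coding tasks
    )
    return response.choices[0].message.content.strip().lower()


# Example denotative question (hypothetical wording).
QUESTION = "Are any people visible in this image? Answer only 'yes' or 'no'."

image_paths = ["img_0001.jpg", "img_0002.jpg"]   # placeholder file names
manual_codes = ["yes", "no"]                     # placeholder human codes
ai_codes = [code_image(p, QUESTION) for p in image_paths]

# Agreement between the human and AI coder on this single variable.
print("Cohen's kappa:", cohen_kappa_score(manual_codes, ai_codes))
```

In practice, each tier of the framing model would require its own set of coding questions, with agreement statistics reported per variable rather than for the corpus as a whole.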

DOI: 10.5117/CCR2026.1.2.ELDA
  • Article Type: Research Article
Keyword(s): AI; Al-Qaeda; Extremism; ISIS; Photography; Visual Framing