2004
Volume 8, Issue 2 • E-ISSN: 2665-9085

Abstract

The representation of gender in large language models (LLMs) can reflect and reinforce existing sociocultural inequalities. However, the nature of such gender biases can differ significantly across languages, influenced by linguistic features and a model’s training data. In this study, we investigate gender representation in 24 open-weight LLMs across six linguistically distinct languages (English, German, Russian, Czech, Albanian, and Serbian). Extending beyond binary frameworks, we incorporate nonbinary individuals as response options and examine associations across psychometrically validated stereotype dimensions (agency, communality, dominance, weakness, and giftedness). Our analysis accounts for variations between and within model families and differences in sampling parameters. The results reveal that traditional gender stereotypes persist with varying degrees of strength, while nonbinary associations show substantial cross-linguistic variations. Temperature analysis demonstrates that such associations are deeply embedded in model parameters rather than being artifacts of sampling procedures. These findings suggest that gender bias identification and potential mitigation in LLMs are shaped by both contextual and technical factors. Overall, our findings challenge the notion that gender bias is a simple, measurable construct, highlighting its complex, context-dependent nature across languages, models, and stereotype dimensions. Effective bias mitigation requires interventions at the level of training data, model architecture, or alignment procedures.
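The study's setup — prompting models with trait words from validated stereotype dimensions and recording which gender option (including a nonbinary option) is chosen across languages — can be sketched as follows. This is a minimal illustration, not the authors' actual instrument: the trait words, option labels, and helper names (`build_prompts`, `association_scores`) are all assumptions introduced here for clarity.

```python
from collections import Counter

# Hypothetical stimuli: a few trait words per stereotype dimension and
# gender response options per language. The paper uses psychometrically
# validated dimensions (agency, communality, dominance, weakness,
# giftedness) and six languages; this sketch shows only a fragment.
DIMENSIONS = {
    "agency": ["assertive", "decisive"],
    "communality": ["caring", "supportive"],
}
OPTIONS = {
    "English": ["a man", "a woman", "a nonbinary person"],
    "German": ["ein Mann", "eine Frau", "eine nichtbinäre Person"],
}

def build_prompts(dimensions, options):
    """Cross every trait word with each language's response options."""
    prompts = []
    for lang, opts in options.items():
        for dim, traits in dimensions.items():
            for trait in traits:
                prompts.append({
                    "language": lang,
                    "dimension": dim,
                    "prompt": (f"Who is more likely to be {trait}? "
                               f"Options: {', '.join(opts)}"),
                })
    return prompts

def association_scores(responses):
    """Proportion of times each gender option was chosen per dimension.

    `responses` is a list of dicts with "dimension" and "choice" keys,
    i.e. the parsed model outputs for each prompt (possibly repeated
    across temperature settings, as in the paper's sampling analysis).
    """
    counts = Counter((r["dimension"], r["choice"]) for r in responses)
    totals = Counter(r["dimension"] for r in responses)
    return {key: n / totals[key[0]] for key, n in counts.items()}
```

Running `build_prompts` over this fragment yields 2 languages × 2 dimensions × 2 traits = 8 prompts; feeding the parsed model choices into `association_scores` gives per-dimension choice proportions, which can then be compared across models, languages, and temperatures.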

DOI: 10.5117/CCR2026.2.11.URMA
2026-01-01
2026-04-17


  • Article Type: Other
Keyword(s): Bias, Gender, Multilingual, LLM, generative AI