Sci Rep. 2025 Oct 16;15(1):36197. doi: 10.1038/s41598-025-19980-x.

ABSTRACT

Recent advancements in Large Language Models (LLMs) present promising opportunities for applying these technologies to aid the detection and monitoring of Major Depressive Disorder. However, demographic biases in LLMs may hinder the extraction of key information, and concerns persist about whether these models perform equally well across diverse populations. This study investigates how demographic factors, specifically age and gender, affect the performance of LLMs in classifying depression symptom severity across multilingual datasets. By systematically balancing and evaluating datasets in English, Spanish, and Dutch, we aim to uncover performance disparities linked to demographic representation and linguistic diversity. The findings from this work can directly inform the design and deployment of more equitable LLM-based screening systems. Gender had varying effects across models, whereas age consistently produced more pronounced differences in performance. Model accuracy also varied noticeably across languages. This study emphasizes the need to incorporate demographic-aware models in health-related analyses, raises awareness of the biases that may affect their application in mental health, and suggests further research on methods to mitigate these biases and enhance model generalization.
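The balanced, per-group evaluation the abstract describes can be illustrated with a short sketch. The code below is not the authors' implementation; the record fields (`language`, `gender`, `age_group`, `true_severity`, `predicted_severity`) and helper names are assumptions made for illustration. It shows one common way to downsample each demographic group to a common size before comparing per-group classification accuracy.

```python
# Minimal sketch (assumed field names, not the study's code): balance a labeled
# dataset across a demographic attribute, then report accuracy per group.
import random
from collections import defaultdict

def balance_by_group(records, group_key, seed=0):
    """Downsample every demographic group to the size of the smallest group."""
    groups = defaultdict(list)
    for r in records:
        groups[r[group_key]].append(r)
    n = min(len(g) for g in groups.values())
    rng = random.Random(seed)
    return [r for g in groups.values() for r in rng.sample(g, n)]

def accuracy_by_group(records, group_key):
    """Per-group accuracy of predicted vs. true severity labels."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r[group_key]] += 1
        correct[r[group_key]] += int(r["predicted_severity"] == r["true_severity"])
    return {g: correct[g] / total[g] for g in total}

# Toy usage: records would come from the English, Spanish, or Dutch datasets,
# with predictions produced by the LLM under evaluation.
records = [
    {"language": "en", "gender": "F", "age_group": "18-30",
     "true_severity": "moderate", "predicted_severity": "moderate"},
    {"language": "en", "gender": "M", "age_group": "46-60",
     "true_severity": "severe", "predicted_severity": "moderate"},
    # ... more records per language and demographic group
]
balanced = balance_by_group(records, "gender")
print(accuracy_by_group(balanced, "gender"))
```

Comparing the per-group accuracies on such balanced subsets, within and across languages, is one straightforward way to surface the age-, gender-, and language-linked disparities the study reports.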

PMID:41102216 | DOI:10.1038/s41598-025-19980-x