Study Finds LLMs Cannot Reliably Simulate Human Psychology

Researchers from Bielefeld University and Purdue University have published Large Language Models Do Not Simulate Human Psychology, presenting conceptual and empirical evidence that large language models (LLMs) cannot be treated as consistent simulators of human psychological responses (Schröder et al. 2025).

Background and scope

Since 2018, LLMs such as GPT-3.5, GPT-4, and Llama-3.1 have been applied to tasks ranging from content creation to education (Schröder et al. 2025). Some researchers have proposed that LLMs could replace human participants in psychological studies by responding to prompts that describe a persona, present a stimulus, and provide a questionnaire (Almeida et al. 2024; Kwok et al. 2024). The CENTAUR model was fine-tuned on approximately 10 million human responses from 160 experiments to generate human-like answers in such settings (Binz et al. 2025).

Earlier work found high alignment between LLM and human moral judgments. For example, Dillion et al. (2023) reported a correlation of 0.95 between GPT-3.5 ratings and human ratings across 464 moral scenarios. Follow-up studies found that GPT-4o's moral reasoning was judged more trustworthy and correct than responses from lay participants or expert ethicists (Dillion et al. 2025). Specialized models like Delphi, trained on crowdsourced moral judgments, also outperformed general-purpose LLMs in moral reasoning tasks (Jiang et al. 2025).

Conceptual critiques

The authors summarize multiple critiques of treating LLMs as simulators of human psychology. First, LLMs often respond inconsistently to instructions, with output quality highly dependent on the detail and framing of the prompt (Zhu et al. 2024; Wang et al. 2025). Second, results vary across model types and across rephrasings of the same prompt (Ma 2024). Third, while LLMs can approximate average human responses, they fail to reproduce the full variance of human opinions, including cultural diversity (Rime 2025; Kwok et al. 2024).

Bias is another concern. LLMs inherit cultural, gender, occupational, and socio-economic biases from training data, which may differ systematically from human biases (Rossi et al. 2024). They also produce “hallucinations”, factually incorrect or fictional content, without an internal mechanism for distinguishing truth from falsehood (Huang et al. 2025; Reddy et al. 2024).

Theoretical work supports these critiques. Van Rooij et al. (2024) mathematically demonstrated that no computational model trained solely on observational data can match human responses across all inputs. From a machine learning perspective, the authors argue that LLM generalization is limited to token sequences similar to the training data and does not extend to novel inputs with different meanings. This matters because using LLMs as simulated participants requires that they generalize meaningfully to new experimental setups.

Empirical testing with moral scenarios

The team tested their argument using 30 moral scenarios from Dillion et al. (2023) with human ratings from prior studies (Clifford et al. 2015; Cook and Kuhn 2021; Effron 2022; Grizzard et al. 2021; Mickelberg et al. 2022). Each scenario was presented in its original wording and in a slightly reworded version with altered meaning but similar token sequences. For example, “cut the beard off a local elder to shame him” became “cut the beard off a local elder to shave him” (Schröder et al. 2025).

Human participants (N = 374; mean age = 39.54 years, SD = 12.53) were recruited via Prolific and randomly assigned to the original or the reworded condition. They rated each behavior on a scale from -4 (extremely unethical) to +4 (extremely ethical). LLM ratings were obtained from GPT-3.5, GPT-4 (mini), Llama-3.1 70b, and CENTAUR, with each query repeated 10 times to account for random variation (Schröder et al. 2025).
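A minimal sketch of how such repeated model ratings might be collected and averaged is shown below. The `rate_scenario` function, the prompt wording, and the `model_client.complete()` interface are illustrative assumptions, not the authors' code; the real OpenAI, Llama, and CENTAUR APIs would each need their own wrapper.

```python
import statistics

N_REPEATS = 10  # each scenario is queried 10 times to account for random variation

PROMPT = (
    "Rate how ethical the following behavior is on a scale from "
    "-4 (extremely unethical) to +4 (extremely ethical). "
    "Answer with a single integer.\n\nBehavior: {scenario}"
)

def rate_scenario(model_client, scenario: str) -> float:
    """Query one model N_REPEATS times for a scenario and return the mean rating.

    `model_client` is assumed to expose a `complete(prompt: str) -> str` method;
    this is a placeholder for whatever client the chosen model provides.
    """
    ratings = []
    for _ in range(N_REPEATS):
        reply = model_client.complete(PROMPT.format(scenario=scenario))
        try:
            value = int(reply.strip().split()[0])
        except ValueError:
            continue  # skip replies that cannot be parsed as an integer
        if -4 <= value <= 4:
            ratings.append(value)
    return statistics.mean(ratings) if ratings else float("nan")
```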

Results

For the original items, correlations between human and LLM ratings replicated prior findings: GPT-3.5 and GPT-4 both correlated above 0.89 with human ratings, and Llama-3.1 and CENTAUR also showed high alignment (r ≥ 0.80) (Schröder et al. 2025). For the reworded items, however, human ratings correlated only 0.54 with the original-item ratings, reflecting sensitivity to the altered meanings (Schröder et al. 2025).
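The two comparisons behind these numbers can be illustrated with item-level mean ratings: human versus model means on the original items, and each rater's original-item means versus their reworded-item means. The sketch below uses SciPy with randomly generated placeholder arrays standing in for the published scenario means; it shows the analysis structure only, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Placeholder data: 30 item-level mean ratings per condition on the -4..+4 scale.
# The actual analysis would use the scenario means reported in the study.
human_original = rng.uniform(-4, 4, size=30)
human_reworded = rng.uniform(-4, 4, size=30)
model_original = rng.uniform(-4, 4, size=30)

# Human-model alignment on the original items (reported r > 0.89 for GPT-3.5 and GPT-4).
r_model, _ = pearsonr(human_original, model_original)

# Within-human consistency across the rewording (reported r = 0.54),
# i.e. how strongly the meaning change shifts human judgments.
r_human, _ = pearsonr(human_original, human_reworded)

print(f"human vs. model (original items): r = {r_model:.2f}")
print(f"human original vs. reworded:      r = {r_human:.2f}")
```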
