Boosting Theory-of-Mind Performance in Large Language Models via Prompting

Large language models (LLMs) have shown remarkable success on many tasks, but they still struggle with complex reasoning. One area of particular interest is theory-of-mind (ToM) reasoning: inferring the beliefs, goals, and mental states of other agents. This study measures the ToM performance of GPT-4 and three GPT-3.5 variants, and investigates whether in-context learning can improve their ToM comprehension. The authors find that appropriate prompting enhances LLM ToM reasoning, underscoring the context-dependent nature of LLM cognitive capacities. The capacity of LLMs to reliably perform ToM reasoning matters for several reasons, including social understanding and inferential reasoning.
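To make the in-context-learning setup concrete, here is a minimal sketch of few-shot, step-by-step prompting on a false-belief ToM question, the classic test of whether a reasoner tracks an agent's outdated belief. The scenario text, worked example, system message, and model identifier are illustrative assumptions, not the study's exact materials.

```python
# Minimal sketch of few-shot prompting for a theory-of-mind
# (false-belief) question. The scenario, worked example, and
# model name are illustrative assumptions, not the study's
# exact prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One worked example (the in-context demonstration) pairing a
# false-belief scenario with explicit step-by-step reasoning.
FEW_SHOT_EXAMPLE = """\
Scenario: Anna puts her keys in the drawer and leaves. While she
is away, Ben moves the keys to the shelf. Anna returns.
Question: Where will Anna look for her keys first?
Reasoning: Anna last saw the keys in the drawer. She did not see
Ben move them, so her belief is outdated. She will act on her
belief, not on the true location.
Answer: The drawer.
"""

# A new scenario for the model to solve in the same format.
TEST_SCENARIO = """\
Scenario: Maya puts the chocolate in the red box and goes outside.
Her brother moves it to the blue box while she is gone.
Question: Where will Maya look for the chocolate first?
Reasoning:"""

response = client.chat.completions.create(
    model="gpt-4",  # assumed model identifier
    messages=[
        {"role": "system",
         "content": "Answer questions about what characters believe. "
                    "Reason step by step before answering."},
        {"role": "user", "content": FEW_SHOT_EXAMPLE + "\n" + TEST_SCENARIO},
    ],
    temperature=0,
)
print(response.choices[0].message.content)
```

The key design choice in this style of prompt is that the demonstration models the reasoning pattern (track what the character saw, not what is true), so the model is nudged to separate the agent's belief from the world state on the new question.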