Can LLM “Self-report”?: Evaluating the Validity of Self-report Scales in Measuring Personality Design in LLM-based Chatbots
H. Zou, P. Wang, Z. Yan, T. Sun, Z. Xiao (Submitted).
Keywords: Human Factors in NLP; Evaluation Methodologies
Personality design plays an important role in chatbot development. From rule-based chatbots to LLM-based chatbots, evaluating the effectiveness of personality design has become more challenging due to increasingly open-ended interactions. A recent popular approach uses self-report questionnaires to assess LLM-based chatbots’ personality traits. However, this approach raises a serious validity concern: can LLM-based chatbots “self-report” their personality? We created 500 chatbots with various personality designs and evaluated the validity of self-report personality scales for measuring LLM-based chatbots’ personality. Our findings indicate that the chatbots’ answers on human personality scales exhibit only weak correlations with both user perception and interaction quality, raising concerns about the criterion and predictive validity of this method. Further analysis revealed the role of task context and interaction in assessing a chatbot’s personality design. We discuss design implications for building contextualized and interactive evaluations of chatbot personality design.
This paper is under review.