Enhancing questioning skills through child avatar chatbot training with feedback
Permanent link
https://hdl.handle.net/10037/30668
Date
2023-07-13
Type
Journal article
Peer reviewed
Authors
Røed, Ragnhild Klingenberg; Baugerud, Gunn Astrid; Zohaib Hassan, Syed; Shafiee Sabet, Saeed; Salehi, Pegah; Powell, Martine B.; Riegler, Michael Alexander; Halvorsen, Pål; Sinkerud Johnson, Miriam
Abstract
Training child investigative interviewing skills is a specialized task. Those being
trained need opportunities to practice their skills in realistic settings and receive
immediate feedback. A key step in ensuring the availability of such opportunities
is to develop a dynamic, conversational avatar using artificial intelligence (AI)
technology that can provide implicit and explicit feedback to trainees. In this
iterative development process, a chatbot avatar is used to test the language and
conversation model, which is fine-tuned with interview data and realistic
scenarios. This study used a pre-post training design to assess the learning
effects on questioning skills across four child interview sessions that involved
training with a child avatar chatbot fine-tuned with interview data and realistic
scenarios. Thirty university students from the areas of child welfare, social
work, and psychology were divided into two groups; one group received direct
feedback (n = 12), whereas the other received no feedback (n = 18). An automatic
coding function in the language model identified the question types. Information
on question types was provided as feedback in the direct feedback group only.
The scenario included a 6-year-old girl being interviewed about alleged physical
abuse. After the first interview session (baseline), all participants watched a video
lecture on memory, witness psychology, and questioning before they conducted
two additional interview sessions and completed a post-experience survey. One
week later, they conducted a fourth interview and completed another post-experience survey. All chatbot transcripts were coded for interview quality. The
language model’s automatic feedback function was found to be highly reliable
in classifying question types, reflecting the substantial agreement among the
raters [Cohen’s kappa (κ) = 0.80] in coding open-ended, cued recall, and closed
questions. Participants who received direct feedback showed a significantly
higher improvement in open-ended questioning than those in the non-feedback
group, with a significant increase in the number of open-ended questions used
between the baseline and each of the other three chat sessions. This study
demonstrates that child avatar chatbot training improves interview quality with
regard to recommended questioning, especially when combined with direct
feedback on questioning.
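
The reported reliability of the automatic question-type coding is expressed as Cohen's kappa (κ = 0.80) between raters. As an illustrative sketch only (not the authors' code), the following Python snippet shows how kappa is computed for two raters labelling questions as open-ended, cued recall, or closed; the labels below are hypothetical.

# Illustrative sketch: Cohen's kappa for two raters coding question types.
# Labels are made up for demonstration; this is not the study's data or code.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    # and p_e is the agreement expected by chance from each rater's marginals.
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[t] * counts_b[t] for t in set(counts_a) | set(counts_b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of ten questions from one chatbot transcript
rater_a = ["open-ended", "closed", "cued recall", "closed", "open-ended",
           "cued recall", "closed", "open-ended", "closed", "cued recall"]
rater_b = ["open-ended", "closed", "cued recall", "closed", "open-ended",
           "closed", "closed", "open-ended", "closed", "cued recall"]

print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")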
Publisher
Frontiers Media
Citation
Røed, Baugerud, Zohaib Hassan, Shafiee Sabet, Salehi, Powell, Riegler, Halvorsen, Sinkerud Johnson. Enhancing questioning skills through child avatar chatbot training with feedback. Frontiers in Psychology. 2023
Copyright 2023 The Author(s)