JMIR Hum Factors 2023 (May 05); 10(1):e43227
HTML (open access): https://humanfactors.jmir.org/2023/1/e43227
PDF (free): https://humanfactors.jmir.org/2023/1/e43227/PDF
Background: Reducing lifestyle risk behaviors among adolescents depends on access to age-appropriate health promotion information. Chatbots—computer programs designed to simulate conversations with human users—have the potential to deliver health information to adolescents to improve their lifestyle behaviors and support behavior change, but the feasibility and acceptability of chatbots in the adolescent population remain unknown.
Objective: This systematic scoping review aims to evaluate the feasibility and acceptability of chatbots in nutrition and physical activity interventions among adolescents. A secondary aim is to consult adolescents to identify features of chatbots that are acceptable and feasible…
The outcome of this study seems to suggest that using chatbots for this purpose was inappropriate. Although I did not read the entire paper, this seems a likely conclusion for most topics at our current stage, since there are many potential failure points for this kind of application. Was it a failure of the chatbot itself in this study, or could it have been an experience design issue?
Hi Sheena and Douglas,
Yeah, it was interesting to see what looks like a failure. My interpretation would be that the technology is still too "new" in the game to be successful; as noted in the results, the failure centers on ethical concerns and the use of false or misleading information.
Like Douglas, I didn't read the entire paper, but coming from a research background, it simply appears there has not been enough research in this area, or enough real-world use of chatbots, to draw firm conclusions. While this study was testing feasibility, the work could be extended: for example, going through a Research Ethics Board to address the ethical concerns, and improving the validity of the sources used to feed the AI/chatbot to address the false or misleading information.
Hi Douglas and Emma,
Emma, I agree with your interpretation. Likewise, I don't see this as a failure so much as a step forward, since it is a preliminary report on the present research idea.
Given the significant morbidity and mortality among young people with eating disorders, this study should encourage researchers to explore the chatbot arena further.
We lack sufficient AI tools to address this topic, which is ironic given how AI devices have become almost part of our lives and, most significantly, how AI and chatbots have formed close bonds with so many people, especially young adults.
A device that could support people through the various stages of an eating disorder, and that would feel both familiar and supportive, is a laudable goal. Yes, I agree there is room for much more research, but the potential could be really significant.