Virtual conversational agents versus online forms: Patient experience and preferences for health data collection

Soni, Hiral and Ivanova, Julia and Wilczewski, Hattie and Bailey, Alexandra and Ong, Triton and Narma, Alexa and Bunnell, Brian E. and Welch, Brandon M. (2022) Virtual conversational agents versus online forms: Patient experience and preferences for health data collection. Frontiers in Digital Health, 4. ISSN 2673-253X

Full text: fdgth-04-954069.pdf - Published Version (869 kB)

Abstract

Objective: Virtual conversational agents, or chatbots, have emerged as a novel approach to health data collection. However, research on patient perceptions of chatbots in comparison to traditional online forms is sparse. This study aimed to compare and assess the experience of completing a health assessment using a chatbot vs. an online form.

Methods: A counterbalanced, within-subject experimental design was used with participants recruited via Amazon Mechanical Turk (mTurk). Participants completed a standardized health assessment using a chatbot (i.e., Dokbot) and an online form (i.e., REDCap), each followed by usability and experience questionnaires. To address poor data quality and preserve the integrity of mTurk responses, we employed a thorough data cleaning process informed by previous literature. Quantitative (descriptive and inferential statistics) and qualitative (thematic analysis and complex coding query) approaches were used for analysis.
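The abstract does not detail the specific cleaning criteria. As a rough, hypothetical illustration only, the sketch below shows the kinds of quality filters commonly applied to mTurk survey data (duplicate workers, implausibly fast completions, failed attention checks); the column names and thresholds are assumptions, not the study's actual procedure.

    import pandas as pd

    # Hypothetical mTurk quality filters; the study's actual criteria and
    # column names are not specified in this abstract.
    def clean_mturk_responses(df: pd.DataFrame,
                              min_seconds: float = 120,
                              attention_col: str = "attention_check_passed") -> pd.DataFrame:
        cleaned = df.drop_duplicates(subset="worker_id")                # one response per worker
        cleaned = cleaned[cleaned["duration_seconds"] >= min_seconds]   # drop implausibly fast runs
        cleaned = cleaned[cleaned[attention_col]]                       # keep only passed attention checks
        return cleaned.reset_index(drop=True)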

Results: A total of 391 participants were recruited, 185 of whom were excluded, resulting in a final sample size of 206 individuals. Most participants (69.9%) preferred the chatbot over the online form. Average Net Promoter Score was higher for the chatbot (NPS = 24) than the online form (NPS = 13) at a statistically significant level. System Usability Scale scores were also higher for the chatbot (i.e. 69.7 vs. 67.7), but this difference was not statistically significant. The chatbot took longer to complete but was perceived as conversational, interactive, and intuitive. The online form received favorable comments for its familiar survey-like interface.
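For reference, both metrics follow standard scoring rules: NPS is the percentage of promoters minus the percentage of detractors on a 0-10 "likelihood to recommend" scale, and the System Usability Scale rescales ten 5-point items to a 0-100 score. The sketch below illustrates these conventional formulas; it is not taken from the study's analysis code.

    def net_promoter_score(ratings):
        """NPS from 0-10 ratings: % promoters (9-10) minus % detractors (0-6),
        yielding a value between -100 and 100."""
        promoters = sum(r >= 9 for r in ratings)
        detractors = sum(r <= 6 for r in ratings)
        return 100 * (promoters - detractors) / len(ratings)

    def sus_score(item_responses):
        """SUS for one respondent: ten items rated 1-5; odd items contribute
        (rating - 1), even items (5 - rating); the sum is scaled by 2.5."""
        assert len(item_responses) == 10
        contributions = [
            (r - 1) if i % 2 == 0 else (5 - r)   # i is 0-based, so even i = odd-numbered item
            for i, r in enumerate(item_responses)
        ]
        return 2.5 * sum(contributions)

    print(sus_score([3] * 10))                    # neutral respondent -> 50.0
    print(net_promoter_score([10, 9, 8, 7, 3]))   # 2 promoters, 1 detractor of 5 -> 20.0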

Conclusion: Our findings demonstrate that the chatbot provided superior engagement, intuitiveness, and interactivity despite a longer completion time compared with the online form. Knowledge of patient preferences and barriers will inform the design and development of future chatbots and best-practice recommendations for healthcare data collection.

Item Type: Article
Subjects: Eurolib Press > Multidisciplinary
Depositing User: Managing Editor
Date Deposited: 17 Mar 2023 05:21
Last Modified: 04 Apr 2024 09:01
URI: http://info.submit4journal.com/id/eprint/886
