Study finds over 60% of adults want to be notified, with preferences varying by age, gender, race and education
In a recent article published in JAMA Network Open, researchers explored the US public’s perceptions of and attitudes toward the use of artificial intelligence (AI) in healthcare.
Their findings indicate that the majority of respondents want to be informed about the use of artificial intelligence in the healthcare services they access.
Background
Patient notification is an important part of research and clinical ethics, with informed consent and data privacy laws among its foundations. Meanwhile, applications of artificial intelligence are growing rapidly across industries, including healthcare.
Although policy frameworks and AI ethics experts emphasize that transparency through notification is critical to the appropriate use of AI tools, health systems lack standardized policies and recommendations on how to notify patients.
Public expectations on this issue are also not well understood. Research in this area can support health systems and policy makers in setting priorities and strengthening notification processes.
About the study
In 2023, researchers conducted surveys to understand the attitudes of the American public toward healthcare-related applications of artificial intelligence. This survey included a video explaining the use of artificial intelligence in this field and elicited perspectives using scenario-based questions. The researchers validated the research through stakeholder feedback and cognitive interviews.
The survey was administered to a representative sample of US residents, with Hispanic and Black participants oversampled to ensure that group comparisons could be made accurately. Ethical guidelines were followed throughout the procedure, and participants gave informed consent.
Participants were asked how important it was for them to be informed about the use of artificial intelligence in the healthcare services they accessed. Possible responses ranged from “very true,” scored as four, to “not at all true,” scored as one. The researchers applied survey weights to these responses and compared them across demographic groups defined by education, ethnicity, race, age, and gender.
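The survey-weighted averages reported below can be illustrated with a small sketch. The weights and responses here are hypothetical, invented for illustration; the study's actual weighting scheme and data are not reproduced.

```python
# Minimal sketch of a survey-weighted mean on a 4-point Likert scale.
# Responses and weights below are hypothetical, not the study's data.

def weighted_mean(responses, weights):
    """Weighted average of Likert scores (1 = "not at all true", 4 = "very true")."""
    total_weight = sum(weights)
    return sum(r * w for r, w in zip(responses, weights)) / total_weight

# Example: three respondents with illustrative post-stratification weights.
responses = [4, 3, 4]
weights = [1.2, 0.8, 1.0]
print(round(weighted_mean(responses, weights), 2))  # prints 3.73
```

Weighting in this way lets a sample in which some groups were deliberately oversampled still yield estimates representative of the full population.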
Findings
The study included 2,021 adults, whose weighted average response was 3.39 out of 4, which showed a general agreement that notification about the use of artificial intelligence in healthcare was important to them. Overall, the majority of respondents, nearly 63%, said it was very important to them to be notified, while less than 5% did not consider it important.
Women responded with a mean score of 3.45, rating the importance of notification higher than men, whose mean score was 3.32. Older adults, specifically those over 60, expressed the greatest desire for notification, rating its importance at 3.57. In contrast, younger adults between the ages of 18 and 29 showed the least concern, with an average score of 3.14. This difference was statistically significant.
By ethnicity and race, non-Hispanic White respondents rated the importance of notification the highest, with an average score of 3.46, while Hispanic respondents reported a score of 3.28, Black respondents 3.21, and other groups 3.33. Differences between groups were statistically significant.
Comparing groups based on education, the researchers found that respondents with graduate education or a Bachelor’s degree showed the most concern, rating the importance of notification at about 3.5. Those with less than a high school education rated it lower, with a score of 3.14. Differences between education levels were statistically significant.
Conclusions
A previous study found that people preferred to be informed about the use of health information (mean score: 3.15) slightly more than about biological samples (3.13). This study showed an even greater preference for notification about the use of artificial intelligence in healthcare.
Although limited by its cross-sectional design, the findings highlight the need for transparent AI practices. Policymakers and health organizations should consider informing the public about AI, focusing not only on whether but also on how and when to inform patients.
Demographic differences raise ethical issues. Women were more likely than men to value notification, and White respondents showed a greater preference than Black respondents. This suggests that disclosure, while important, must also be designed with historical inequalities in mind.
Collaborative approaches involving experts, the public, and patients are essential to creating transparent and trustworthy health systems. Multiple strategies for communicating about AI use will be needed to ensure ethical implementation and build public trust in AI systems in healthcare.