The development of AI systems designed to augment the abilities of older adults experiencing cognitive decline raises a host of ethical issues.
Voice assistants like Alexa or Google Assistant are commonly used for basic tasks such as making lists and checking the weather, but the potential of these technologies goes well beyond that. Imagine an AI assistant that could summarize doctor’s appointments, remind individuals to take medications, prioritize schedules, and even generate shopping reminders based on recipes, all without explicit prompts from the user. By using artificial intelligence to shoulder the cognitive load of everyday tasks, such smart assistants could help older adults maintain their independence and autonomy.
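As a toy illustration of that kind of proactive support, consider a minimal sketch of turning a planned recipe into a shopping reminder with no explicit prompt from the user. Everything here is hypothetical: no such product or API exists yet, and the names and data are invented for the example.

```python
# Purely illustrative sketch of proactive assistance: derive a shopping
# reminder from a planned recipe and what is already on hand.
# All names and data are hypothetical, not drawn from any real system.

def shopping_reminder(recipe_ingredients: set[str], pantry: set[str]) -> list[str]:
    """Return the items the user still needs to buy for a planned recipe."""
    return sorted(recipe_ingredients - pantry)

planned = {"chicken", "rice", "carrots", "soy sauce"}
on_hand = {"rice", "soy sauce"}

print(shopping_reminder(planned, on_hand))  # ['carrots', 'chicken']
```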
Next-generation smart assistants like these are not yet on the market, but ongoing research is actively working toward them. These systems aim to be proactive, anticipating users’ needs and desires, and even to facilitate social interactions with users’ support networks. Precisely because they are designed to augment the abilities of older adults facing cognitive decline, however, their design demands careful ethical scrutiny.
Recognizing the importance of addressing these concerns early, researchers from the NSF AI Institute for Collaborative Assistance and Responsive Interaction for Networked Groups (AI-CARING) have outlined some of them. Their article, “Ethical Issues in Near-Future Socially Supportive Smart Assistants for Older Adults,” published in the journal IEEE Transactions on Technology and Society, aims to encourage designers to weigh these issues while creating advanced smart assistants.
Jason Borenstein, professor of ethics and director of Graduate Research Ethics Programs at Georgia Tech, highlighted the importance of addressing ethical issues early in the development of these systems. If ethical concerns are not properly addressed, he noted, families might set up their relatives with such systems without fully understanding the potential risks; considering the issues up front helps ensure users’ safety and security. The aim is to give designers a clear picture of the ethical landscape surrounding smart assistants before the technology enters people’s homes.
The AI-CARING researchers point out that people who depend on AI systems become uniquely vulnerable to those systems’ shortcomings. This is particularly concerning for individuals with age-related cognitive impairment, who rely on the technology for complex forms of assistance and whose vulnerability grows as their health declines. When these systems fail to function properly, they put the welfare of older adults at significant risk.
According to Alex John London, the paper’s lead author and K&L Gates Professor of Ethics and Computational Technologies at Carnegie Mellon University, a system’s mistake may matter little when the task is trivial, such as helping select a movie. Relying on the system for critical tasks, such as medication reminders, is a different matter: if it fails to issue a reminder, or gives incorrect information about a medication, the consequences can be serious.
The researchers emphasize that building a user-centric system that genuinely promotes well-being requires weighing several factors: trust, reliance, privacy, and the user’s evolving cognitive abilities. It is also crucial that the system serve the user’s own objectives rather than those of external parties, such as family members or companies seeking to market products to the user.
Such a system would need a sophisticated, continuously adapting model of the user and their preferences, built from data drawn from diverse sources. And to function well, a smart assistant might need to share some user information with other parties, which could expose the user to risk.
For example, a user might want their physician’s office to know that they have requested an appointment, but not want that information shared with all of their children, or want it shared with one child and not the others. The researchers suggest that designers should look for ways of sharing personal information that respect and preserve the user’s control over who learns what.
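To make that idea concrete, here is a minimal sketch of a default-deny, per-recipient sharing policy. The structure, category names, and recipients are illustrative assumptions, not a design prescribed by the paper.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of user-controlled information sharing:
# nothing is disclosed unless the user has explicitly approved
# that recipient for that category of information.

@dataclass
class SharingPolicy:
    # Maps an information category (e.g., "appointments") to the
    # set of recipients the user has approved for that category.
    approved: dict[str, set[str]] = field(default_factory=dict)

    def allow(self, category: str, recipient: str) -> None:
        self.approved.setdefault(category, set()).add(recipient)

    def revoke(self, category: str, recipient: str) -> None:
        self.approved.get(category, set()).discard(recipient)

    def may_share(self, category: str, recipient: str) -> bool:
        # Default-deny: share only with explicitly approved recipients.
        return recipient in self.approved.get(category, set())

policy = SharingPolicy()
policy.allow("appointments", "physician_office")
policy.allow("appointments", "daughter")  # one child, not all

assert policy.may_share("appointments", "physician_office")
assert not policy.may_share("appointments", "son")
```

A default-deny rule like this puts the burden of disclosure on explicit user consent rather than asking the user to opt out after the fact.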
Both overtrust and undertrust in the system’s capabilities must be taken into account. Overtrust occurs when people attribute to the technology abilities it does not have, creating risk when the system fails to perform as expected. Undertrust poses its own problem: if the system can help with essential tasks but the person declines to use it, they may go without needed help. Striking the right balance of trust is crucial to realizing the system’s benefits while avoiding its pitfalls.
London said the aim of the analysis is to highlight what it takes to build genuinely assistive AI systems and to encourage designers to integrate these considerations from the outset. Doing so lets stakeholders set performance benchmarks that reflect ethical requirements, an approach far more effective than trying to address ethical concerns after a system has already been designed, developed, and tested.
Borenstein emphasizes that when smart assistants are developed and brought into homes, the well-being and goals of the primary user must come first. Designers may have good intentions, he believes, but cross-disciplinary exchanges with people who hold diverse perspectives on such technologies can contribute valuable insights to the design process.