Blog – Ambient Intelligence as a Public Good in Healthcare: What Public Health Ethics Can Teach Us

This editorial appears in the February Issue of the American Journal of Bioethics

Ambient intelligence (AMI) promises to transform healthcare across the continuum by embedding continuous sensing and interpretation into the physical spaces where patients receive care. Ambient intelligence systems (AIS) use multimodal sensors to monitor and interpret activity in real time, offering more data, better predictions, and improved patient outcomes with less intrusion on clinicians' time. Yet the very features that make AIS attractive (ubiquity, continuity, and automation) also raise ethical concerns. While AIS allows for richer input streams that can improve detection of patient deterioration, such as falls, and relieve documentation burden, it also operates with a wide, indiscriminate gaze, capturing patients, families, clinicians, and staff without the opportunity to opt out. Given the range of potentially conflicting interests among a diverse set of stakeholders, managing autonomy, privacy, and consent has become extremely complex, and there is no consensus on how these tensions should be resolved.

Herington and Cho argue that traditional bioethics is ill-equipped to address these concerns and instead propose a justice-first framework grounded in Rawlsian political philosophy. They argue that the original position and "veil of ignorance" may be better suited to address the challenges of AIS within a larger context. We make an analogous argument: AIS is not solely an issue of clinical or research ethics, but also one of public health ethics. Nancy Kass's influential public health ethics framework was developed to address interventions where individual autonomy cannot be assumed and multiple stakeholders are involved. To apply Kass's framework to AIS, two clarifications are needed. First, we must distinguish between private and public domains. Traditional, patient-centered bioethics focused on clinician–patient interactions should still guide decisions in the private domain of individual care, particularly around informed consent, confidentiality, and respect for persons. Our focus here, however, is the public domain: the shared spaces and infrastructure of the hospital in which AIS would operate. Second, AIS must be understood as a public good; its implementation should not benefit individual patients alone, but rather improve safety, quality, and efficiency at a systems level.

The Institute of Medicine describes public health as the effort to "ensure societal conditions under which people can lead healthier lives, mitigating threats to our health through collective action aimed at the community". Kass, in turn, offers a six-question framework to guide ethical analysis of such interventions: What are the public health goals of the proposed program? How effective is the program in achieving its stated goals? What are the known or potential burdens? Can burdens be minimized, and are there alternatives? Is the program implemented fairly? How can the benefits and burdens of a program be fairly balanced? Within a public health framework, surveillance, data collection, and reporting take on a utilitarian cast: such actions are deemed to confer a greater benefit to the overall health of the population than the burdens they impose on individuals. We acknowledge that public health initiatives and AI/AMI operate under different sets of laws and regulations that support their respective mandates.

What Are the Public Health Goals of the Proposed Program? How Effective Is the Program in Achieving Its Stated Goals?

Applying Kass's framework, one goal of AIS is earlier detection of clinical deterioration and reduction of preventable harm. Consider the use case of AIS in intensive care units (ICUs), where the sickest patients require minute-by-minute monitoring as a matter of life and death. Given these labor-intensive tasks, Dai and colleagues implemented an initial effort in the ICU to improve clinical detection and reduce documentation burden. They installed two cameras in eight ICU rooms to continuously collect image data on vital signs, ventilators, infusions, patient mobility, therapeutic interventions, and the like for algorithm development. The investigators updated their notice of privacy practices, and patients were made aware that image data would be collected as part of their routine care, appealing to a utilitarian rationale commonly used in public health research. Patients were informed these data would be used for clinical and research purposes. Additional privacy-preserving measures were taken to mask faces and sensitive areas of the images. Even with these mitigation measures, the investigators acknowledged that errors in masking could not be entirely eliminated. Because the collected images constitute protected health information (PHI) under HIPAA, research use of these data also required storage on a secure server. As an initial trial, further development and data collection will be required to demonstrate effectiveness, but the effort shows significant promise of what is possible.

What Are the Known or Potential Burdens? Can Burdens Be Minimized, and Are There Alternatives?

The burdens associated with AIS are certainly present, and a public health ethics lens highlights why they deserve serious inquiry: not only because of the privacy and confidentiality considerations described above, but also because of the implications for liberty, self-determination, and justice. Scaling such systems to routine clinical use implies large, multi-institutional data infrastructures whose burdens and benefits may be unevenly distributed. Kass's framework directs attention to whether these burdens fall disproportionately on certain patient populations or staff groups, and whether they are justified by demonstrated improvements rather than speculative gains. In keeping with public health ethics more broadly, the aim should be to favor approaches that impose fewer infringements on liberty, privacy, and justice without reducing benefit.

There is also a high risk of dual use of AIS, resulting in intentional or unintentional misuse. In 2020, the NYPD used facial recognition for the mass surveillance of Black Lives Matter protestors, further exacerbating systemic discrimination. Trotsyuk et al. outline concerns surrounding authoritarian surveillance and abuse of privacy, data misuse, and worsening inequities in three domains: drug and chemical discovery, generative models for synthetic data, and ambient intelligence. They detail a multi-pronged framework to mitigate these risks by relying on existing ethical frameworks, regulatory measures, pre-built AI solutions, or design-specific solutions. If these avenues prove inadequate, they recommend that researchers look to alternative approaches to accomplish their goals.

Is the Program Implemented Fairly? How Can the Benefits and Burdens of a Program Be Fairly Balanced?

Fairness and the balance of burdens and benefits must also be confronted directly for AIS. We endorse Herington and Cho's justice-first approach, but argue that it should be operationalized within Kass's public health framework rather than in abstract principle. From a procedural justice perspective, framing AIS as a public health intervention has practical implications for governance. Oversight should look less like ad hoc technology adoption and more like a coordinated large-scale screening or environmental health program, with formal institutional review processes that include ethics and privacy experts and stakeholders from across the institution, including patient representatives. These processes should aim to maximize benefit and minimize harm, and should remain iterative in nature. They should make explicit the goals of AIS deployment, specify which metrics constitute success, commit to ongoing monitoring for unintended consequences, and establish clear mechanisms through which concerns can be raised and addressed in a timely manner.

Distributive justice raises a different set of questions about balancing the benefits and burdens of such a technology. Some questions that may arise: Who and what will AIS track? How will outputs be used: will AIS serve quality improvement and patient safety functions, or also inform performance evaluations? While institutional values and needs inevitably shape many of these choices, they should not be left to a few decision-makers; ethicists can play a key role in clarifying options, outlining tradeoffs, and proposing safeguards. A public health framework insists that such decisions be made transparently, with explicit attention to how benefits and burdens are distributed across different groups rather than simply accepted as collateral effects of innovation. It also underscores a basic proportionality claim: the greater the burdens imposed by AIS, the stronger the justification needed to demonstrate an even greater benefit.

Herington and Cho are right to push beyond traditional bioethics and place justice at the center of ethical evaluation of AIS. An overarching public health ethics framework complements their justice-first approach by providing a concrete, operational set of questions that institutions can apply when deciding whether and how to deploy AIS. It acknowledges that AIS is not just another clinical tool, but a socio-technical infrastructure that alters the conditions of care for everyone who enters the space. Treating AIS as a public good for public health, rather than a series of isolated clinical innovations, may better align with the scale of its promise—and its risks.

Kate Luenprakansit, MD and Kevin Schulman, MD, MBA

