
March/April 2024, Volume 5, Issue 2
Health Law Connections

Artificial Intelligence and the Standard of Care in the Post-Acute and Long Term Care Setting

This Feature Article is brought to you by AHLA's Post-Acute and Long Term Services Practice Group.
  • March 01, 2024
  • Jeff Duncan, Lane Powell PC

On October 30, 2023, President Biden issued an Executive Order titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”1 The Executive Order begins with the pronouncement that “Artificial intelligence (AI) holds extraordinary potential for both promise and peril,” a mixed shadow it casts broadly, from “hiring to housing to healthcare.” The Executive Order came less than a month after the U.S. Senate Committee on Finance announced an investigation prompted by “[a]n alarming report” by the Department of Health and Human Services (HHS) Office of Inspector General “that numerous Medicaid Managed Care Organizations (MCOs) had staggeringly high rates of denial of health services for patients.”2 The Finance Committee expressly singled out concerns related to Medicaid MCOs’ use of AI tools as part of prior authorization processes, stating it was seeking “documentation and answers to a series of questions, including . . . a description of all algorithms, including machine learning and artificial intelligence algorithms, that a company used in prior authorization decisions separated by approvals, partial denials, and full denials for 2018-2022 for any and all MCOs that a company operated within that time frame.”3 These governmental actions mirror public distrust of AI, which arises for diverse reasons: individual privacy concerns, real or perceived conflicts of interest between AI tool designers and those impacted by AI tools, and frequent reports of AI-related errors that amplify these concerns.4 Taken together, these governmental interventions and ongoing (if not increasing) public distrust highlight for the senior living and long term care (SL/LTC) industry that the rapidly expanding development, experimentation, and use of AI bring optimism for potential advancements in resident and patient care and services, but not without risks and uncertainty.

“Promise” and Potential “Peril” for Patient and Resident Care

For SL/LTC providers and those who receive SL/LTC care and services (as well as anyone who may one day receive such care and services), the “promise” of AI appears vast. AI is often touted as providing effective ways, in combination with related devices, to monitor health and detect (or even predict) changes in condition. Sensors, including those built into “wearable”5 devices, can monitor wandering,6 breathing7 and cardiovascular issues,8 wounds,9 and even sleep patterns.10 Similarly, AI is said to provide timely assistance with medication management,11 including something as straightforward, but vital, as reminders for when to take medication, as well as notifying providers when a medication administration has been missed12 or even when a dosage varied from a physician’s orders.13 By utilizing analytics to interpret patients’ preferences related to meals, social activities, and treatment options, AI is also said to offer the “promise” of better compliance with SL/LTC providers’ duty to provide patient-centered care and to uphold resident rights.14

But for SL/LTC providers, and for those who provide SL/LTC-related services (such as attorneys and insurers), AI’s increasingly rapid development15 and associated concerns also present potential “perils.” For example, an insurance-focused class action initiated in late 2023 alleges the “illegal deployment of artificial intelligence (AI) in place of real medical professionals to wrongfully deny elderly patients care owed to them under Medicare Advantage Plans by overriding their treating physicians’ determinations as to medically necessary care based on an AI model that Defendants know has a 90% error rate,” and that the Defendants “systematically deploy an AI algorithm to prematurely and in bad faith discontinue payment for healthcare services for elderly individuals with serious diseases and injuries.”16 While this lawsuit alleges a use of AI that deprives some patients of necessary health care treatment, risks also exist that AI tools could cause some patients to receive unnecessary, and potentially harmful, treatment.

Researchers assessing the use of an AI tool to detect sepsis concluded that the tool generated thousands of false positive sepsis alerts.17 For SL/LTC residents at risk for sepsis, timely treatment is essential. Because of the extremely serious nature of sepsis (“a condition in which bacteria invades the bloodstream . . . and can lead to septic shock and death if not treated”),18 SL/LTC providers must be aggressive in identifying patients at risk. But for persons who reside in SL/LTC settings, a transfer to a hospital emergency room for sepsis testing, potential admission, and treatment can itself pose risks, such as transfer trauma.19 SL/LTC residents who are transferred also face potential exposure to viruses present at the hospital,20 which, in turn, might create a risk of infectious disease exposure for other SL/LTC residents who encounter the hospitalized patient post-discharge. Accordingly, SL/LTC providers who utilize AI tools to help identify residents at risk for sepsis must balance the risks to residents who are not promptly transferred to an emergency room for necessary treatment against the risks to “false positive” residents who are transferred unnecessarily.

AI Tools and the Standard of Care

One method to assist SL/LTC providers in weighing the benefits and risks of AI tool utilization is to consider whether use of the specific AI tool meets the standard of care. Facilities participating in the Medicare program, for example, must meet quality of care standards: “Based on the comprehensive assessment of a resident, the facility must ensure that residents receive treatment and care in accordance with professional standards of practice, the comprehensive person-centered care plan, and the resident’s choices . . . .” These standards apply broadly across the spectrum of care, from vision and hearing to skin integrity to accidents, among other areas.21 Not surprisingly, then, in regulatory appeals in which the Centers for Medicare & Medicaid Services (CMS) alleges an SL/LTC resident suffered harm as a result of nursing facility regulatory noncompliance, HHS Departmental Appeals Board decisions adverse to nursing facilities often find a lack of substantial regulatory compliance where the evidence shows the facility breached professionally recognized standards of care.22 Thus, an AI tool that can be shown, for example, to analyze resident accident-related data and mitigate risk of injury in a manner that meets the professionally recognized standard of care may tip the scale toward its utilization in an SL/LTC environment. On the other hand, AI tools that have not yet been shown to meet the standard of care may cause providers to delay introducing them into their SL/LTC settings.

Determining which health care treatments and technologies meet the standard of care, especially rapidly developing ones, can be challenging. When such standard of care issues are litigated and involve “somewhat alien terminology” and “technological complexities,” expert testimony on these issues is usually required.23 The use of AI in SL/LTC involves both AI-specific “alien terminology” (such as “bias,” “emergent behavior,” “generative AI,” “hallucination,” “large language model,” “natural language processing,” “neural network,” “parameters,” “reinforcement learning,” and “transformer model”24) and AI-specific “complexities” related to such things as imaging, diagnostics, and workflow optimization.25 It therefore seems to fall squarely within the types of matters where an expert opinion could assist in analyzing whether the use of a particular AI tool, for a specific purpose and with respect to specific patients, could meet the standard of care.

Further complicating the standard of care analysis in the context of SL/LTC settings and AI tools, health care-related standards of care are often fluid, evolving over time based on various factors, some of which come to light only after considerable time has elapsed. As a recent and disconcerting example, the standard of care for the treatment of pain has evolved significantly over several decades. In a 1996 joint statement, the American Academy of Pain Medicine and the American Pain Society opined: “The trend is to adopt laws or guidelines that specifically recognize the use of opioids to treat intractable pain. These statements serve as indicators of increased public awareness of the sequelae of undertreated pain and help clarify that the use of opioids for the relief of chronic pain is a legitimate medical practice.”26 Similarly, in 1998, the Federation of State Medical Boards adopted “Model Guidelines for the Use of Controlled Substances for the Treatment of Pain,” which approved (some may say aggressively approved) opioid use for chronic pain.27 Even as concerns regarding opioid abuse increased, concerns that chronic pain was not being adequately treated persisted.28 Over time, however, the related standard of care evolved: “Doctors began to take a more careful approach to opioid prescribing, and opioid prescribing rates began to decline.”29 This example is not intended to suggest that whatever risks may be associated with AI tools are akin to the risks associated with opioid use. Rather, because the use of AI tools in health care and related services is relatively new and rapidly developing, SL/LTC providers might reasonably expect the standard of care for AI tool use in SL/LTC settings to evolve as well. That evolution may be driven not only by adjustments to AI health care tools designed to address real or perceived “perils,” but also by shifting public perceptions of AI tools in SL/LTC settings as AI becomes increasingly commonplace (and potentially more or less trusted) over time.

Evolving Regulatory Landscape

As AI tools used in SL/LTC settings advance, it is possible, if not likely, that regulations will also evolve and help clarify the standards of care for the use of specific AI tools. A common concern about AI tools in SL/LTC settings (and how such tools access, analyze, store, and share patient health care data) is that patient privacy may be compromised. For example, “AI-enhanced” sensors have been negatively associated not only with concerns of potential “depersonalization of care,” but also with “concerns about surveillance and data confidentiality.”30 With such privacy-related concerns in mind, however, AI tools could be designed, built, and utilized to meet, for example, the standards for safeguarding protected health information under the Health Insurance Portability and Accountability Act (HIPAA) privacy rule,31 as well as regulations that require SL/LTC providers to protect patients’ rights to privacy and dignity.32 Thus, to the extent such rules and regulations help establish a standard of care, SL/LTC providers who utilize AI tools specifically designed to meet these regulatory requirements might have reasonable assurance that they are taking appropriate steps to satisfy privacy-related standards of care.

In 2023, for example, the HHS Office of the National Coordinator for Health Information Technology (ONC) issued a “Notice of Proposed Rulemaking (NPRM) for public comment on proposals to implement certain provisions of the 21st Century Cures Act (Cures Act) and make several enhancements to the ONC Health IT Certification Program (Certification Program) to advance interoperability, improve transparency, and support the access, exchange, and use of electronic health information.”33 The proposed rule leans heavily into AI issues, stating, for example, that the “National Academy of Medicine (NAM) described in a 2019 report how predictive models and other forms of artificial intelligence (AI) have the potential to represent the ‘payback’ of using health IT ‘by facilitating tasks that every clinician, patient, and family would want, but are impossible without electronic assistance.’”34 The proposed rule also emphasizes the need for balancing benefits and risks when regulating AI, noting, for example, 2020 guidance issued by the Office of Management and Budget directing that “[w]hen considering regulations or policies related to AI applications, agencies should continue to promote advancements in technology and innovation, while protecting American technology, economic and national security, privacy, civil liberties, and other American values, including the principles of freedom, human rights, the rule of law, and respect for intellectual property.”35

Such rules and regulations emphasizing “transparency” could assist the SL/LTC industry in analyzing the “promise” and “perils” associated with the use of AI tools in providing care and services, and may also help build patients’ and residents’ trust in AI.

Conclusion

Even though AI technology is not brand new from a broad historical perspective,36 its use in SL/LTC settings is only in its infancy.37 Whether and when CMS might include standards for the use of AI tools in its conditions of participation remains to be seen. However, as the population of individuals requiring SL/LTC care and services continues to grow, and as staffing challenges for SL/LTC providers are expected to continue, it seems inevitable that AI tool use in SL/LTC settings will increase. As part of that transition, SL/LTC providers will likely need to weigh the risks and benefits of each AI tool. Assessing whether the use of a specific AI tool, for a specific SL/LTC resident, and for a specific purpose meets the standard of care will likely be a key component of that risk/benefit analysis and, ultimately, of the decision whether to adopt such AI tools in SL/LTC settings.


Jeff Duncan is a Shareholder at Lane Powell PC. Jeff represents health care companies and providers, with an emphasis on the senior housing, post-acute, and long term care industry. Jeff advises clients with regard to regulatory compliance, risk management, HIPAA/privacy issues, employment matters, and complex involuntary discharges. He also represents clients in administrative hearings against agency actions and in civil court against wrongful death and other claims.


This Feature Article is brought to you by the Post-Acute and Long Term Services Practice Group: Gabriela Sanchez, Lane Powell PC (Chair); Denise Bloch (Vice Chair—Education); Peggy Kozal, Dickinson Wright PLLC (Vice Chair—Member Engagement); Mysty Blagg, Shipman & Wright, LLP (Vice Chair—Education); Alan Horowitz, Arnall Golden Gregory LLP (Vice Chair—Education); and Joseph Zielinski, CarDon & Associates (Vice Chair—Education).

1 Exec. Order No. 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 88 Fed. Reg. 75191 (Nov. 1, 2023).


2 U.S. Senate Committee on Finance, Press Release, Wyden, Pallone Launch Investigation into Medicaid Managed Care Plan Prior Authorization Practices (Oct. 3, 2023), https://www.finance.senate.gov/chairmans-news/wyden-pallone-launch-investigation-into-medicaid-managed-care-plan-prior-authorization-practices.

3 Id. (emphasis added).

4 Cynthia Dwork and Martha Minow, Distrust of Artificial Intelligence: Sources & Responses from Computer Science & Law, 151 Daedalus 309-321 (2022), https://direct.mit.edu/daed/article/151/2/309/110609/Distrust-of-Artificial-Intelligence-Sources-amp.

5 Nat’l Insts. of Health, Transformative Technologies: Wearable Sensors (Nov. 16, 2023), https://www.nih.gov/about-nih/what-we-do/nih-turning-discovery-into-health/transformative-technologies/wearable-sensors.

6 Adesh Kumar Agrawal et al., Approach to Management of Wandering in Dementia: Ethical and Legal Issue, 43 Indian J. of Psychological Med. 53-59 (2021), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8543604/.

7 Junaid Bajwa et al., Artificial intelligence in healthcare: transforming the practice of medicine, 8 Future Healthcare J. 188–94 (2021), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8285156/.

8 Wearable technology and the cardiovascular system: the future of patient assessment, The Lancet Digital Health (July 2023), https://www.thelancet.com/journals/landig/article/PIIS2589-7500(23)00087-0/fulltext.

9 D.M. Anisuzzaman et al., Image-Based Artificial Intelligence in Wound Assessment: A Systematic Review, 11 Adv. Wound Care 687-709 (2021), https://pubmed.ncbi.nlm.nih.gov/34544270/.

10 Nathaniel F. Watson and Christopher R. Fernandez, Artificial intelligence and sleep: Advancing sleep medicine, 59 Sleep Med. Rev. (2021), https://pubmed.ncbi.nlm.nih.gov/34166990/.

11 Marja Härkänen et al., Artificial Intelligence for Identifying the Prevention of Medication Incidents Causing Serious or Moderate Harm: An Analysis Using Incident Reporters’ Views, 18 Int’l J. Environ. Res. Pub. Health (2021), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8431329/.

12 Daniel L. Labovitz et al., Using Artificial Intelligence to Reduce the Risk of Nonadherence in Patients on Anticoagulation Therapy, 48 Stroke 1416–1419 (2017), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5432369/.

13 Jessica Kent, Artificial Intelligence Detects Medication Administration Errors, Health IT Analytics (Mar. 25, 2021), https://healthitanalytics.com/news/artificial-intelligence-detects-medication-administration-errors.

14 Aurelia Sauerbrei et al., The impact of artificial intelligence on the person-centered, doctor-patient relationship: some problems and solutions, 23 BMC Med. Informatics and Decision Making (2023), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10116477/.

15 Adam Bohr and Kaveh Memarzadeh, The rise of artificial intelligence in healthcare applications, Artificial Intelligence in Healthcare 25-60 (2020), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7325854/.

16 Lokken v. UnitedHealth Group, Inc., No. 0:23-cv-03514-WMW-DTS (D. Minn., filed Nov. 4, 2023) (emphasis added).

17 Liz Richardson, Artificial Intelligence Can Improve Health Care—but Not Without Human Oversight: Study of sepsis detection software underscores need for guidance on implementation and monitoring, Pew, Dec. 16, 2021, https://www.pewtrusts.org/en/research-and-analysis/articles/2021/12/16/artificial-intelligence-can-improve-health-care-but-not-without-human-oversight.

18 Antelope Valley Convalescent Hosp., DAB No. CR511, at 13 (1997).

19 Kyrsten Costlow and Patricia A. Parmelee, The impact of relocation stress on cognitively impaired and cognitively unimpaired long-term care residents, 24 Aging & Mental Health 1589-1595 (2019), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7048638/.

20 Caroline Quach et al., Risk of infection following a visit to the emergency department: a cohort study, 184 Canadian Med. Ass’n J. 232–239 (2012), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3291696/.

21 42 C.F.R. § 483.25—Quality of care.

22 See, e.g., Leisure Glen Post Acute Care Ctr., 2022 HHSDAB LEXIS 40, *43, DAB No. 6024 (2022) (“CMS claims that the standard of care at a SNF is for licensed medical professionals to care for its residents, and that, by entrusting Resident 2’s care to a student, Petitioner breached the standard of care . . .”).

23 Cruz v. Am. Nat’l Red Cross, No. 19-cv-1107-EFM, 2021 U.S. Dist. LEXIS 94694, at *4 (D. Kan. May 19, 2021).

24 Adam Pasick, Artificial Intelligence Glossary: Neural Networks and Other Terms Explained, N.Y. Times, Mar. 27, 2023.

25 Sara Gerke et al., Ethical and legal challenges of artificial intelligence-driven healthcare, Artificial Intelligence in Healthcare 295-336 (2020), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7332220/.

26 Pharmaceutical (Re)Capture, 20 Yale J. Health Pol’y L. & Ethics 146, 198 (2021), citing Am. Acad. of Pain Med. and the Am. Pain Soc’y, The Use of Opioids for the Treatment of Chronic Pain, Consensus Statement, 6 J. Pharm. Care Pain & Symptom Control 97 (1998).

27 Nicholas E. Hagemeier, Introduction to the Opioid Epidemic: The Economic Burden on the Healthcare System and Impact on Quality of Life, 24 Am. J. Managed Care (May 11, 2018) (quoting the Board as follows: “Physicians should not fear disciplinary action from the Board for prescribing, dispensing or administering controlled substances, including opioid analgesics, for a legitimate medical purpose and in the course of professional practice”), https://www.ajmc.com/view/intro-opioid-epidemic-economic-burden-on-healthcare-system-impact-quality-of-life.

28 A 2011 report authored by the Institute of Medicine (US) Committee on Advancing Pain Research, Care, and Education, entitled Relieving Pain in America: A Blueprint for Transforming Prevention, Care, Education, and Research (2011), https://pubmed.ncbi.nlm.nih.gov/22553896/, found that pain was a “public health challenge” that “affects tens of millions of Americans and contributes substantially to morbidity, mortality, disability, demands on the health care system, and significant economic burdens for the nation. The prevalence of chronic pain is growing and likely to continue to do so.”

29 City & Cty. of S.F. v. Purdue Pharma L.P., 620 F. Supp. 3d 936, 944 (N.D. Cal. 2022).

30 Kate Loveys et al., Artificial intelligence for older people receiving long-term care: a systematic review of acceptability and effectiveness studies, 3 Lancet Healthy Longev. 286-297 (2022), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8979827/.

31 45 C.F.R. pt. 160 and subpts. A and E of pt. 164.

32 42 C.F.R. § 483.10.

33 HHS ONC, Press Release, Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) Proposed Rule (June 20, 2023), https://www.hhs.gov/about/news/2023/04/11/hhs-propose-new-rule-to-further-implement-the-21st-century-cures-act.html.

34 88 Fed. Reg. 23746, 23744 (Apr. 18, 2023) (citing Michael Matheny, et al., Artificial intelligence in health care: the hope, the hype, the promise, the peril, Washington D.C., Nat’l Acad. of Med. (2019)), https://www.federalregister.gov/documents/2023/04/18/2023-07229/health-data-technology-and-interoperability-certification-program-updates-algorithm-transparency-and.

35 Id. at 23775 (citing OMB–EOP—Memorandum for the Heads of Executive Departments and Agencies on Guidance for Regulation of Artificial Intelligence, M–21–06, p. 6 (Nov. 17, 2020)).

36 Nat’l Inst. of Justice, A Brief History of Artificial Intelligence (Sept. 30, 2018), https://nij.ojp.gov/topics/articles/brief-history-artificial-intelligence.

37 Fraser Robinson and Goldie Nejat, An analysis of design recommendations for socially assistive robot helpers for effective human-robot interactions in senior care, 9 J. Rehabil. and Assistive Technol. Eng. 1–17 (2022) (“It is important to note that this field of [human robot interaction] is still in its infancy . . .”), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9208044/#bibr9-20556683221101389.
