Artificial Intelligence in Health Care
This Bulletin is brought to you by AHLA’s Fraud and Abuse Practice Group.
- August 19, 2021
- Michelle Frazier, Advocate Aurora Health Care
- Carly Boder, Marquette University Law School
- Katie Muth, Michael Best & Friedrich LLP
Health law is a dynamic practice because of the changing landscape of health care. One of the latest drivers of this change is Artificial Intelligence (AI). AI is the “study and design of intelligent agents or computer systems that perceive their environment in some manner and respond with actions to maximize their chances of success.”[1] AI’s current role in health care includes applications in diagnostic and treatment decisions, patient interaction, and administrative compliance.[2] The use of data in AI initiatives is boundless, bringing enormous potential for innovation. That potential, however, must be weighed against the risks associated with AI efforts.
AI in health care operates by applying the principles of AI to large data sets, such as those found in electronic health records, to create “if/then” rules that assist with clinical decision support and other matters.[3] Alongside possibilities for innovation, AI creates risks associated with patient privacy and inappropriate uses of patient data, which must be kept top of mind.
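To make the “if/then” pattern concrete, the following is a minimal, hypothetical sketch in Python of a rule-based clinical decision support check; the record fields, thresholds, and alert text are invented for illustration and are not drawn from any particular system.

```python
# Illustrative sketch of a rule-based clinical decision support check.
# All field names and thresholds are hypothetical, for illustration only.

def flag_hypertension_followup(record: dict) -> list[str]:
    """Apply simple if/then rules to an EHR-style record and return alerts."""
    alerts = []
    systolic = record.get("systolic_bp")
    on_antihypertensive = record.get("on_antihypertensive", False)
    # If blood pressure is elevated and the patient is untreated, then suggest review.
    if systolic is not None and systolic >= 140 and not on_antihypertensive:
        alerts.append("Elevated systolic BP without antihypertensive; consider review.")
    return alerts

print(flag_hypertension_followup({"systolic_bp": 152, "on_antihypertensive": False}))
```

In practice, the rules would be far more numerous and derived from large data sets rather than hand-coded, but the basic if/then structure applied to a patient record is the same.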
Value of Artificial Intelligence Data
Although the use of AI in health care is relatively new, proposed uses suggest the collection of large amounts of data and nearly unlimited use potential. First, artificial intelligence can be used for the extraction and structuring of data.[4] In doing so, systems are able to extract qualitative and quantitative measures from clinical data collected from patients and develop treatment suggestions based on individualized patient factors such as insurance coverage, interactions with other medications, adherence to medical advice, and other social determinants.[5] In addition, AI streamlines administrative processes by reconciling gaps in medical records and generating automated entries and responses.[6] The aggregation of thousands of patients’ data into a single record creates streamlined processes for health systems that increase quality of care while maximizing profits.[7]
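As a rough illustration of the extraction step, the sketch below pulls a quantitative measure out of unstructured clinical text; the note text and pattern are hypothetical, and the structured output could feed if/then rules like the one sketched above.

```python
import re

# Hypothetical clinical note; in practice this text would come from an EHR.
NOTE = "Pt reports good adherence. BP 138/86 today; continue current plan."

# Extract a quantitative measure (blood pressure) from unstructured text.
match = re.search(r"BP\s*(\d{2,3})/(\d{2,3})", NOTE)
if match:
    systolic, diastolic = int(match.group(1)), int(match.group(2))
    # Structured output suitable for downstream decision support rules.
    print({"systolic_bp": systolic, "diastolic_bp": diastolic})
```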
Prior to the evolution of AI technology, people rarely thought of patient data as something with independent value. However, as the use of AI grows and large-scale data sets emerge, the patient data itself may come to hold tremendous value.
There are two primary approaches to quantifying the value of patient data.[8] The first is a market-based approach that calculates a “per record” valuation of a data set.[9] This is the type of valuation used to calculate the value of patient information in the context of privacy breaches.[10] The second is an income-based approach that quantifies value based on the economic value generated from the data.[11] In thinking about data as remuneration under the fraud and abuse laws, it is appropriate to use the income-based approach to quantify patient data. For example, an analysis of patient records held by the United Kingdom’s National Health Service (NHS) suggests that the 55 million patient records the NHS currently holds have a market value of several billion pounds.[12]
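To illustrate how the two approaches differ, here is a simple arithmetic sketch; every figure (per-record price, projected income, discount rate) is invented for illustration and is not drawn from the EY analysis or the NHS example.

```python
# Hypothetical sketch contrasting the two valuation approaches.
# All figures are invented for illustration only.

num_records = 55_000_000          # e.g., a national-scale data set

# Market-based approach: a "per record" price times the number of records.
price_per_record = 50.0           # hypothetical per-record market price
market_value = num_records * price_per_record

# Income-based approach: the economic value the data is expected to generate,
# e.g., projected annual income from data-driven products, discounted over time.
annual_income = 400_000_000.0     # hypothetical income generated from the data
discount_rate = 0.08
years = 10
income_value = sum(annual_income / (1 + discount_rate) ** t for t in range(1, years + 1))

print(f"Market-based estimate: {market_value:,.0f}")
print(f"Income-based estimate: {income_value:,.0f}")
```

Note how the income-based estimate turns on what the data is expected to earn rather than on a per-record price, which is why it better captures data’s value when thinking about data as remuneration.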
Fraud and Abuse Considerations
The federal Stark Law was created to combat the rising costs of health care and prohibits physicians from referring patients for “designated health services” to any entity with which they have a financial relationship.[13] While there are several exceptions to the Stark Law, the law is a strict liability statute,[14] meaning liability attaches whenever an exception is not met, even for inadvertent violations.[15] It is important to understand how the Stark Law defines remuneration when assessing the value of data used in AI initiatives. A compensation arrangement involves any remuneration between a physician (or an immediate family member of such physician) and an entity.[16] Remuneration is “any remuneration, directly or indirectly, overtly or covertly, in cash or in kind,” subject to narrow statutory carve-outs such as “forgiveness of amounts owed for inaccurate tests… the provision of items, devices, or supplies… used solely to collect, transport, process, or store specimens for the entity providing the item, device, or supply, or order or communicate the results of tests or procedures for such entity.”[17] Although not much about the Stark Law is simple, the definition of remuneration can be simply stated as anything of value,[18] which could arguably include the value derived from data exchanged as part of a physician arrangement.
The definition of remuneration is similarly broad under the federal Anti-Kickback Statute (AKS),[19] which prohibits the use of remuneration to induce or reward patient referrals. The AKS defines remuneration to include “the waiver of coinsurance and deductible amounts… and transfers of items or services for free or for other than fair market value.”[20] Generally speaking, the AKS safe harbors align with the exceptions under the Stark Law. Unlike the Stark Law, which is a strict liability statute, the AKS requires proof of intent to establish culpability.[21]
With the definition of remuneration broadly interpreted and applied, the growing value of AI suggests that arrangements involving the exchange of AI should be approached with caution and analyzed under both the Stark Law and the AKS. Under the Stark Law, AI, whether expressed as proprietary, non-public algorithms, information technology, software, or otherwise, may fit the definition of “any remuneration, directly or indirectly, overtly or covertly, in cash or in kind,” and under the AKS, the uncompensated exchange of AI could likewise be classified as a transfer of “items or services for free or for other than fair market value.”
Conclusion and Practical Considerations
Both the Stark Law and the AKS are associated with payments for referrals, usually in the context of inflated compensation or kickbacks for business. But remuneration comes in many forms and could include data.[22] Framed a certain way, providing data, or products or services derived from it, to referring providers could be viewed in the same light as providing more familiar and traditional forms of remuneration, such as cash or cash equivalents.
Consider the hypothetical case of a physician on a hospital’s medical staff who requests that the hospital provide her with a large amount of de-identified MRI scan data. The physician would like to use the data for independent AI research, which she would then incorporate into her own practice to improve outcomes and potentially earn better reimbursement from payers. The physician could also create AI products to market and license commercially. This data exchange likely does not raise privacy concerns because the data would be de-identified, but the data itself is valuable. The independent physician would not otherwise have access to this body of data, and there is no overarching agreement between the hospital and the referring physician to cover this exchange. The question is whether the requested data constitutes remuneration under the federal Stark Law, in which case the arrangement should be structured to meet an applicable Stark exception. There are arguments that it may not be remuneration under Stark, but if it were, the arrangement would, among other things, need to have its material terms documented in a signed writing, and the physician would need to pay fair market value for the MRI data.
As health care continues to expand its use of AI, particularly in population health management and medical innovation, providers should be mindful that the exchange of data needed for these purposes should be reviewed not just from a privacy perspective but from a fraud and abuse perspective as well.
Key considerations in the fraud and abuse arena when assessing AI in your environment might include:
- What is your policy or practice for assessing and evaluating requests for access to data from persons or entities who are either referral sources or depend on you for referrals?
- What is your selection process for deciding who gets access and who does not?
- Will the requestor use the data to develop and market products derived, in part, from the data?
- What is your policy or practice for appraising the “value” of the data, for purposes of determining fair market value, if the persons or entities requesting the data are either referral sources or depend on you for referrals?
[1] John Glaser, Understanding Artificial Intelligence in Health Care, American Hospital Association (Jan. 23, 2018), https://www.aha.org/news/insights-and-analysis/2018-01-23-understanding-artificial-intelligence-health-care.
[2] Thomas Davenport & Ravi Kalakota, The Potential for Artificial Intelligence in Healthcare, Future Healthcare J. (June 2019), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6616181/; Rebecca Robbins, An Experiment in End-of-Life Care: Tapping AI’s Cold Calculus to Nudge the Most Human Conversations, STAT (July 1, 2020).
[3] Thomas Davenport & Ravi Kalakota, The Potential for Artificial Intelligence in Healthcare, Future Healthcare J. (June 2019), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6616181/.
[4] Glaser, supra note 1.
[5] Id.
[6] Id.
[7] Id.
[8] Chris Wayman & Natasha Hunerlach, Realizing the Value of Health Care Data: A Framework for the Future, EY Health Sciences and Wellness, https://assets.ey.com/content/dam/ey-sites/ey-com/en_gl/topics/life-sciences/life-sciences-pdfs/ey-value-of-health-care-data-v20-final.pdf.
[9] Id.
[10] Id.
[11] Id.
[12] Id.
[13] 42 U.S.C. § 1395nn(a)(1).
[14] 3 Health L. Prac. Guide § 43:23 (2020).
[15] Id.
[16] 42 U.S.C. § 1395nn(h)(1)(A).
[17] 42 U.S.C. § 1395nn(h)(1)(B) – (C).
[18] 42 U.S.C. § 1395nn(h)(1)(C).
[19] 42 U.S.C. § 1320a-7a(i)(6).
[20] A Roadmap for New Physicians: Fraud and Abuse Laws, Office of Inspector General, U.S. Department of Health and Human Services, https://oig.hhs.gov/compliance/physician-education/01laws.asp; 42 U.S.C. § 1320a-7b(b).
[21] Id.
[22] A Roadmap for New Physicians: Fraud and Abuse Laws, Office of Inspector General, U.S. Department of Health and Human Services, https://oig.hhs.gov/compliance/physician-education/01laws.asp.