Fair, Appropriate, Valid, Effective, and Safe
At the end of October, the Biden Administration released its comprehensive executive order on Safe, Secure, and Trustworthy Artificial Intelligence. Last week, the White House followed up with fresh commitments to enhance AI transparency, risk management, and accountability within the healthcare sector. This move came in the form of voluntary pledges from more than two dozen prominent healthcare organizations to uphold the principles of “safe, secure, and trustworthy AI” in healthcare.
The executive order, released on October 30, includes several healthcare-specific provisions, directing the U.S. Department of Health and Human Services to establish a mechanism for collecting reports on “harms or unsafe healthcare practices.”
On December 14, coinciding with the inaugural day of the HIMSS AI in Healthcare Forum in San Diego, the Biden Administration unveiled voluntary commitments on healthcare AI safety and security from a cohort of 28 leading private-sector providers and payers. These organizations pledged to foster more transparent and trustworthy practices in developing and using AI-based tools, aligning their efforts with responsible model development.
These commitments aim to align industry actions on AI with the “FAVES” principles, emphasizing that AI should lead to healthcare outcomes that are Fair, Appropriate, Valid, Effective, and Safe. National Economic Advisor Lael Brainard, Domestic Policy Advisor Neera Tanden, and Director of the Office of Science and Technology Policy Arati Prabhakar emphasized the importance of these commitments in advancing health equity, expanding access to care, improving affordability, coordinating care, reducing clinician burnout, and enhancing the overall patient experience.
As part of the agreement, healthcare organizations have committed to informing patients and customers when presenting substantially AI-generated content that humans do not review or edit.
AI’s potential to help people cannot be overstated, and this new agreement with the White House only reinforces that promise. RazorMetrics was founded on the principles of fairness, affordability, validity, effectiveness, and safety. Our AI and machine learning were built with a deep respect for, and understanding of, the physician-patient relationship and experience. Using AI, our proprietary codex, and our client’s formulary, we ensure fairness for patients by giving physicians a frictionless way to prescribe the most affordable drug. Burnout is lessened because we engage physicians within their normal workflows, with no extra clicks in the EHR and no third-party apps or websites. Our development and clinical teams review our processes and clinical recommendations monthly, ensuring the work is valid and safe.
RazorMetrics makes physicians a partner in drug savings, not a casualty of AI.
RazorMetrics’ drug-saving solution is highly effective, boasting a 74% response rate from physicians.
Though the White House highlighted the potential risks associated with AI-enabled tools in clinical decision-making, the real risk from AI is eliminating physicians from the decision-making process. The private sector’s commitments represent a critical step in the broader effort to advance AI for the health and well-being of Americans, and RazorMetrics is fully behind this effort.
The 28 Providers and Payers that signed onto the AI commitments:
- Allina Health
- Bassett Healthcare Network
- Boston Children’s Hospital
- Curai Health
- CVS Health
- Devoted Health
- Duke Health
- Emory Healthcare
- Endeavor Health
- Fairview Health Systems
- Hackensack Meridian
- HealthFirst (Florida)
- Houston Methodist
- John Muir Health
- Keck Medicine
- Main Line Health
- Mass General Brigham
- Medical University of South Carolina
- Oscar Health
- OSF HealthCare
- Premera Blue Cross
- Rush University System for Health
- Sanford Health
- Tufts Medicine
- UC San Diego Health
- UC Davis Health
- WellSpan Health