At the Health Care Compliance Association’s recent Annual Compliance Institute in Nashville, artificial intelligence (AI) dominated the conversation.
A common theme: how does the compliance officer keep pace with this rapidly evolving technology and its use within the organization? We tackle this question in a special two-part series on how to incorporate AI into your organization.
This week, in Part I of our series, I discuss the compliance risks posed by AI and the policies an organization should have in place to manage those risks. The good news: you probably already have them! They include:
- Patient Safety & Quality Assurance Policies
- Acceptable Use Policy
- Medical Record Documentation Policies
- Use & Disclosure of Protected Health Information
- HIPAA Security Risk Assessment
- Informed Consent & Research Policies
- Recording & Surveillance Policies
- Employee Handbook
- Contract Review Policy
- Compliance Policies Relating to Billing, Coding & Kickbacks
Although a provider shouldn't need to create an entirely new set of policies to implement AI, these existing policies may require edits to address the unique issues AI raises within the organization. Additionally, providers should consider adopting a standalone AI policy that cross-references the relevant policies and provides more detail on how the provider implements AI technologies across the organization. Check out our sample AI policy on our resources page.
Next week, in Part II of our series, we’ll discuss how to operationalize these policies to mitigate compliance risks.