Addressing HIPAA Concerns for AI Tools: What You Need To Know
Anwita
Jan 21, 2025
As artificial intelligence rapidly proliferates into every aspect of technology, it brings possibilities and conveniences once unimaginable.
While these possibilities aid medical staff and patients with speed and accuracy that exceed human capabilities, they also present new risks and realities. This holds especially true when dealing with sensitive medical records that fall within the scope of the HIPAA Privacy Rule.
We wrote this article to give you a complete breakdown of running an AI program that uses protected health information. By the end, you will understand what it means to have AI in your practice, the risks, the grey areas, and how to stay compliant.
It’s not what you thought… the actual use cases
A common misconception about AI in healthcare is that it will replace care workers in the near future. Based on the progress made to date, it would be dishonest to dismiss this as complete mumbo jumbo: AI-powered robots that mimic a surgeon’s hand movements to assist procedures are already in use.
The current discussion of AI, however, is not about man vs. machine. Its present use cases involve supporting medical decisions, improving diagnostic accuracy, and reducing the risk of human error.
One significant way AI is changing the healthcare landscape is by scanning large volumes of data to identify patterns that might escape human detection, helping physicians improve the accuracy of their diagnoses.
Another positive impact of AI is in drug research. Using advanced algorithms, AI systems can predict the outcome when certain chemical compounds interact, catalyzing the process of inventing new drugs and new treatment possibilities.
With the long list of rewards comes a long list of risks. These risks are creating unpredictable and unprecedented challenges for healthcare providers.
Managing risks in the grey
The healthcare industry is one of the most heavily regulated industries in the world. Yet despite being tightly woven around a lengthy list of regulations, it has been the most targeted industry for security breaches for 12 consecutive years.
Introducing AI in healthcare involves new responsibilities, challenges, and questions on safely handling ePHI. Embracing AI means holding yourself accountable for protecting patient privacy.
To be clear, HIPAA does not explicitly mention AI or mandate any regulations around its use. However, HIPAA’s privacy and security rules are stringent: they define the requirements and obligations Covered Entities must follow when sharing, transmitting, and storing ePHI. Non-compliance will land you in trouble with the Office for Civil Rights (OCR), the body responsible for enforcing these rules. And the OCR is not very forgiving when it comes to violations.
Vendors often encounter regulatory hurdles while adopting AI in their systems. One minor slip can trigger a violation and call for scrutiny. This is because using a technology that uses large volumes of data (and, in this case, protected, sensitive data) involves navigating its security and privacy rules.
One way to reduce this risk is to use de-identified PHI. But de-identification degrades data quality, making it suboptimal for improving outcomes and slowing down the learning curve. Balancing patient privacy against keeping data useful for AI is a delicate trade-off. Adding to the complexity, the dynamic nature of AI makes it difficult to maintain continuous compliance.
Another challenge with AI is pinpointing accountability. As AI becomes more autonomous and complex, the question of responsibility becomes a grey area. Despite being just a tool, AI interacts with sensitive health data. Unlike other tools, AI operates on a self-learning model.
This raises the question of agency: though AI is designed and programmed to behave in a certain way, who should be held accountable when it fails to comply with regulations? From a compliance perspective, this is a grey area that necessitates a difficult dialogue between AI developers, healthcare professionals, and regulators.
For vendors offering AI-powered tools or services, implementing additional layers of security to avoid non-compliance with HIPAA and other regulatory frameworks is non-negotiable.
For developers working on AI models where PHI is central, it is critical to program the model so that its interactions with sensitive data remain HIPAA-compliant. Apart from implementing the right technical measures and security controls, developers should take ethical considerations into account and sufficiently de-identify data sets.
Balancing benefits and risks: practical steps to avoid legal mishaps
Working with legally protected data sets and a rapidly evolving field like AI can be tricky. Mistakes are costly – literally. If you are a healthcare provider, understanding the implications of the regulatory landscape is critical to running your business without stepping on the penalty landmine.
So, how can you reap the benefits of AI without risking ethics and compliance?
Start by evaluating your existing policies, agreements, and procedures for data collection, sharing, transmitting, and other use cases. Check if these sufficiently cover or address the use of PHI for AI usage and development. If the policies fail to cover the use cases, create new ones.
If you are a covered entity, review your business associate agreements (BAA) and update the language to cover the risks associated with using PHI in its scope. Another important document to share with other covered entities and business associates is the code of conduct that details how you plan to use PHI for AI development.
Next, move on to the basics of security: encryption and de-identification. Securing ePHI at rest and in transit is a HIPAA requirement under the technical safeguards. Deploy your AI model on a secure server and implement security controls to protect connections with external access points that connect patients to a care tool.
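Protecting ePHI in transit typically means enforcing TLS on every connection. As an illustration only (not a full implementation), here is a minimal Python sketch of a client-side TLS context that refuses legacy protocols and unverified certificates:

```python
import ssl

def make_strict_tls_context() -> ssl.SSLContext:
    """Build a client-side TLS context that enforces certificate
    validation and a modern protocol floor for ePHI in transit."""
    ctx = ssl.create_default_context()            # verifies certificates by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy TLS/SSL versions
    ctx.check_hostname = True                     # reject mismatched hostnames
    ctx.verify_mode = ssl.CERT_REQUIRED           # reject unverifiable peers
    return ctx

ctx = make_strict_tls_context()
print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```

Any socket or HTTPS client wrapped with this context will refuse to talk to endpoints that cannot present a valid certificate over a modern protocol, which is the behavior you want between a patient-facing tool and your AI backend.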
Apart from encryption, de-identification techniques strip data of information that could identify a patient. De-identify your PHI and train the AI model on de-identified data sets. The HHS recognizes two methods for de-identifying data sets: Expert Determination and Safe Harbor.
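To make Safe Harbor concrete, the sketch below drops a few direct identifiers and generalizes others from a hypothetical patient record. The field names are illustrative assumptions, and real Safe Harbor de-identification must remove all 18 identifier categories, not just the handful shown here:

```python
from datetime import date

# Direct identifiers to drop outright (a small subset of Safe Harbor's list)
SAFE_HARBOR_DROP = {"name", "email", "phone", "ssn", "mrn"}

def deidentify(record: dict) -> dict:
    out = {}
    for key, value in record.items():
        if key in SAFE_HARBOR_DROP:
            continue                        # drop direct identifiers
        if key == "zip":
            out[key] = value[:3] + "00"     # generalize ZIP to first 3 digits
        elif key == "birth_date":
            out["birth_year"] = value.year  # keep year only (special rules apply over age 89)
        else:
            out[key] = value
    return out

record = {"name": "Jane Doe", "ssn": "123-45-6789", "zip": "02139",
          "birth_date": date(1980, 5, 17), "diagnosis": "hypertension"}
print(deidentify(record))
# {'zip': '02100', 'birth_year': 1980, 'diagnosis': 'hypertension'}
```

Note that even the ZIP generalization has caveats (the first three digits may only be retained for sufficiently populous areas), which is why Expert Determination exists as the more rigorous alternative.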
A major concern HIPAA seeks to address is authorization, which is detailed explicitly in the Privacy Rule. It ensures that covered entities do not collect, use, share, or disclose medical information without the patient’s consent.
This rule applies with exceptions for treatment, payment, and healthcare operations (TPO). Given that AI training does not fall within the scope of TPO, business associates and covered entities collecting PHI for it must obtain explicit patient consent.
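In practice, that consent requirement becomes a gate in your data pipeline: records without explicit AI-training consent never reach the training set. A minimal sketch, where the consent ledger and the "ai_training" category are assumptions for illustration, not HIPAA-defined terms:

```python
# Hypothetical consent ledger keyed by patient ID
CONSENTS = {
    "p-001": {"treatment": True, "ai_training": True},
    "p-002": {"treatment": True, "ai_training": False},
}

def may_use_for_training(patient_id: str) -> bool:
    # Absent or withdrawn consent means the record is excluded
    return CONSENTS.get(patient_id, {}).get("ai_training", False)

records = [{"patient_id": "p-001"}, {"patient_id": "p-002"}, {"patient_id": "p-003"}]
training_set = [r for r in records if may_use_for_training(r["patient_id"])]
print([r["patient_id"] for r in training_set])  # ['p-001']
```

The important design choice is the default: a patient missing from the ledger (p-003 above) is treated as not having consented, so the pipeline fails closed rather than open.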
While all the steps above help to avoid non-compliance and, ultimately, legal trouble, they are not enough. You are responsible for ensuring that all internal controls and measures are functioning effectively.
Conduct HIPAA risk assessments to identify vulnerabilities and threats to the integrity and confidentiality of your data sets. Assessments are especially critical when 1) changes are introduced to HIPAA’s privacy and security rules, and 2) the AI system undergoes any significant changes.
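A risk assessment often starts with a simple risk register that scores each threat by likelihood × impact so the highest-scoring items get attention first. A toy sketch, where the assets, threats, and scores are illustrative assumptions rather than a prescribed methodology:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    asset: str
    threat: str
    likelihood: int  # 1 (rare) .. 5 (frequent)
    impact: int      # 1 (minor) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

risks = [
    Risk("training data store", "unauthorized access", 3, 5),
    Risk("model API endpoint", "PHI leakage in responses", 2, 5),
    Risk("backup server", "ransomware", 2, 4),
]

# Triage the highest-scoring risks first
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.score:2d}  {r.asset}: {r.threat}")
```

Re-scoring this register whenever the AI system changes materially is one lightweight way to keep assessments continuous rather than one-off.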
Manage AI healthcare systems in a HIPAA-compliant way
HIPAA regulations can feel overwhelming, especially when AI is involved. But staying compliant isn’t optional—and ignorance comes at a high cost. The Office for Civil Rights (OCR) enforces HIPAA strictly, so it’s critical to be prepared.
That’s where Sprinto steps in. Our platform simplifies compliance and keeps you confidently aligned with HIPAA regulations. Sprinto transforms complex requirements into clear, manageable steps by automating the entire compliance process.
With Sprinto, you gain:
- Editable Policy Templates: Customize policies that align with the latest HIPAA standards.
- Built-In Employee Training: Keep your team up-to-date with training modules that reflect the latest changes.
- Real-Time Compliance Dashboard: Track your compliance status, identify gaps, and tackle tasks to stay audit-ready.
- Integrated Risk Management: Continuously and comprehensively monitor your AI environment to uphold the technical and administrative safeguards.
Ready to make compliance stress-free? Let’s kickstart your HIPAA journey today!