Artificial Intelligence continues to move quickly into the clinical workflow.

WVU Medicine recently announced that it is expanding its use of Abridge, an AI-powered transcription platform designed to generate clinical documentation during patient encounters. What began as a small pilot program in 2025 has now grown to more than 1,200 physicians, nurse practitioners, and physician assistants using the system across the organization, with more than 1,600 additional clinicians eligible to adopt the technology across WVU Medicine’s network of hospitals and outpatient facilities.

The platform works by recording the conversation between a clinician and a patient and then generating a draft clinical note that can be integrated directly into the Epic EHR system.

According to WVU Medicine leadership, clinicians using the platform save an average of 11 minutes per day that would otherwise be spent converting rough notes into finalized documentation. Some clinicians have also reported lower stress levels and reduced burnout associated with the administrative burden of charting.

From an operational standpoint, tools like this have obvious appeal. Documentation requirements continue to grow, and anything that allows physicians to spend less time charting and more time interacting with patients is likely to gain traction across healthcare systems.

But while the technology may be new, the legal expectations surrounding physician responsibility remain unchanged.

AI Tools Do Not Replace Physician Responsibility

As I discussed in my recent article recapping my presentation at PBI’s Health Law Institute, artificial intelligence tools should be understood as assistive technologies, not decision-makers.

Even when AI is used to generate clinical notes or assist with documentation, the physician remains responsible for the accuracy of the medical record.

In practice, that means clinicians still need to carefully review AI-generated documentation before finalizing the chart. These systems can dramatically improve efficiency, but they are not perfect, and errors in the medical record can create both clinical and legal risk. Courts evaluating malpractice claims will ultimately focus on the physician’s conduct, not the performance of the software.

In other words, if an AI-generated note contains inaccurate information that contributes to an adverse outcome, the question will not be whether the software made a mistake. The question will be whether the physician exercised appropriate professional judgment in reviewing and approving the documentation.

Compliance and Privacy Still Apply

There are also important compliance considerations when AI tools are integrated into patient encounters.

Because platforms like Abridge process sensitive patient conversations, healthcare systems must ensure that appropriate HIPAA protections, Business Associate Agreements, and patient consent procedures are in place. WVU Medicine has indicated that patient consent is required before recording begins and that the transcriptions are protected under HIPAA in the same way as any other medical documentation.

These safeguards are essential as healthcare organizations increasingly rely on AI-powered platforms to assist with clinical operations.

The Bottom Line

AI documentation tools like Abridge may meaningfully reduce administrative burden and allow physicians to reclaim time during the clinical day. That is a positive development in a healthcare system where documentation requirements have steadily increased for years.

However, the presence of artificial intelligence does not change the fundamental legal expectations placed on physicians.

Technology can assist with documentation and workflow, but the physician ultimately remains responsible for the accuracy of the final chart and the clinical decisions reflected in it.

As these tools become more common, the most prudent approach for healthcare providers is to view AI as a powerful assistant, not a substitute for careful review and professional judgment.