Microsoft unveils new AI assistant ‘Dragon Copilot’: How safe are AI tools in healthcare?

Microsoft’s new AI assistant, Dragon Copilot, promises to reduce doctors’ workloads. A look at how it works.

Microsoft has introduced a new voice-activated AI assistant designed to help doctors and healthcare professionals transcribe clinical notes, draft paperwork, and quickly search for information from medical sources.

The new healthcare AI tool, Dragon Copilot, is being offered as part of Microsoft Cloud for Healthcare. It harnesses the natural language voice dictation and ambient listening technology developed by AI voice company Nuance, which Microsoft acquired for $16 billion in 2021. These capabilities have been further fine-tuned using generative AI and adapted to incorporate healthcare safeguards, the company said in a blog post published on Monday, March 3.

The rise of generative AI tools for the healthcare industry is driven by the need to streamline administrative tasks, improve efficiency, and combat burnout among clinicians. Microsoft has emerged as a major player in the fast-growing market for AI note-taking tools.

“No one becomes a clinician to do paperwork, but it’s becoming a bigger and bigger administrative burden, taking time and attention away from actually treating and supporting patients. That’s why we’re introducing Microsoft Dragon Copilot, the industry’s first AI assistant for clinical workflow,” Microsoft CEO Satya Nadella said in a post on X.

Here’s a closer look at how Dragon Copilot works, what it can do, and the risks healthcare AI tools pose.

What is Dragon Copilot?

Microsoft Dragon Copilot is built on top of existing tools such as Dragon Medical One (DMO) and DAX rolled out by speech recognition company Nuance Communications.

According to Microsoft, DMO’s speech capabilities have helped clinicians transcribe “billions of patient records”, while DAX’s ambient AI technology has supported over 3 million patient conversations across 600 healthcare organisations in the past month alone.

Microsoft said the underlying architecture of Dragon Copilot enables organisations to deliver enhanced experiences and outcomes across care settings for providers and patients alike.

“With the launch of our new Dragon Copilot, we are introducing the first unified voice AI experience to the market, drawing on our trusted, decades-long expertise that has consistently enhanced provider wellness and improved clinical and financial outcomes for provider organisations and the patients they serve,” Joe Petro, the corporate vice president of Microsoft Health and Life Sciences Solutions and Platforms, said in a statement.

How is Dragon Copilot used?

Dragon Copilot can be used to draft memos and notes in a personalised style and format, as per Microsoft. Besides voice-to-text transcription, the Dragon Copilot user interface also allows users to submit prompts or use templates to create AI-generated notes.

Apart from documentation work, the AI assistant allows clinicians to search for general-purpose medical information from trusted sources. It can also be used to automate key tasks such as conversational orders, note and clinical evidence summaries, referral letters, and after-visit summaries, in one centralised workspace, Microsoft said.

To substantiate its claims of Dragon Copilot’s utility in healthcare settings, Microsoft cited survey findings indicating that the AI assistant saved clinicians five minutes per patient interaction. “Around 70% of clinicians reported reduced feelings of burnout and fatigue, 62% of clinicians stated they are less likely to leave their organisation, while 93% of patients report a better overall experience,” Microsoft said.

Dragon Copilot will reportedly be accessible through a mobile app, browser or desktop, and it integrates directly with several electronic health record systems. The AI assistant will be available in the US and Canada in May this year. Soon after, it will be launched in the UK, Germany, France, the Netherlands, and other key markets.

However, Microsoft did not provide details on the cost of accessing Dragon Copilot.

Rise of healthcare AI tools

Startups and tech companies are racing to introduce generative AI tools and hardware for the healthcare industry. On Monday, Google Cloud published a blog post outlining how healthcare providers are using the tech giant’s offerings in medical settings. Players such as Basalt Health are using Vertex AI, Gemini, and other Google Cloud technologies to create AI agents that support medical assistants in preparing patient charts, performing administrative tasks, and flagging potential health risks.

Startups such as Abridge and Suki have raised over $460 million and nearly $170 million, respectively, for developing similar AI scribing tools.

However, generative AI tools in healthcare carry the same risk as other large language model (LLM)-based systems: hallucinating, or making things up. This risk becomes even more critical in medical settings, where patient lives are potentially on the line.

Hidden risks of healthcare AI tools

The Associated Press found that OpenAI’s Whisper tool, used to transcribe doctors’ consultations with patients, was prone to hallucinations. Whisper made up chunks of text that included racial commentary and violent rhetoric as well as imagined medical treatments, the report claimed, citing interviews with several software engineers, developers, and academic researchers.

While acknowledging the potential applications of AI devices in healthcare, the US Food and Drug Administration (FDA) identified hallucinations in AI models as a challenge.

“For example, for a GenAI-enabled product that may be meant to summarise a patient’s interaction with a health care professional, the possibility of that product hallucinating can present the difference between summarising a healthcare professional’s discussion with a patient and providing a new diagnosis that was not raised during the interaction,” the FDA said in a report last year titled ‘Total Product Lifecycle Considerations for Generative AI-Enabled Devices’.

It also pointed out that foundational AI models “may be susceptible to bias that may be especially difficult for individual product developers to identify or mitigate for their resulting GenAI-enabled products”.

On Dragon Copilot, Microsoft has said that “healthcare-specific clinical, chat and compliance safeguards for accurate and safe AI outputs” have been incorporated into the AI assistant. The company added that Dragon Copilot has been developed in line with Microsoft’s responsible AI principles.

It has not elaborated on how the healthcare AI tool addresses risks of hallucination and performance bias.
