Speech Recognition Gaining Ground in Health Care

By Cindy Atoji

July 22, 2008 | The speech recognition market in health care is expected to more than double in size between now and 2013, according to Datamonitor, with growth driven by increasing use of technology and the lure of potential cost savings. Using speech recognition not just for transcription and dictation, but for clinical decision support, will take voice to the next level, says Klaus Stanglmayr, strategic product marketing manager for Philips Speech Recognition Systems, whose SpeechMagic document creation platform has over 8,000 installations worldwide. “Improvements in speech recognition have allowed it to be used in conjunction with electronic medical records, picture archiving, workflow applications, and more,” says Stanglmayr. Digital HealthCare & Productivity spoke with Stanglmayr about improvements in speech recognition and how it is increasing workflow efficiency and productivity.

DHP: How far has speech recognition come?
STANGLMAYR: I joined Philips 12 years ago, and things have changed quite significantly. In the beginning, you had to spend an hour or more training the system, but now it takes only a few minutes, and there are a few users who don’t train at all. The system uses intelligent language modeling software, so every time you use it, it adapts and optimizes to your particular way of speaking. Accuracy rates have improved, increasing the quality of the reports. If you look at radiology, for example, users will dictate, correct errors, and then sign off, instead of having to record and then send it off to a transcriptionist, which has cut down on turnaround time.

There are still improvements needed, of course. The system works really well when the vocabulary and the way words are used are consistent. We’re looking at several emerging technologies as major areas of research. Doctors are becoming more mobile—they don’t just want to dictate in front of their PCs; they’re actually moving around, using tablet PCs and mobile devices such as smart phones or digital recorders. And once you move out of your office or room, you’re dealing with a lot of background noise and interruptions, which is not necessarily optimal input for recognition. The systems need to be more robust and able to run on a variety of devices and operating systems, allowing the physician to roam through a hospital or dictate from home.

And by converting speech into structured data, decision support information can be made available to doctors at the point of care. So this means integrating speech into evidence-based reference material for better decision-support and reduced medical errors, or combining it with a clinically intelligent parsing engine to capture and use medical concepts. We want to be able to go beyond just capturing information.

DHP: What’s the case for using speech recognition in health care?
STANGLMAYR: According to the American Medical Transcription Association, up to $12 billion is spent each year to transcribe medical dictation into medical text. Health care professionals can no longer rely on handwritten notes; as health care becomes automated, they need information in digital form. With speech recognition, you can almost instantly record and distribute information, especially when it is integrated with health IT systems. Alegent Health in Omaha, Nebraska, for example, has over 1,400 physicians and health professionals who dictate about 42 million lines a year. By adding a system-wide speech information management solution, they were able to see ROI through productivity gains, including a 17 percent increase in dictation volume. Speech recognition can also be used for commanding and controlling devices or applications, which allows a hands-free environment in specialties such as radiology, pathology, or cardiology. The technology also allows for templates and macro development, such as pre-defined text blocks and voice navigation.

DHP: What are some other areas of cost-savings with speech recognition?
STANGLMAYR: You’ll find return on investment in six to 12 months, or even less, depending on how it’s used. It’s quite impressive. Organizations can speed up document creation and reduce administrative costs while keeping workflow scenarios flexible. By eliminating manual typing, productivity can be increased by more than 70 percent. Further cost reductions can be achieved by implementing a network-based system that lets you combine front-end and back-end workflows. (In front-end workflows, the text appears directly on your screen as you dictate; back-end workflows run behind the scenes, with dictation converted into text in the background and then sent to a transcriptionist for editing.)

DHP: What are the basics for implementing a speech recognition system?
STANGLMAYR: You need to set expectations properly and manage change. Make sure you identify a product champion who ensures that the system will fit into its designated environment. The system needs to be used as it’s designed, so dictate as much as possible; if you only do one or two dictations a day, the system is not able to learn enough. If you want to run a pilot, choose a small sample group of five to 10 users, and don’t create some artificial way of using the system. If you use it the way it’s designed, it will provide the value you need. That doesn’t mean it won’t have errors once in a while.

DHP: What do you see happening to the speech recognition market in the future?
STANGLMAYR: The penetration of speech recognition in the U.S. is still pretty low, though it varies by specialty; radiology is a little higher, for example. Overall, it’s typically estimated at between 10 and 20 percent. Uptake has been faster outside the U.S., such as in Europe.

As speech recognition becomes more integrated into systems, it will become part of the workflow, and an integral part of the way people document. Physicians want to have a digital desktop where they have all the tools they need to interact with the system, so if they’re creating a document, they want to be able to dictate, and if they’re looking for references or adding information to the report, they want to be able to drive that with their voice. So it’s really expanding what speech recognition does today.

