Storing the Data Deluge


By Cindy Atoji

July 29, 2008 | Digital HealthCare & Productivity | If two trillion filing cabinets, or one billion terabytes, sounds like too much data to deal with, brace yourself. According to industry experts, health care data is growing so rapidly that by 2010 medical centers will need to be equipped to hold this massive volume of information. This exponential growth of data is straining storage and long-term archiving resources, says Dr. Richard Bakalar, IBM's chief medical officer.

With PACS (Picture Archiving and Communication Systems) creating isolated islands of infrastructure, Bakalar says that new-generation grid-based archival technology instead supports the interconnection of medical information, allows for easier data access, and makes it possible to manage the data lifecycle intelligently. “As health care organizations move from department- to enterprise-focused infrastructures, interoperability and compliance issues are presenting unique storage management challenges,” says Bakalar, who spoke to Digital HealthCare & Productivity about implementing cost-effective data storage solutions.

DHP: How can a storage management grid make it easier and faster to retrieve and store documents and images?
Bakalar:
Typically when you purchase a PACS or any application, it comes with attached temporary storage and a long-term archive. What hospitals have traditionally done is create separate archives for each application within the enterprise, producing a collection of archives that do not speak to each other. This adds complexity and reduces reliability because of the variability in how these archives are maintained.

A collaborative grid architecture is an enterprise storage solution that uncouples archives from a PACS or application and makes it possible to centrally manage and store data through a middleware layer: a software layer introduced between the application and the storage media that automates the management of information. Data is assigned a digital signature and then put into different storage pools: high-performance spinning disk, lower-performance SATA disk, tape, or, in the future, even flash drives or integrated hard drives.
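
To picture the mechanics Bakalar describes, here is a minimal Python sketch of such a middleware layer. The tier names, age thresholds, and the `ingest` function are illustrative assumptions, not any vendor's API: the sketch simply computes a digital signature for each object and routes it to a storage pool based on a lifecycle rule.

```python
import hashlib
from dataclasses import dataclass

# Hypothetical storage tiers, fastest/most expensive first.
TIERS = ["high-performance-disk", "sata-disk", "tape"]

@dataclass
class StoredObject:
    signature: str  # content digest used as the object's identity
    tier: str       # storage pool the object currently lives in

def ingest(payload: bytes, days_since_last_access: int) -> StoredObject:
    """Assign a digital signature and route the object to a storage pool."""
    signature = hashlib.sha256(payload).hexdigest()
    # Illustrative lifecycle policy: recent data stays on fast disk,
    # older data moves down the tiers.
    if days_since_last_access < 30:
        tier = TIERS[0]
    elif days_since_last_access < 365:
        tier = TIERS[1]
    else:
        tier = TIERS[2]
    return StoredObject(signature=signature, tier=tier)

print(ingest(b"CT study pixel data", days_since_last_access=400))
```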

Finally, many people are familiar with scaling up or down, but what is often less apparent is the ability to scale across medical specialties such as cardiology, pathology, and other content that isn’t typically considered for PACS storage. About 90 percent of medical data by volume is fixed content, and it can be put into a grid because it uses general industry standards beyond just the health care standards. That’s an important consideration for enterprises: being able to store all their fixed content (email, scanned documents, digital pathology files, streamed video from operating rooms) in a systematic way. So it provides a true enterprise approach to storing reference content, as opposed to the more modality-centric model we have today.

DHP: What are some issues as organizations make the transition to these grid systems?
Bakalar:
First and foremost, we find (and this was confirmed by many of the projects we did in Canada, such as the Canada Health Infoway initiative) that the most challenging issue has to do with governance: establishing policies for access, service-level agreements, retention and compression policies, and decisions about how cost savings will occur.

Because once you shift from an application-based or department-based system to an enterprise system, governance becomes a very important component of the management, execution, and reliability of the system. Business objectives have to be defined to distinguish between those who just want to store data collectively and those who actually want to share information between departments or institutions; the requirements for those two business objectives are quite different. So this governance piece is a very important part of deciding retention and compression policies: how much data is going to be stored, in what format, and how it can be restored through compression and decompression algorithms.
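
One way to picture the governance piece is as a single policy table that the middleware enforces across the enterprise instead of per application. The sketch below is hypothetical; the content types and compression flags are illustrative assumptions (the seven-year retention figure echoes the minimum Bakalar cites later in the interview).

```python
# Hypothetical governance policy table: retention and compression rules
# agreed on at the enterprise level rather than per application.
POLICIES = {
    "diagnostic-imaging": (7,    "lossless"),
    "pediatric-imaging":  (None, "lossless"),  # None = lifetime of the patient
    "scanned-documents":  (7,    "lossy-ok"),
    "or-video":           (2,    "lossy-ok"),
}

def describe(content_type: str) -> str:
    years, compression = POLICIES[content_type]
    keep = "patient lifetime" if years is None else f"{years} years"
    return f"{content_type}: keep {keep}, compression={compression}"

for ct in POLICIES:
    print(describe(ct))
```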

Beyond governance, the next level of concern is technical: using industry standards for interoperability so that you’re not locked into a single proprietary solution. The ability to leverage existing standards, such as DICOM and HL7, is a very important parameter that needs to be considered.
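
Part of what makes these standards valuable is that the formats are openly specified. An HL7 version 2 message, for example, is plain pipe-delimited text, so it can be read in a few lines of any language without a proprietary SDK. The sample message below is fabricated for illustration:

```python
# An HL7 v2 message: pipe-delimited fields, one segment per line.
SAMPLE_HL7 = (
    "MSH|^~\\&|RIS|HOSPITAL|PACS|RADIOLOGY|200807290800||ORM^O01|12345|P|2.3\r"
    "PID|1||MRN00042||DOE^JANE\r"
)

def parse_segments(message: str) -> dict:
    """Split a message into segments keyed by name (MSH, PID, ...)."""
    segments = {}
    for line in message.strip().split("\r"):
        fields = line.split("|")
        segments[fields[0]] = fields
    return segments

msg = parse_segments(SAMPLE_HL7)
print(msg["MSH"][8])  # message type: ORM^O01
print(msg["PID"][5])  # patient name: DOE^JANE
```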

There also needs to be consideration for data migration and for hardware and application refresh. Data can outlast the hardware significantly. In fact, the life of the data now is a minimum of seven years, but often as long as the lifespan of the patient. Therefore there will be a lot of transitions of the data from hardware to hardware over time. The ability to refresh that hardware through automatic data migration is very important, and so again, the data management system should be able to do that. This is a cost and time consideration that has to be designed into the system as you plan for it.
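
Automatic data migration of this kind reduces, at its core, to copy-then-verify. The following is a hedged sketch, with hypothetical read/write callables standing in for the old and new archives; the signature check mirrors the digital-signature idea described earlier:

```python
import hashlib

def migrate(read_old, read_new, write_new, object_ids):
    """Copy each object to the new archive and verify its digital
    signature before the old copy is retired. The read/write callables
    are hypothetical stand-ins for the two storage systems."""
    migrated, failed = [], []
    for oid in object_ids:
        data = read_old(oid)
        digest = hashlib.sha256(data).hexdigest()
        write_new(oid, data)
        # Re-read from the new hardware and compare signatures.
        if hashlib.sha256(read_new(oid)).hexdigest() == digest:
            migrated.append(oid)
        else:
            failed.append(oid)  # keep the source copy for these
    return migrated, failed

# Example with in-memory dicts standing in for the two archives:
old, new = {"study-1": b"pixel data"}, {}
ok, bad = migrate(old.__getitem__, new.__getitem__, new.__setitem__, list(old))
print(ok, bad)
```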

And as the PACS applications themselves improve from one version to the next, there may be a need to refresh the aggregate database as well. That can be done through a second layer, an image management layer, which virtualizes the database. So those are two separate layers and two separate objectives that have to be met from a technical perspective.

DHP: What is the ROI of changing to a storage grid?
Bakalar:
There are two major costs that differ between the traditional and storage-grid environments, one of which is the migration of data. The cost to migrate data is on the order of $10,000 to $15,000 per terabyte using the traditional services available for migrating either hardware or PACS. That becomes a significant expense over time, and typically a migration occurs every three to four years, depending on the age of the hardware and the applications.
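
To make those figures concrete, a back-of-the-envelope calculation at the quoted rates (the 100-terabyte archive size is an assumed example):

```python
# Back-of-the-envelope migration cost at the quoted rates.
archive_tb = 100  # assumed example archive size
for cost_per_tb in (10_000, 15_000):
    print(f"{archive_tb} TB at ${cost_per_tb:,}/TB = ${archive_tb * cost_per_tb:,}")
# With a refresh every three to four years, this cost recurs for the
# life of the data.
```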

The other cost saving is that in the traditional model, applications are tied to a single storage archive. You under-utilize the storage resources because you have to build in extra capacity for surges in each application's needs. In a storage grid environment, you can fully utilize the storage media and therefore avoid pre-buying more expensive storage today for what you might need a few years from now. Instead you purchase storage just in time and utilize it at the 80 to 90 percent level, as opposed to the traditional approach, which typically runs at 30 to 40 percent utilization because of the excess capacity each application requires. So you're able to get economies of scale.
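
The utilization argument can be made concrete the same way. Assuming a hypothetical 30-terabyte working set, the capacity you must purchase at the two utilization levels Bakalar cites differs by better than a factor of two:

```python
# Purchased capacity needed for a hypothetical 30 TB working set at the
# utilization levels cited above (midpoints of 30-40% and 80-90%).
data_tb = 30
traditional = data_tb / 0.35  # siloed, per-application archives
grid = data_tb / 0.85         # shared storage-grid pool
print(f"Traditional silos: buy ~{traditional:.0f} TB")
print(f"Storage grid:      buy ~{grid:.0f} TB")
```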

And third, in the IBM Grid Medical Archiving Solution (GMAS), we offer tiered pricing as well, so as you reach a higher tier of storage, the incremental cost per additional terabyte is lower.

So there are several levels of return on investment, and oftentimes the payback period can be 12, 14, or 18 months, depending on how many of these issues apply in your particular business environment.

DHP: Some organizations have begun outsourcing their data storage needs in order to save money. What are the pros and cons of outsourcing?
Bakalar:
External storage service providers may be able to meet sudden increases in demand for capacity and offer consistently high-performance delivery, especially when they have invested in technology such as GMAS. Outsourcing vendors are also often able to control operating costs and can be better equipped for disaster recovery. Finally, internal IT staffing can be reduced when long-term archive responsibility is outsourced.

But the cons include competitive business risk when data is shared outside the organizational firewall, as well as medical liability risk if HIPAA compliance is not maintained. In addition, there can be performance concerns related to the quality of the wide-area network, and escalating operating costs when the savings are not passed on to the customer.
