Three Pharma Data Challenges and How to Overcome Them
By Gary Palgon
April 29, 2016 | Contributed Commentary | There’s no lack of ideas about what is needed to advance life sciences research and improve patient care: real-time research, humanizing big data, enhancing customer values and motivation, and that’s just for starters. But for these and many other ambitions, it is widely recognized that “data is king,” and, unfortunately, the inability to access the right data, at the right time and in the right format, continues to prevent organizations from moving as quickly as they would like to develop and bring new products to market.
It was evident that these data struggles were on the minds of attendees at the recent Bio-IT World 2016 Conference in Boston. Here are three major obstacles that I heard discussed more than once, along with my thoughts on what needs to be done in order to advance.
Pharma Data Challenge #1: Access to EHR Data
More providers than ever before are using electronic health record (EHR) systems, and individuals are beginning not only to access their data, but also to build their own personal health records (PHRs). This is key to obtaining real-world evidence for research. The challenge, however, is the troubling trend of “data blocking.” Said another way, it’s the “tax” that EHR/EMR vendors charge to allow the export of patient information from their applications so it can be exchanged with other organizations. Though providers have already licensed the applications, vendors have put this additional barrier in place, creating not a technical problem but a business one. Pharmaceutical companies, along with providers and patients, must encourage (or force) EHR/EMR vendors to make the patient data stored within these systems openly available. This idea gained steam beginning in January at the JP Morgan Healthcare Conference and again in March at the Healthcare Information and Management Systems Society (HIMSS) Conference and, as a result, multiple vendors have begun openly supporting this initiative. Let’s encourage others to follow suit and make the vast amount of patient data available for research.
Pharma Data Challenge #2: Interoperability Remains a Problem
Year after year, individuals hold out hope that standards will solve their interoperability problems, but the truth is that “standard” is a one-word oxymoron. As pharmaceutical organizations scale up their clinical trial operations, running more trials with more Contract Research Organizations (CROs) and more electronic data capture (EDC) applications, integrating and harmonizing information across these systems becomes more difficult. To overcome these interoperability problems, organizations should look to outsource this non-core competency, and non-competitive differentiator, to companies whose expertise fills this exact need. This frees scientists from spending an estimated 80% of their time as “data janitors” and lets them spend their precious time analyzing data instead. Data Platform as a Service (dPaaS) is one example of an outsourcing model that is gaining traction, where an integration center of excellence is run as a managed service, serving up data when needed, where needed, and in the desired format.
Pharma Data Challenge #3: Applications Have Created Data Boundaries
Historically, when companies have a problem to solve, they look to an application to solve it. Applications often present “sexy” user interfaces that, with demo data, present really well. But over time, another problem arises: applications create boundaries around data. This is because applications are built on data models that answer specific questions at a specific point in time. In today's forward-looking world, however, we don’t really know the questions that researchers will want to ask in the future, so we have to change our methodology and create an environment where data persists in raw form and application logic can be applied at any time to answer today's pressing questions—and tomorrow's as well. You will hear this concept referred to in the big data world as “schema on read,” where the extract, transform and load (ETL) model moves to an extract, load and transform (ELT) model. With this approach, data remains fluid and unconstrained, only taking structure when research questions are asked of it.
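The schema-on-read idea above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's implementation: records are persisted exactly as they arrive (the "load" in ELT), and a structure is imposed only at query time, so records that lack the fields a given question needs are skipped rather than rejected up front. All field names here (`patient_id`, `labs`, `ldl_mg_dl`) are hypothetical.

```python
import json

# Raw records persisted as-is: heterogeneous shapes, no up-front schema.
raw_store = [
    '{"patient_id": "p1", "labs": {"ldl_mg_dl": 131}, "site": "A"}',
    '{"patient_id": "p2", "labs": {"ldl_mg_dl": 98}}',
    '{"patient_id": "p3", "vitals": {"bp": "120/80"}, "site": "B"}',
]

def patients_with_ldl_over(threshold):
    """Apply an expected structure at read time ("schema on read").

    The transform step happens here, when the question is asked, not
    when the data was loaded. Records without the needed fields are
    simply skipped, so tomorrow's questions can reuse the same raw store.
    """
    matches = []
    for line in raw_store:
        record = json.loads(line)  # structure imposed only now
        ldl = record.get("labs", {}).get("ldl_mg_dl")
        if ldl is not None and ldl > threshold:
            matches.append(record["patient_id"])
    return matches

print(patients_with_ldl_over(100))  # ['p1']
```

A question about blood pressure would be a second read-time function over the same unchanged raw store, which is the point: the data's usefulness is not limited by the data model of the application that first captured it.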
Gary Palgon is VP Healthcare and Life Sciences Solutions at Liaison Technologies. He can be reached at GPalgon@liaison.com.