
Whitepapers & Special Reports

HPDA for Personalized Medicine
Translational and precision medicine are pushing data analysis requirements to new levels. This white paper explores the factors driving those analytic requirements and the benefits of a multi-disciplinary approach that applies high performance data analytics to the massive data volumes generated by next-generation sequencing, so organizations can shorten their time to discovery, help identify the causes of disease, and enable personalized treatments.
Wiley ChemPlanner Synthesis Solved
In this case study, Wiley ChemPlanner was applied to help a chemist identify alternative and shorter synthetic routes to target molecules, including options that were not known to the chemist and would likely not have been found by searching the literature with traditional search tools. ChemPlanner thus helped the chemist increase the efficiency of the synthesis development process by reducing the time and resources spent.

Whitepapers & Special Reports Archive
Pre-Empting Tomorrow’s Data Overload with Today’s Technology
Can scalable Network Attached Storage (NAS) provide a viable solution for your organization’s growing storage demands? This complimentary whitepaper from Enterprise Management Associates examines ways that small to large businesses can scale their storage to fit their budgets today, speed storage access, and prepare for the future as their demands grow from terabytes to petabytes.
The Fourth Paradigm of Science
In the era of information, everything about science is changing. Experimental, theoretical, and computational sciences are all being affected by this data deluge, and a fourth, “data-intensive” science paradigm is emerging. We call this fourth paradigm Reverse Informatics: the process of recovering the “source data” from the primary scientific literature. In this whitepaper we discuss the application of Reverse Informatics in scientific research. Parthys Reverse Informatics is a leading information research organization that supplies solutions for all aspects of drug discovery informatics, including cancer informatics, neuroinformatics, cheminformatics, pharmacoinformatics, and translational informatics. Reverse Informatics’ three services are literature curation, patent analytics, and thought/opinion leader research.
Cloud Computing that's Ready for Life Sciences
Over the past decade the embrace of cloud computing by life sciences and healthcare has been comparatively slow. Concerns about security, performance, and regulatory compliance persuaded many organizations that the downside risk was greater than the upside potential. Moreover, workable alternatives, based largely on internal data centers and private networks, were available and well understood. Not surprisingly, the cautious life sciences and healthcare (LS&H) community resisted change. Today, that picture looks very different.
If You’re Moving to the Cloud, Who Are You Bringing with You?
Cloud, mobile, and social technologies are making it easier than ever for organizations and individuals to work productively. Anywhere, anytime access to applications and information inside and outside of the enterprise is becoming standard operating procedure.
De-Identification 101
Big data and big privacy can go together. Safely and securely share health data with the right strategy: de-identification. Learn everything there is to know about the process in Privacy Analytics’ white paper. De-identification takes data masking to a whole new level, ensuring quality, granular data while minimizing the risk of data breach and re-identification. HIPAA compliant, de-identification goes even further, protecting sensitive information while maintaining data utility for secondary purposes.
Using Cloud-Based Discovery Support
Quickly and easily discover valuable insights in regulatory intelligence across various disparate collections of unstructured content to support plans for new product development, to predict future performance, to advance scientific and manufacturing methods, and to improve the company’s quality and management.
Making the lab of the future today’s reality
Get to the heart of what concepts such as ‘the lab of the future’ and ‘the paperless lab’ really mean while exploring the future of R&D. Examine the main drivers transforming the way researchers work and the response of the R&D enterprise software sector to the challenges of change.
Accelerating the Pace of Discovery In the Age of Data Intensive Science
21st Century science and discovery is driven by Big Data and advanced collaboration. Genomic and biomedical researchers have long been challenged with finding effective ways of exchanging Big Data with distant colleagues. This whitepaper details how the 100G Internet2 Network and related solutions are solving challenges for the biomedical research and healthcare communities’ advanced technology and remote collaboration needs.
Looking Forward: The Case for Intelligent Design (and Infrastructure) in Life Science Biologics R&D
This white paper highlights key issues facing biologics discovery researchers and product developers today and the new capabilities being brought forth by advances in science and technology. A discussion of how Dassault Systemes’ new BIOVIA Biologics Solution helps address these issues is included along with anticipated potential barriers to adoption.
Data Virtualization: Agile Data Solutions for Life Sciences
This interactive eBook walks you through the technical and business efficiencies realized by Life Sciences companies using Data Virtualization. It includes real-life use cases that provide a glimpse of how Data Virtualization can make a winning difference in becoming agile, scalable, and cost efficient, whether in research and discovery, drug or medical device product development, or customer operations.
5 Ways to Manage the Rising Costs of Benefits for Small and Medium-Sized Businesses
A robust benefits package can be the difference between a talented superstar choosing to work with you as opposed to a larger, more established competitor. But with the cost of benefits rising each year, providing a benefits package that helps you attract, retain, and motivate talented people gets harder all the time. With knowledge and planning, you can create a win-win situation for yourself and your employees. You can use benefits to meet the needs of your workforce and successfully compete for top talent. As a business owner, it’s important to be aware of the ways you can manage costs and attract top hires.
How to Safeguard PHI
Context is king when it comes to safeguarding Protected Health Information (PHI). As patients, we share many personal details with our care providers. Concerns over who has access to this information and how it may be used can cause us as much worry as our health issues. Effectively safeguarding PHI means knowing who will have access to the data, how it will be stored and what details it contains. In other words, its context for use.
Taneja Group: Data-Aware Storage Category Report
Imagine data storage that’s inherently smart; it knows what type of data it has, how it’s growing, who’s using it or abusing it. Welcome to the new era of data-aware storage; it could not have come at a better time. This new data storage category offers tremendous real-time intelligence without impacting performance or quality of service.
Turning Text into Insight: Text Mining in the Life Sciences
Given the volume of scientific literature and the pace at which it is published, it’s neither feasible nor cost-effective for researchers to read and analyze this material one article at a time. Life science researchers use text mining tools to analyze massive amounts of information quickly to extract data, assertions, and facts. Read this paper to learn about text mining and three approaches to maximize its efficiency and potential for discovery.
Considerations for Modernizing Scientific Compute Research Platforms
This white paper looks at the computing and storage needs of pharmaceutical research, disease research, and precision medicine environments and matches them to the technologies being used to modernize their infrastructures.
Transforming Large Scale Genomics and Collaborative Research
As sequencing data explodes in volume, the ability to transform this data into biomedical insights will require a cost-effective, open standards-based computational infrastructure built for extreme scalability. Infrastructure and tools will need to be optimized accordingly to execute, store, and process massive genomic workflows.
The Rise of The Biologists: The Changing Face of The Bioinformatics Industry
As in-depth biological knowledge is increasingly a prerequisite for research success, a shortage of bioinformatics skills presents an exciting opportunity for research biologists. In this whitepaper we discuss the context, the risks, and Thomson Reuters’ first application designed specifically for bench biologists.
Turning Data Into Insight: Cognitive Search & Powerful Analytics for Life Sciences
Finding relevant knowledge in the complex and diverse data of Biopharma companies requires cognitive systems using Natural Language Processing (NLP) that are capable of “understanding” what unstructured data from texts and videos is about. This whitepaper highlights how Cognitive Search and Analytics are key elements for driving innovation and improving the efficiency of research, clinical trials, and regulatory processes.
From Convergence Vision to Reality
Yet despite delivering improved trial efficiency, the proliferation of diverse tools – clinical trial management systems (CTMS), randomization and trial supply management (RTSM), and electronic data capture (EDC) to name just a few – has also produced a ‘technology chaos’ as users and vendors struggle to knit the new tools into comprehensive solutions. To a large extent this isn’t surprising. Technology adoption across most
OpenStack for HPC: Meeting Your Varying HPC Requirements with a Flexible Private Cloud
It is no secret that more and more organizations are moving to cloud-based architectures. Many are using OpenStack, the open source cloud computing platform, for this transition. And increasingly, OpenStack ecosystems are being considered and used to execute High-Performance Computing (HPC) workloads.
The insideHPC Guide to Genomics
Dell has teamed with Intel® to create innovative solutions that can accelerate the research, diagnosis, and treatment of diseases through personalized medicine. The combination of leading-edge Intel® Xeon® processors and the systems and storage expertise from Dell creates a state-of-the-art data center solution that is easy to install, manage, and expand as required. Labelled the Dell Genomic Data Analysis Platform (GDAP), this solution is designed to achieve fast results with maximum efficiency. The solution is architected to address a number of customer challenges, including the perception that implementations must be large-scale, as well as compliance, security, and clinician use. Read this white paper to learn more.
ProQinase: Syngeneic mouse models as a tool to study immune modulatory effects of cancer therapeutics
Every cancer treatment has the potential to induce a stimulatory or inhibitory effect on the immune response to a tumor. Scientific knowledge about the significance of the immune system for tumor eradication during conventional treatment is growing quickly, and an increasing number of immune-modulating drugs are entering clinical trials for cancer treatment. This necessitates investigating these drugs in the presence of an intact immune system, and syngeneic tumor models are the ideal tool to achieve this. In addition to our many xenograft mouse models, we established several syngeneic tumor models, which we thoroughly characterized with respect to immune phenotyping and response to immune checkpoint inhibition. Four of them are introduced in this white paper.
IBM and Accelrys Deliver Turnkey NGS Solution
Fast and affordable NGS is fundamentally transforming the healthcare and life sciences industries. By improving time-to-market for preventive and personalized medicine, companies can save millions of dollars in drug discovery and development costs while delivering innovative therapies. Accelrys running on IBM systems provides an optimal environment for the rapid extraction of biological meaning from NGS data. And deployment is simple with the pre-integrated IBM Application Ready Solution for Accelrys, based on a joint reference architecture developed with Accelrys.
An Infrastructure to Meet Today's Life Sciences Research Requirements
Due to the growing data volumes involved in life sciences research and the need for speedy analysis, traditional IT infrastructures, whether monolithic symmetric multiprocessing systems or loosely integrated high performance computing (HPC) clusters, do not fare well. The problem is that such infrastructures are hard to scale or struggle to deliver the needed performance, and as a result they can be an obstacle to research progress and investigative success.
Enabling Data Transfer Management and Sharing In the Era of Genomic Medicine
As sequencing technologies continue to evolve and genomic data makes its way into clinical use and medical practice, a momentous challenge arises: how to cope with the rapidly increasing volume of complex data. Issues such as data storage, access, transfer, sharing, security, and analysis must be resolved to enable the new era of genomic medicine.
From Convergence Vision to Reality
A detailed discussion of the technology used in Perceptive MyTrials is beyond the scope of a short paper, but a substantive overview is instructive. MyTrials is delivered as SaaS and is based on a federated architecture that emphasizes standards (XML, SAML, BRIDG, etc.) where practical, openness for third-party integration, data virtualization techniques that minimize data movement and speed performance, and agile development techniques to accommodate rapid change.
SLIPSTREAM APPLIANCE: NGS EDITION - MIT's Ed DeLong Sifts Microbial Treasure Troves Using SlipStream
Deciphering how marine microbial communities influence the world’s energy and carbon budgets is the province of Ed DeLong, prominent metagenomics researcher at MIT and member of the National Academy of Sciences. Few scientists match DeLong’s animated eloquence when discussing the quest to understand lowly microbial “bugs” – a pursuit that today depends heavily on next generation sequencing (NGS), powerful computational tools, and submersible robots able to roam the sea.
Bridging the gap between compliance and innovation
Success in medical device manufacturing requires continual innovation in order to deliver improvements in the quality of patient care. This in turn drives business revenue and profits. At the same time, device manufacturers need to comply with the extensive quality systems regulations as issued by the Food and Drug Administration (FDA) and other regulatory bodies and standards organizations.
Comply or Perish: Maintaining 21 CFR Part 11 Compliance
The biggest challenges for Life Sciences companies today are maintaining a robust product pipeline and reducing time to market while complying with an increasing and evolving multitude of federal and international regulations. In this paper, we discuss the particular requirements of 21 CFR Part 11 and describe how OpenText Regulated Documents, built on OpenText Content Server (the leading collaborative knowledge management software from OpenText), enables Life Sciences companies to comply with 21 CFR Part 11.
Enterprise Informatics: Key to Precision Medicine, Scientific Breakthroughs, and Competitive Advantage
Given their level of investment in data and data management systems, healthcare delivery and life sciences organizations should be deriving considerable value from their data. Yet most organizations have little to show for their effort; the capabilities of their systems are highly compromised, and the practice of precise, evidence-based medicine remains elusive. The fact that these institutions have spent many years collecting data and building infrastructure for so little return has, for many, become “the elephant in the room”—a painfully obvious and uncomfortable topic of conversation.
OpGen's Whole Genome Mapping Tackling Sequencing's Unfinished Business
Important projects once deemed impractical are now within reasonable reach, and modest sequencing studies are done in a few weeks. Consider the ambitious 1000 Genomes Project, launched in January 2008 to develop a comprehensive resource on human genetic variation. In November 2012, the project successfully completed its first phase – publication of variation from 1,092 human genomes – a remarkable feat.
LIFE SCIENCES AT RENCI - Big Data IT to manage, decipher, and inform
This white paper explains how the Renaissance Computing Institute (RENCI) at the University of North Carolina uses EMC Isilon scale-out NAS storage, Intel processor and system technology, and iRODS-based data management to tackle Big Data processing, Hadoop-based analytics, and security and privacy challenges in research and clinical genomics.
Hadoop's Rise in Life Sciences
By now the ‘Big Data’ challenge is familiar to the entire life sciences community. Modern high-throughput experimental technologies generate vast data sets that can only be tackled with high performance computing (HPC). Genomics, of course, is the leading example. At the end of 2011, global annual sequencing capacity was estimated at 13 quadrillion bases and growing rapidly. It’s worth noting that a single base pair typically represents about 100 bytes of data (raw, analyzed, and interpreted).
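Taken at face value, those two figures imply an enormous annual storage footprint. The short sketch below is a back-of-envelope scale check based only on the estimates quoted above; the numbers are illustrative, not measurements.

```python
# Back-of-envelope scale check using the figures quoted above (2011 estimates).
BASES_PER_YEAR = 13e15   # ~13 quadrillion bases sequenced globally per year
BYTES_PER_BASE = 100     # ~100 bytes of raw, analyzed, and interpreted data per base pair

total_bytes = BASES_PER_YEAR * BYTES_PER_BASE
print(f"~{total_bytes / 1e18:.1f} exabytes of sequence-related data per year")  # ~1.3 exabytes
```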
Challenges in Next-Generation Sequencing
The goal of Next Generation Sequencing (NGS) is to create large, biologically meaningful contiguous regions of the DNA sequence—the building blocks of the genome—from billions of short sequence fragments. Whole genome “shotgun sequencing” is the best approach based on cost per run, compute resources, and clinical significance. Shotgun sequencing is the random sampling of read sequences from NGS instruments with optimal coverage. NGS coverage is defined as: number of reads × (read length / genome length). The number of reads is usually in the millions, with the read length and genome length quoted in base pairs. The length of the human genome is about 3 billion base pairs.
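As a rough illustration of the coverage formula above, the sketch below plugs in hypothetical run parameters (the read count and read length are placeholders, not figures from the white paper):

```python
# Simple coverage estimate using the formula quoted above:
#   coverage = number_of_reads * (read_length / genome_length)

def estimate_coverage(num_reads: int, read_length_bp: int, genome_length_bp: int) -> float:
    """Return mean fold coverage for a shotgun sequencing run."""
    return num_reads * (read_length_bp / genome_length_bp)

# Hypothetical example: 600 million reads of 150 bp against a ~3 billion bp human genome.
print(f"{estimate_coverage(600_000_000, 150, 3_000_000_000):.0f}x coverage")  # 30x
```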
Heterogeneous Computing in the Cloud: Democratizing Compute Resources for Life Sciences
The combination of heterogeneous computing and cloud computing is emerging as a powerful new paradigm to meet the requirements for high-performance computing (HPC) and data throughput throughout the life sciences (LS) and healthcare value chains. Of course, neither cloud computing nor the use of innovative computing architectures is new, but the rise of big data as a defining feature of modern life sciences and the proliferation of vastly differing applications to mine the data have dramatically changed the landscape of LS computing requirements.
Reap the Benefits of the Evolving HPC Cloud
Harnessing the necessary high performance compute power to drive modern biomedical research is a formidable and familiar challenge throughout the life sciences. Modern research-enabling technologies – Next Generation Sequencing (NGS), for example – generate huge datasets that must be processed. Key applications such as genome assembly, genome annotation, and molecular modeling can be data-intensive, compute-intensive, or both. Underlying high performance computing (HPC) infrastructures must evolve rapidly to keep pace with innovation. And not least, cost pressures constrain large and small organizations alike.
High Performance and High Throughput
High-throughput genome sequencing, or next-generation sequencing (NGS), is being driven by the high demand for low-cost sequencing. NGS parallelizes the sequencing process, producing thousands or millions of sequences at once [1,2]. The latest NGS sequencers from 454 Sequencing [3], Solexa (Illumina) [4], and Applied Biosystems (SOLiD) [5] now routinely produce terabytes (TB) of data. For example, the SOLiD 5500xl produces over 4 TB of data in one run (~7 days). With the additional overheads of reference genome storage and access, and the type of analysis to be done, cost-effective, high performance, high throughput clusters and storage are required to handle these tasks. The ultimate goal is to bring the cost of genome sequencing down to around $1,000 with a turnaround time of one week, enabling personalized genomic medicine to become commonplace. Currently, turnaround times vary from one to four weeks depending on the cluster infrastructure, and costs remain high. Figure 1 below shows the current associated cost structure for a human-sized genome.
Optimizing Early Phase Oncology Clinical Trials
Oncology products continue to dominate the global therapeutics market. With anticipated continued strength, this therapeutic area will reach approximately $75 billion in global spending by 2015. Further, anticancer drugs continue to be the leading research therapeutic, with 672 oncology drugs in development.
Translational Research 2.0 by Chris Asakiewicz PhD
The World Wide Web has revolutionized how researchers from various disciplines collaborate throughout the world. In the Life Sciences, interdisciplinary approaches are becoming increasingly powerful as a driver of both integration and discovery. Data access, data quality, identity, and provenance are all critical ingredients to facilitate and accelerate these collaborative enterprises, and it is in the area of Translational Research where Web 2.0 technologies promise to have a profound impact—enabling reproducibility, aiding in discovery, and accelerating and transforming medical and healthcare research across the healthcare ecosystem. However, integration and discovery require a consistent foundation upon which to operate: a foundation capable of addressing some of the critical issues associated with how research is conducted within the ecosystem today and how it should be conducted in the future.
Turning Genomics Data into Practical Insight
Given the prodigious output of Next Generation Sequencing (NGS) instruments, high performance computing (HPC) has become the only practical way to sift through the data to discover useful insight, a point made clearly in a recent Nature Perspective, “The major bottleneck in genome sequencing is no longer data generation — the computational challenges around data analysis, display and integration are now rate limiting … Adequate computational infrastructure … including sufficient storage and processing capacity to accommodate and analyze large, complex data [is needed].”
BIG DATA: Managing Explosive Growth The Importance of Tiered Storage
Organizations are collecting and storing more data than ever before because their businesses depend on it. The trend toward leveraging Big Data for competitive advantage and to help organizations achieve their goals means new and different types of information—website comments, pharmaceutical trial data, seismic exploration results, to name just a few—are now being collected and sifted through for insight and answers.
The Swiss Institute of Bioinformatics Reduces Cost of Multi-Petabyte Storage by 50% with Quantum StorNext Software
When the SIB Swiss Institute of Bioinformatics was faced with spiralling data growth arising from next-generation sequencing, it deployed a hierarchical storage management (HSM) solution centered on Quantum StorNext data management software and HP hardware. This provided high performance file sharing and data protection and reduced SIB’s total cost of storage by 50%.


For more information, contact Angela Parsons at 781-972-5467.
For reprints and/or copyright permission, please contact Angela Parsons, 781.972.5467.