(Recorded on April 16, 2014)
As research becomes less expensive through the commoditization of instruments, the biosciences are seeing data volumes explode. With that come new challenges in data storage. Getting an early grip on managing that data, and creating an environment that supports researcher access to information, will make adaptation easier. This session will cover the trends and provide tips and recommendations for overcoming the data deluge.
- The core problem influencing bio-IT storage and data management efforts
- Reasons not to panic, and a few areas where a little panic may be necessary
- Must-haves and nice-to-haves when it comes to storage
- Where cloud storage may help
- How to separate capacity from performance in a science infrastructure
Chris is an infrastructure geek and co-founder of the BioTeam, http://www.bioteam.net, a specialist consulting firm occupying the niche between high-performance computing, IT, and discovery-oriented life science research. Chris has spent much of the last 15 years designing, building, fixing, and improving research-focused HPC and IT systems for use in demanding production-computing environments. He is occasionally known to blog, tweet, and speak about industry trends and best practices. Dagdigian received his bachelor's degree in Biotechnology from Worcester Polytechnic Institute (WPI).
Director of Product Management and Marketing
Jeff Tabor leads all product definition, release planning, and product marketing efforts. Prior to Avere, Jeff was a Senior Manager at NetApp, where he managed the Data ONTAP GX and Data ONTAP Cluster-Mode product lines. Before NetApp, he was the Product Manager at Spinnaker Networks, which developed a clustered NAS solution and was acquired by NetApp. Jeff holds S.B. and S.M. degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology.