March 14, 2006 | Editor’s Note: We are excited to introduce a new guest column in Bio•IT World — “Science and the Web.” This quarterly column, authored by Teranode’s Eric Neumann and Matt Shanahan, will explore new technology and applications of the Semantic Web, blogging, ontologies, community development, etc. The BioTeam’s “Inside the Box” returns next month.
You’d be hard-pressed to carry out any research project today without relying on the Web’s linked nature. The Web satisfies two large needs of science: it provides access to large and diverse data sets, and it serves as the primary communication system for publishing and searching research findings.
However, even as its importance in R&D grows, the Web’s simplicity as a set of linked HTML pages has constrained its ability to assist scientists more intelligently in searching, sharing, and annotating data. Using HTML, data can certainly be pointed to via a URL, but its structure depends on externally defined formats. Even XML doesn’t remedy this problem, as witnessed in the long process of defining document type definitions (DTDs): without a parser built for a predefined DTD or XML schema, no application can understand how you represent your data.
Counter to the nature of the Web is the practice of defining data in one monolithic structure. Where does the data about a given gene end, and where does the pathway it is involved in begin? At the splice-variant form, the modified-protein level, or the complex it’s part of? The goal of connecting complex information is not advanced by quibbling over the boundaries between biological, chemical, and medical objects. Must parsers be updated each time there is a new innovation in the science? How should we “link in” new data, annotations, and external references?
Today, we are at the mercy of the weakest link — the “href” found in every pointer on a Web page. A person can follow the link and make a decision about its information and relevance, but this doesn’t hold for search engines and applications assisting scientists. Many have suggested adding semantics to the links in HTML or XML, but each solution put forward is nonstandard and relies on some amount of application hard-coding.
What’s more, the evolutionary nature of science results in changing views and representations: the notion of a splice variant was not yet considered a few years ago when transcript data were first being defined. And if it has been an uphill climb to reach some semblance of standards around genomics, consider what we’ll be facing with biomarkers, haplotypes, pathways, and clinical observations!
RDF is a W3C specification that provides the missing link required to do for data what HTML did for pages. RDF is central to the Semantic Web, and it is all about linking data. It lets people treat each data element more like a linkable document, one that can be linked to any other data element. RDF adds the feature that both the data elements and the links are semantically typed (e.g., rdf:type). It does this by requiring that each and every data resource, as well as each link (or property), be specified as a uniform resource identifier (URI). That means every data element is guaranteed to be unique from anywhere on the Web. This is similar to how Web-page URLs work, but now we’re referring to genes, diseases, pathways, tissues, expression patterns, and so on. The resulting triple of URIs (subject resource, property, and object resource) provides an open and flexible means for linking data.
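To make the triple model concrete, here is a minimal sketch in plain Python. It represents each RDF statement as a (subject, property, object) tuple of URI strings; the URIs themselves are made up for illustration, and a real application would use an RDF library rather than raw tuples.

```python
# A minimal sketch of RDF's triple model using plain Python tuples.
# All URIs below are illustrative, not real identifiers.

# Each statement is a (subject, property, object) triple of URIs.
triples = {
    ("http://example.org/gene/BRCA1",
     "http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
     "http://example.org/schema/Gene"),
    ("http://example.org/gene/BRCA1",
     "http://example.org/schema/involvedIn",
     "http://example.org/pathway/DNARepair"),
}

# Because subjects, properties, and objects are all URIs, any element
# can be pointed to from anywhere -- much as pages link via href.
for s, p, o in sorted(triples):
    print(s, p, o)
```

Note that the property in the first triple is itself a URI (rdf:type), which is what makes the link semantically typed rather than an anonymous href.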
[Figure] RDF DATA INTEGRATION: Using RDF, all data are represented as linkable nodes (disks) defined by URIs.
Compare this to how anyone can make a link in the text of an HTML Web page, without being constrained as to where to place the hyperlink or what it points to. All Web applications and browsers are guaranteed to handle the <a href="format.html">link format</a> no matter where it occurs in the content. The author remains in charge of defining the link. With RDF, if you define <your:coolGene> and want to link in <my:coolFunctions>, you can simply add the link <your:coolGene> <has> <my:coolFunctions> without having to change a schema or restructure stored data. Furthermore, you can track these links over time to see who has added what.
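A small sketch of this point, echoing the <your:coolGene> <has> <my:coolFunctions> example above: adding a new link is just adding another triple, with no schema migration. The namespaces, the add_link helper, and the provenance log are all hypothetical names invented for this illustration.

```python
# Sketch: linking data across namespaces requires no schema change --
# a new link is just one more triple in the set.
# Namespaces and helper names here are hypothetical.

YOUR = "http://your.example.org/ns#"   # assumed namespace
MY = "http://my.example.org/ns#"       # assumed namespace

graph = set()       # an RDF graph is, in essence, a set of triples
provenance = []     # who asserted each link, so links can be tracked over time

def add_link(subject, prop, obj, who):
    """Add a triple to the graph and record who asserted it."""
    graph.add((subject, prop, obj))
    provenance.append((who, (subject, prop, obj)))

add_link(YOUR + "coolGene", YOUR + "has", MY + "coolFunctions", "alice")
```

The provenance list is the "who has added what" part: because each assertion is a self-contained triple, attribution can be attached per link rather than per document.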
Through RDF, data can take advantage of the Web paradigm, using triples to say “<from here> <through this property> <to there>” rather than just “href=...”. Many are convinced that RDF provides the missing link, and that XML as a universal data standard is insufficient in many cases, especially where data representations need to be flexible, evolving, and open. Why should anyone tell a researcher what she can or cannot associate with a protein object? It’s good to work with commonly accepted standards and to use tools that limit unwanted mistakes, but what if I want to add another observation (i.e., property plus object), such as <is a target for treating> <colon cancer>, in a way that does not disturb the data for others? If it’s clinically relevant now, but I need to wait two years for a committee to recommend it, then a lot of lives are taking a back seat to brittle technologies. In RDF, you simply add your local namespace with its predicates and link it to content based on other accepted standards.
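The “does not disturb the data for others” property can be sketched as simple set union: a researcher’s local annotations are kept in their own graph and merged with the shared graph at query time. The CURIE-style names (ex:, mylab:) and the triples themselves are made up for illustration.

```python
# Sketch: local annotations live alongside shared data.
# Merging two RDF graphs is set union; nobody's existing triples change.
# All names below are illustrative CURIE-style abbreviations.

shared = {
    ("ex:ProteinX", "rdf:type", "ex:Protein"),
}
local = {
    # the researcher's own observation, in her own namespace
    ("ex:ProteinX", "mylab:isTargetForTreating", "ex:ColonCancer"),
}

merged = shared | local   # union: shared data is untouched

# query: everything known about ProteinX, from both sources
about_x = {(p, o) for (s, p, o) in merged if s == "ex:ProteinX"}
```

Because the local predicate lives in its own namespace (mylab:), it cannot collide with anyone else’s vocabulary, and removing the local graph restores exactly the shared view.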
Many suggest that ontologies will be the key to integrating data properly. Although I see clear value in defining and using ontologies, there is nothing intrinsic to them that will make data integration any easier for scientists. Ontologies require a great deal of consensus building among community members (see BioPAX.org), and a common enterprise architecture that validates and connects data using ontologies is still lacking (though OWL makes this tractable). I, for one, see connecting existing data first through RDF’s triple model as a way to greatly facilitate the building of ontologies in a practical, bottom-up approach.
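One way to picture the bottom-up approach is to mine already-connected triples for regularities: tallying which properties are used with instances of each rdf:type suggests candidate classes and class properties for a future ontology. The data below are invented for illustration.

```python
# Sketch of bottom-up ontology building: starting from existing triples,
# tally which properties co-occur with each rdf:type. The result hints at
# candidate ontology classes and their properties. Data are made up.

from collections import defaultdict

triples = [
    ("ex:g1", "rdf:type", "ex:Gene"),
    ("ex:g1", "ex:involvedIn", "ex:p1"),
    ("ex:g2", "rdf:type", "ex:Gene"),
    ("ex:g2", "ex:involvedIn", "ex:p2"),
]

# map each subject to its declared type
types = {s: o for (s, p, o) in triples if p == "rdf:type"}

# collect the non-type properties observed per type
props_by_type = defaultdict(set)
for s, p, o in triples:
    if p != "rdf:type" and s in types:
        props_by_type[types[s]].add(p)

# Every ex:Gene instance uses ex:involvedIn -> a candidate class property
```

This is only a hint-generating pass, not ontology inference; the point is that the triples already collected do useful work before any committee has converged on a schema.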
Web services are another area that deserves mention, but we will save that for another time. For now, I will say that most Web services projects in life science research would benefit from considering RDF as the primary messaging form and defining the semantics through RDF Schema or OWL. The semantics of content have more relevance to end users than do the semantics of services.
On a final upbeat note, the Semantic Web for Healthcare and Life Sciences Interest Group (HCLSIG) had its first face-to-face meeting in January. The interests of the 60 participants are quite broad, but we anticipate that this breadth will be necessary to the future of integrated research and medicine. Ph.D.s and M.D.s alike had a chance to present their views, concerns, and hopes. It is generally hoped that through such efforts we will find ways to better utilize the data being created and advance the effective knowledge of science and medicine.
Eric K. Neumann is senior director of product strategy at Teranode. E-mail: email@example.com.