
Scientists have demonstrated that DNA computers can solve complex problems, but the jury is still out on whether they will ever become practical.

By Salvatore Salamone

Oct. 9, 2002 | So much research in the life sciences today is focused on DNA, identifying the critical sequences within genes that govern health and disease. Increasingly, this analysis demands vast computational resources, typically achieved by dividing the problem into small computational tasks carried out by numerous discrete processors or nodes.

But could the subject of this intense biomedical study — DNA — also provide the computational power to advance the field?

Many life science companies use grid, peer-to-peer, and clustering technologies to harness more computational power. And some use supercomputers, which rely on the same principle: Namely, problems are solved by distributing computational tasks among many independent nodes within the larger computer.

But the best any of these approaches can achieve is to get thousands of processors — fast processors — dedicated to solving one problem. That's where a different type of computer technology, DNA computing, holds considerable promise — at least in principle.

University of Southern California professor Leonard Adleman, a mathematician who has also done extensive work in biology, began thinking about using DNA strands as a computer back in 1994. He was inspired while reading the seminal textbook Molecular Biology of the Gene, co-authored by Nobel laureate James D. Watson, who co-discovered the structure of DNA.

Adleman was particularly intrigued with the enzyme DNA polymerase, which replicates strands of DNA. "The polymerase works like a machine," Adleman said in a 1996 interview with the USC magazine Networker. "It comes along, it latches onto the DNA, and it slides down the strand. As it moves down, the polymerase creates a new strand."

The process reminded Adleman of early efforts to build computers. In the 1930s, Alan Turing conceived of what became known as the Turing machine: a device that would read a single symbol of data off a tape, perform some action on that data based on instructions stored in the machine, and then write the result back in place of the original data.

"Turing had described a machine running along a tape of digital information, and now here was this polymerase, which runs down strands of DNA information," Adleman said in the interview. "In the middle of reading this, I thought, 'Wow! This is like a computer. This looks like it could compute.'"
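The tape-reading machine Turing described can be sketched in a few lines of code. The rule table and the unary-increment example below are invented for illustration; any table mapping a (state, symbol) pair to a (write, move, next state) action defines a machine.

```python
# A minimal sketch of the single-tape machine Turing described: read a
# symbol, look up an action in a rule table, write, move, change state.
# The one-state unary incrementer below is purely illustrative.

def run_turing_machine(tape, rules, state="start", head=0, max_steps=100):
    tape = dict(enumerate(tape))           # sparse tape: position -> symbol
    for _ in range(max_steps):
        symbol = tape.get(head, "_")       # "_" marks a blank cell
        if (state, symbol) not in rules:
            break                          # no matching rule: halt
        write, move, state = rules[(state, symbol)]
        tape[head] = write                 # overwrite the scanned cell
        head += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example: scan right over a block of 1s and append one more 1.
rules = {
    ("start", "1"): ("1", "R", "start"),   # skip existing 1s
    ("start", "_"): ("1", "R", "done"),    # write a 1 on the first blank
}
print(run_turing_machine("111", rules))    # -> 1111
```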

A DNA computer, as the name implies, uses DNA strands to store information and taps the recombinative properties of DNA to perform operations (see "How a DNA Computer Works," right). A small test tube of DNA strands suspended in a solution could yield millions to billions of simultaneous interactions at speeds — in theory — faster than today's fastest supercomputers.
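The recombinative property in question is Watson-Crick base pairing: a strand hybridizes only with its reverse complement. The sketch below models just that pairing rule; the example sequences are invented, and the actual encodings a DNA computer uses to represent values vary by experiment.

```python
# Watson-Crick complementarity is the pairing rule a DNA computer exploits:
# A binds T, C binds G, and a strand binds its reverse complement.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand):
    """Return the strand that would hybridize to `strand`."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

def binds(probe, target):
    """True if `probe` is the exact reverse complement of `target`."""
    return probe == reverse_complement(target)

print(reverse_complement("ATTGC"))   # -> GCAAT
print(binds("GCAAT", "ATTGC"))       # -> True
```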

How a DNA Computer Works 
In 1994, the first working demonstration of a DNA computer set out to solve an instance of the Hamilton path problem, a close relative of the traveling salesman problem.


Indeed, the spectacular speed of DNA computers has been apparent since Adleman's original demonstration of a DNA computer in 1994. "The computation popped along at 10^14 operations per second," Adleman said at the time.

That's a computational rate of 100 teraflops — 100 trillion floating-point operations per second. To put that into perspective, the world's most powerful supercomputer, NEC's Earth Simulator in operation at the Earth Simulation Center in Japan, was recently benchmarked as delivering a mere 36 teraflops of processing power.

DNA Computing Today 
Adleman and his colleagues at USC continue their efforts to advance the field of DNA computing beyond the realm of potential. Earlier this year they conducted an experiment designed to solve a problem with more than 1 million possible solutions, along with 20 variables that had to satisfy a set of complex criteria.

Nickolas Chelyapov, one of Adleman's colleagues, explains the problem using the analogy of a customer with a complicated list of criteria who goes car shopping at an auto dealership with 1 million cars. "The first criterion might be something like 'I want the car to be either a Cadillac, or red, or convertible,'" Chelyapov says. And next, "If it is a Cadillac, then it has to have four seats or a locking gas cap. Third, if it is a convertible, it should not be a Cadillac or it should have two seats." The customer lists 24 such sets of conditions.

To find a car that meets all the requirements in such a problem, the salesman would have to check the customer's entire list of demands against each car in the lot. While Adleman's team didn't plan to go car shopping, the problem they set out to solve was identical in structure to this example — and it had exactly one solution.

A traditional computer would solve the problem just as the salesman in the analogy would: by sequentially testing each car against all the criteria. In contrast, the DNA computer performs operations simultaneously, the equivalent of each car on the lot having an attendant who listens to the customer's demands and drives the car off the lot as soon as it fails to meet one of the conditions.
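The attendant analogy maps directly onto a small program. The sketch below uses three invented true/false attributes, loosely paraphrasing two of Chelyapov's conditions, rather than the team's actual 20-variable formula: every candidate starts on the lot, and each condition eliminates the candidates that violate it.

```python
from itertools import product

# Every candidate ("car") is present at the start, and each clause removes
# the candidates that fail it. The attributes and clauses are invented to
# echo Chelyapov's car-shopping analogy, not the real encoded problem.

VARS = ["cadillac", "red", "convertible"]

clauses = [
    # "I want the car to be either a Cadillac, or red, or convertible"
    lambda c: c["cadillac"] or c["red"] or c["convertible"],
    # "if it is a convertible, it should not be a Cadillac"
    lambda c: (not c["convertible"]) or (not c["cadillac"]),
]

# Start with the full lot: every combination of the three attributes.
lot = [dict(zip(VARS, values)) for values in product([True, False], repeat=3)]

for clause in clauses:
    # Each "attendant" checks one car; failures leave the lot at once.
    lot = [car for car in lot if clause(car)]

print(len(lot))  # -> 5 cars satisfy both conditions
```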

In this latest demonstration of parallel DNA computing, it took about four days of lab work to extract the answer.

Moving Beyond Conception 
Extracting an answer from a DNA computer is a challenge that has been apparent since the birth of the DNA computing concept. In the classic Hamilton path problem that Adleman used to test DNA computing in 1994 (see "The Biological Bits and Bytes of DNA Computing," right), it took about a week of lab work to extract the answer. One snag: Most people using a pen and paper could have figured out the answer in less than 30 minutes.

The Biological Bits and Bytes of DNA Computing 
DNA computers take advantage of DNA's physical properties to store information and perform calculations.


Naturally, harder problems would be more difficult to solve by hand, and demanding even for traditional computers. Still, extracting the correct answer from a solution of DNA strands is seen as a major hurdle to the acceptance of DNA computers.

One attempt to move beyond manually searching for the correct answer in the lab was demonstrated two years ago. In a January 2000 paper in Nature entitled "DNA computing on surfaces," Lloyd Smith and colleagues from the University of Wisconsin described a technique to expedite DNA computing. Called massively parallel elimination, the method uses strands of DNA immobilized on a chip that has a specially treated gold surface.

The group solved a 3-SAT problem, a classic example of a hard nondeterministic polynomial (NP) time problem: one for which the time required by known algorithms to find a solution increases exponentially with the number of variables involved. (In an easy problem, by contrast, the running time grows only polynomially with the number of variables.)

A hard NP problem can eat up a lot of computer cycles if carried out by brute force. For example, the Hamilton path problem, a close relative of the better-known traveling salesman problem, is a hard NP problem. If there are N cities in a Hamilton path problem, there are N!/2 possible paths, where N! is N factorial, the product of every integer from 1 to N; for example, 4! = 1 x 2 x 3 x 4 = 24.

As the number of cities grows, the number of possible path combinations soars. For example, if there are nine cities, there are 180,000 possible paths. Eleven cities would have 19.8 million paths, 13 cities would have about 3 billion paths, and 17 cities would have about 200 trillion paths. For larger and larger numbers of cities, brute force attempts to calculate all paths would quickly overwhelm even a supercomputer.
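The arithmetic is easy to verify; a few lines of code reproduce the article's figures from the N!/2 formula.

```python
from math import factorial

# N cities give N!/2 distinct paths (each route counted once per direction).
def path_count(n_cities):
    return factorial(n_cities) // 2

for n in (9, 11, 13, 17):
    print(n, path_count(n))

#  9 cities:            181,440 paths (the article's "180,000")
# 11 cities:         19,958,400 paths ("19.8 million")
# 13 cities:      3,113,510,400 paths ("about 3 billion")
# 17 cities: 177,843,714,048,000 paths ("about 200 trillion")
```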

The experiment by Smith's group tried something different. The group tackled a type of NP problem in which the answer is found by determining a solution that simultaneously satisfies a number of conditions, called clauses, each of which has three variables that can be true or false.

Smith and co-workers tackled a problem similar to the one Adleman's DNA computer solved this year. The Wisconsin team's problem had 30 conditions and 50 variables. Such a 3-SAT problem requires 1.6 million steps to solve using a traditional computer algorithm; it took only 91 steps using their modified DNA computing approach.

DNA strands representing all possible solutions were fixed to a chip's surface, and then strands representing the conditions were tested against all possible solutions one clause at a time. After the first condition strands were applied, any strands on the chip's surface without a complement were removed. The process was repeated until all condition strands were tested against the chip. The remaining strands on the surface were the solutions. Hence their name for the process: the massively parallel elimination approach.
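In software terms, the surface-based scheme amounts to intersecting the full candidate set with each clause in turn. The sketch below abstracts the strands as tuples of bits; the 4-variable formula is invented for illustration and far smaller than the Wisconsin team's problem.

```python
from itertools import product

# Sketch of surface-based elimination: strands for every candidate
# assignment start on the "chip", and each clause pass washes away the
# candidates that satisfy none of that clause's literals.

def solve_3sat_by_elimination(n_vars, clauses):
    # All 2^n candidate strands are present at once.
    surface = set(product([0, 1], repeat=n_vars))
    for clause in clauses:
        # A literal (i, v) is satisfied when variable i has value v.
        surface = {s for s in surface
                   if any(s[i] == v for i, v in clause)}
    return surface  # whatever survives every wash is a solution

# Invented formula:
# (x0 or x1 or x2) and (not x0 or x2 or x3) and (not x1 or not x2 or not x3)
clauses = [
    [(0, 1), (1, 1), (2, 1)],
    [(0, 0), (2, 1), (3, 1)],
    [(1, 0), (2, 0), (3, 0)],
]
solutions = solve_3sat_by_elimination(4, clauses)
print(len(solutions))  # -> 10 of the 16 candidates survive
```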

But even with such a significant reduction in steps, there remains skepticism that DNA computers will prove practical for large-scale problems.

Practical Limitations 
In the same issue of Nature that carried the Wisconsin team's findings, Mitsunori Ogihara and Animesh Ray, faculty members at the University of Rochester at the time, noted that while this massively parallel elimination approach outperformed traditional computers, there would be challenges to scale this approach to even larger problems.

The authors noted that there would be a high cost associated with producing the tailor-made DNA sequences needed for this approach (for example, the 50-variable problem required 10^15 unique strands of DNA). Moreover, there might be difficulties designing enough unique DNA strands that did not interfere with each other.

Others have raised similar concerns about these limitations. Ehud Shapiro, head of a group researching DNA computers at the Weizmann Institute of Science in Israel, says that "Today, DNA computers are limited to processing DNA that is synthetically designed."

And then there's the issue of errors introduced by the inherent sloppiness of DNA chemistry — a point Adleman echoed after his latest effort.

Such practical obstacles are leading many in the field to look for more modest applications for the technology. "It's possible that we could use DNA computers to control chemical and biological systems in a way that's analogous to the way we use electronic computers to control electrical and mechanical systems," Adleman said.

Despite its promise, the long-term outlook for DNA computing remains uncertain. Full-scale DNA computing systems are unlikely to replace silicon-based computers anytime soon, if ever, and significant limitations remain to be overcome. For the foreseeable future, and as its pioneer Adleman suggests, DNA computing will likely focus on small-scale applications rather than the building of full-blown computers.

For reprints and/or copyright permission, please contact Angela Parsons, 781.972.5467.