
Cracking the Genomics Code

This week I was in San Diego attending the International Plant and Animal Genome Conference. PAG brings together academic and commercial researchers and product vendors, with a particular emphasis on agricultural applications. (One of the first things I saw when entering the lobby was a sign directing attendees to a “Sheep and Cattle” workshop. I wondered who would be cleaning the hotel carpets.)

In an earlier post, Please pass the dot plots, I described how Greg Edvenson at Pico Computing used an FPGA cluster and C-to-FPGA methods to demonstrate acceleration of a DNA sequence comparison algorithm. The quick success of that project was reason enough for us to attend PAG and learn more about the computing problems in genomics. Where are acceleration solutions needed?

It’s clear there are problems aplenty to be solved. As one researcher said to us, “The amount of raw data being generated by DNA sequencers each month is outpacing Moore’s Law by a wide margin.” He went on to describe how his group routinely undocks their hard drives and hand-carries them down the hall, because moving the generated sequencing data across their network simply takes too long. Solutions are needed for accelerating data storage throughput, and for the actual computations to do such things as assemble whole genomes from the small chunks of scrambled DNA that currently emerge from sequencing machines.

Why all the data? The human genome is about 2.91 billion base pairs in length*, and it’s not the longest genome out there, not even close. We have more base pairs than a pufferfish (365 million base pairs) but far fewer than a lungfish (130 billion base pairs).

Evolution is a curious crucible.

Sequencing technologies have advanced quickly. Machines and software offered by Illumina, Life Technologies, Roche and others can generate enormous amounts of genetic data. The bottleneck at present is in assembling all that data – like a billion-piece jigsaw puzzle thrown to the floor – into a meaningful, searchable DNA sequence. Assembly tools such as ABySS and Velvet may require parallelizing the problem across many CPUs and using large amounts of intermediate memory – potentially terabytes of it.
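Both of those assemblers are built around the de Bruijn graph idea: chop every read into overlapping k-mers, then connect each k-mer’s prefix to its suffix, so the genome can be recovered as a path through the resulting graph. The sketch below is a toy illustration of that idea in C – my own example, emphatically not ABySS or Velvet code, with a tiny k and made-up reads:

    #include <stdio.h>
    #include <string.h>

    #define K 4  /* k-mer length; real assemblers use much larger values */

    /* Emit one graph edge per k-mer: its (k-1)-long prefix points
       to its (k-1)-long suffix. */
    static void emit_edges(const char *read)
    {
        size_t n = strlen(read);
        for (size_t i = 0; i + K <= n; i++)
            printf("%.*s -> %.*s\n", K - 1, read + i, K - 1, read + i + 1);
    }

    int main(void)
    {
        /* Two overlapping made-up reads standing in for millions of real ones. */
        const char *reads[] = { "GATTACAG", "TACAGATT" };
        for (size_t r = 0; r < sizeof reads / sizeof reads[0]; r++)
            emit_edges(reads[r]);
        return 0;
    }

Storing edges like these for billions of reads is where the terabytes of intermediate memory go; the real assemblers pile error correction, paired-end constraints and distributed data structures on top.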

If you are a researcher trying to figure out, for example, how to increase crop yields in sub-Saharan Africa, then you might be very interested in knowing how to breed a more pest-resistant and productive variety of barley (5 billion base pairs) or wheat (over 16 billion base pairs).

And if you’re DuPont or Monsanto, you may want to actually create and patent such a grain to have a competitive advantage.

To figure out such things, you may want to perform sequence comparisons against other species that appear to have the characteristics you are interested in, and find the relevant genetic variances. You won’t have a chance of doing this unless you can sequence many varieties and perform detailed analysis of what you see. This takes lots of computing time and bags of money.
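At its simplest, such a comparison can start with the dot plots mentioned earlier. Here is a minimal sketch in C – a toy of my own, not Pico Computing’s implementation – that marks every place where a short window of one sequence exactly matches a window of another:

    #include <stdio.h>
    #include <string.h>

    #define W 4  /* match window width; real tools make this tunable */

    int main(void)
    {
        /* Two made-up fragments, just to show the mechanics. */
        const char *a = "GATTACAGATTACA";
        const char *b = "TTACAGATTAC";
        size_t la = strlen(a), lb = strlen(b);

        /* Print '*' wherever a length-W window of a matches a
           length-W window of b, '.' otherwise. */
        for (size_t i = 0; i + W <= la; i++) {
            for (size_t j = 0; j + W <= lb; j++)
                putchar(strncmp(a + i, b + j, W) == 0 ? '*' : '.');
            putchar('\n');
        }
        return 0;
    }

Every cell of that grid is independent of every other, and the work grows with the product of the two sequence lengths – which is both why whole-genome comparisons are so expensive and why they map so naturally onto parallel hardware like FPGAs.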

And so the genomics industry looks for faster solutions for cracking the codes of life. The solutions involve cluster and cloud computing, GPUs and FPGAs, and perhaps exotic hybrid computing platforms to come.


*A “base pair” is two complementary nucleotides in a DNA strand, connected by hydrogen bonds. There are four bases that make up these pairs: adenine, thymine, guanine and cytosine. In the human genome only a small fraction of these base pairs actually represent genes. It seems our bodies are mostly “junk DNA”, perhaps proving that we are what we eat.
