The ongoing decoding of the human genome is one of the most ambitious scientific projects in history. But, two scientists claim, it is also overwhelming the technology meant to support it. Michael C. Schatz of Cold Spring Harbor Laboratory and Ben Langmead of Johns Hopkins just published an article in IEEE Spectrum warning of a DNA data deluge caused by a shortage of quality algorithms in bioinformatics software. The pair argue that while the cost of DNA sequencers (the scientific instruments used to determine the order of nucleotides in DNA) has gone down and their output has gone up, the software needed to analyze all that data hasn't kept pace.
“It’s a problem that threatens to hold back this revolutionary technology,” Langmead and Schatz write. “Computing, not sequencing, is now the slower and more costly aspect of genomics research.”
The pair call for the use of cloud servers and search-engine expertise to build a workable database of genomics discoveries that allows quick and easy searching. The National Institutes of Health maintains a massive public archive of sequencing data, but retrieving useful information from it is difficult in a way that only a government-built database can be.