Also acutely relevant to the problem I just described is this article by Bruce Schneier, who explains how the problems of computer and software security are very similar to those in biological engineering.
Programmers write software through trial and error. Because computer systems are so complex and there is no real theory of software, programmers repeatedly test the code they write until it works properly. This makes sense, because the cost of getting it wrong is low and trying again is easy. There are even jokes about this: a programmer would diagnose a car crash by putting another car in the same situation and seeing if it happens again.
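To make that loop concrete, here is a minimal sketch of the write-test-fix cycle in Python; the function, its bug, and the tiny test harness are all hypothetical, invented purely for illustration:

```python
# A hypothetical write-test-fix loop: write code, run the test, patch, rerun.

def days_between_years(start_year: int, end_year: int) -> int:
    """First attempt: naive day count that ignores leap years."""
    return (end_year - start_year) * 365

def days_between_years_v2(start_year: int, end_year: int) -> int:
    """Second attempt, after the test failed: account for leap days."""
    def leap_days_before(year: int) -> int:
        # Leap years strictly before `year`, per the Gregorian rules.
        y = year - 1
        return y // 4 - y // 100 + y // 400
    return ((end_year - start_year) * 365
            + leap_days_before(end_year) - leap_days_before(start_year))

def run_test(fn) -> None:
    # 2000 was a leap year, so Jan 1, 2000 to Jan 1, 2001 spans 366 days.
    expected, got = 366, fn(2000, 2001)
    print(f"{fn.__name__}: {'PASS' if got == expected else f'FAIL (got {got})'}")

run_test(days_between_years)     # FAIL (got 365) -> go fix the code
run_test(days_between_years_v2)  # PASS -> ship it (until the next bug)
```

The point is the rhythm: run the test, read the failure, patch, and run again, because each iteration costs almost nothing.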
Even finished code still has problems. Again, because modern software systems are so complex, “works properly” doesn’t mean the code is perfectly correct. Modern software is full of bugs, thousands of individual flaws, some of which occasionally affect performance or security. That’s why any piece of software you use is regularly updated; the developers are still fixing bugs, even after the software is released.
Bioengineering will be largely the same: writing biological code will have these same reliability properties. Unfortunately, the software solution of making lots of mistakes and fixing them as you go doesn’t work in biology.
In nature, a similar kind of trial and error is handled by “survival of the fittest,” and it plays out slowly over many generations. But code that humans write from scratch has no such correction mechanism. Inadvertent or intentional release of these newly coded “programs” could produce pathogens with an expanded host range (just think swine flu) or organisms that wreck delicate ecological balances.
We can’t release “gene patches” to correct errors introduced while tinkering with genomes! I can imagine that changing someday; by analogy, going in for dialysis is a bit like routine software maintenance. But no one likes dialysis: it is a symptom of an underlying problem that is being patched superficially rather than fixed, and modifying genomes can introduce new problems of its own. How often do software updates create new problems that weren’t present in previous versions? 100% of the time?
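As a toy illustration of how that happens in software, here is a hypothetical patch in Python that fixes the reported bug and quietly ships a new one; the functions and their bugs are invented for the example:

```python
# Hypothetical v1: users complained that " Alice" and "Alice" were treated
# as different people, because leading whitespace wasn't trimmed.
def normalize_name_v1(name: str) -> str:
    return name.lower()

# Hypothetical v2 "fix": strip the whitespace... all of it. The reported bug
# is gone, but a new one appears: "Mary Ann" now collides with "Maryann".
def normalize_name_v2(name: str) -> str:
    return name.lower().replace(" ", "")

# What v2 should have done: trim only the surrounding whitespace.
def normalize_name_v3(name: str) -> str:
    return name.strip().lower()

print(normalize_name_v2("Mary Ann") == normalize_name_v2("Maryann"))  # True: regression
print(normalize_name_v3("Mary Ann") == normalize_name_v3("Maryann"))  # False: fixed
```

In software you can push v3 the next morning; in a genome you can’t, which is the whole point of the analogy.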
I don’t think we consider the potential for disaster in genetic engineering enough, because we are so enthusiastic about its potential for great good. We need a balance. It would help if those most optimistic about gene modification gave more consideration to the dangers, for instance by talking to software security experts.
Opportunities for mischief and malfeasance often arise when expertise is siloed, when fields intersect only at the margins, and when the gathered knowledge of small, expert groups doesn’t make its way into the larger body of practitioners who have important contributions to make.
Good starts have been made by biologists, security agencies, and governance experts. But these efforts have tended to be siloed: confined to either the biological or the digital sphere of influence, classified and kept solely within the military, or exchanged only among a very small set of investigators.
What we need are more opportunities for integration between the two disciplines. We need to share information and experiences, classified and unclassified. Between our digital and biological communities, we already have the tools to identify and mitigate biological risks, and the tools to write and deploy secure computer systems.
I’m optimistic about the future of genetic engineering, but I still cringe when I see some ‘bio-hacker’ inject themselves with a home-brewed cocktail of gene fragments that they think will improve their genome but that is more likely to do nothing or make them sick. I get the same feeling when I see someone stick a flash drive into the USB port of some random public terminal. I hope they’re going to practice good data hygiene and quarantine that widget before they put it in their work computer! (They probably won’t.)