
We need a sociologist of science…or a philosopher

There’s another paper out debunking the ENCODE consortium’s absurd interpretation of their data. ENCODE, you may recall, published a rather controversial paper in which they claimed to have found that 80% of the human genome was ‘functional’ — for an extraordinarily loose definition of function — and further revealed that several of the project leaders were working with the peculiar assumption that 100% must be functional. It was a godawful mess, and compromised the value of a huge investment in big science.

Now W. Ford Doolittle has joined the ranks of many scientists who immediately leapt into the argument. He has published “Is junk DNA bunk? A critique of ENCODE” in PNAS.

Do data from the Encyclopedia Of DNA Elements (ENCODE) project render the notion of junk DNA obsolete? Here, I review older arguments for junk grounded in the C-value paradox and propose a thought experiment to challenge ENCODE’s ontology. Specifically, what would we expect for the number of functional elements (as ENCODE defines them) in genomes much larger than our own genome? If the number were to stay more or less constant, it would seem sensible to consider the rest of the DNA of larger genomes to be junk or, at least, assign it a different sort of role (structural rather than informational). If, however, the number of functional elements were to rise significantly with C-value then, (i) organisms with genomes larger than our genome are more complex phenotypically than we are, (ii) ENCODE’s definition of functional element identifies many sites that would not be considered functional or phenotype-determining by standard uses in biology, or (iii) the same phenotypic functions are often determined in a more diffuse fashion in larger-genomed organisms. Good cases can be made for propositions ii and iii. A larger theoretical framework, embracing informational and structural roles for DNA, neutral as well as adaptive causes of complexity, and selection as a multilevel phenomenon, is needed.

In the paper, he makes an argument similar to one T. Ryan Gregory has made many times before. There are organisms that have much larger genomes than humans; lungfish, for example, have 130 billion base pairs, compared to the 3 billion humans have. If the ENCODE consortium had studied lungfish instead, would they still be arguing that the organism had function for 104 billion bases (80% of 130 billion)? Or would they be suggesting that yes, lungfish were full of junk DNA?

If they claim that lungfish that lungfish have 44 times as much functional sequence as we do, well, what is it doing? Does that imply that lungfish are far more phenotypically complex than we are? And if they grant that junk DNA exists in great abundance in some species, just not in ours, does that imply that we’re somehow sitting in the perfect sweet spot of genetic optimality? If that’s the case, what about species like fugu, that have genomes one eighth the size of ours?
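The arithmetic behind that comparison is worth making explicit. A minimal sketch, using the approximate genome sizes quoted above and ENCODE's claimed 80% figure:

```python
# Back-of-the-envelope check of the lungfish comparison, using the approximate
# genome sizes given in the post (in base pairs) and ENCODE's claimed
# "functional" fraction.
ENCODE_FRACTION = 0.8

genome_sizes = {
    "human": 3e9,        # ~3 billion bp
    "lungfish": 130e9,   # ~130 billion bp
    "fugu": 3e9 / 8,     # roughly one eighth of the human genome
}

for species, size in genome_sizes.items():
    functional_bp = size * ENCODE_FRACTION
    print(f"{species}: {functional_bp / 1e9:.1f} billion 'functional' bp")

# If the 80% applied uniformly, the ratio of "functional" sequence would just
# be the ratio of genome sizes:
ratio = genome_sizes["lungfish"] / genome_sizes["human"]
print(f"lungfish/human: {ratio:.1f}x")  # ~43.3x, which rounds to the 44x above
```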

It’s really a devastating argument, but then, all of the arguments against ENCODE’s interpretations have been solid and knock the whole thing out of the park. It’s been solidly demonstrated that the conclusions of the ENCODE program were shit.

[Image: cover of Yale Medicine, “Junk No More”]

So why, Yale, why? The Winter edition of Yale Medicine magazine features as its cover article “Junk No More,” an awful piece of PR fluff that announces in its first line “R.I.P., junk DNA” and goes on to tout the same nonsense that every paper published since the ENCODE announcement has refuted.

The consortium found biological activity in 80 percent of the genome and identified about 4 million sites that play a role in regulating genes. Some noncoding sections, as had long been known, regulate genes. Some noncoding regions bind regulatory proteins, while others code for strands of RNA that regulate gene expression. Yale scientists, who played a key role in this project, also found “fossils,” genes that date to our nonhuman ancestors and may still have a function. Mark B. Gerstein, Ph.D., the Albert L. Williams Professor of Biomedical Informatics and professor of molecular biophysics and biochemistry, and computer science, led a team that unraveled the network of connections between coding and noncoding sections of the genome.

Arguably the project’s greatest achievement is the repository of new information that will give scientists a stronger grasp of human biology and disease, and pave the way for novel medical treatments. Once verified for accuracy, the data sets generated by the project are posted on the Internet, available to anyone. Even before the project’s September announcement, more than 150 scientists not connected to ENCODE had used its data in their research.

“We’ve come a long way,” said Ewan Birney, Ph.D., of the European Bioinformatics Institute (EBI) in the United Kingdom, lead analysis coordinator for ENCODE. “By carefully piecing together a simply staggering variety of data, we’ve shown that the human genome is simply alive with switches, turning our genes on and off and controlling when and where proteins are produced. ENCODE has taken our knowledge of the genome to the next level, and all of that knowledge is being shared openly.”

Oh, Christ. Not only is it claiming that the 80% figure is for biological activity (it isn’t), but it trots out the usual university press relations crap about how the study is all about medicine. It wasn’t and isn’t. It’s just that dumbasses can only think of one way to explain biological research to the public, and that is to suggest that it will cure cancer.

As for Birney’s remarks, they are offensively ignorant. No, the ENCODE research did not show that the human genome is actively regulated. We’ve known that for fifty years.

That’s not the only ahistorical part of the article. They also claim that the idea of junk DNA has been discredited for years.

Some early press coverage credited ENCODE with discovering that so-called junk DNA has a function, but that was old news. The term had been floating around since the 1990s and suggested that the bulk of noncoding DNA serves no purpose; however, articles in scholarly journals had reported for decades that DNA in these “junk” regions does play a regulatory role. In a 2007 issue of Genome Research, Gerstein had suggested that the ENCODE project might prompt a new definition of what a gene is, based on “the discrepancy between our previous protein-centric view of the gene and one that is revealed by the extensive transcriptional activity of the genome.” Researchers had known for some time that the noncoding regions are alive with activity. ENCODE demonstrated just how much action there is and defined what is happening in 80 percent of the genome. That is not to say that 80 percent was found to have a regulatory function, only that some biochemical activity is going on. The space between genes was also found to contain sites where DNA transcription into RNA begins and areas that encode RNA transcripts that might have regulatory roles even though they are not translated into proteins.

I swear, I’m reading this article and finding it indistinguishable from the kind of bad science I’d see from ICR or Answers in Genesis.

I have to mention one other revelation from the article. There has been a tendency to throw a lot of the blame for the inane 80% number on Ewan Birney alone…he threw in that interpretation in the lead paper, but it wasn’t endorsed by every participant in the project. But look at this:

The day in September that the news embargo on the ENCODE project’s findings was lifted, Gerstein saw an article about the project in The New York Times on his smartphone. There was a problem. A graphic hadn’t been reproduced accurately. “I was just so panicked,” he recalled. “I was literally walking around Sterling Hall of Medicine between meetings talking with The Times on the phone.” He finally reached a graphics editor who fixed it.

So Gerstein was so concerned about accuracy that he panicked over a graphic in the popular press, yet the big claim in the Birney paper, the one that would utterly undermine confidence in the whole body of work, did not perturb him? And now, months later, he’s collaborating with the Yale PR department on a puff piece that blithely sails past all the objections people have raised? Remarkable.

This is what boggles my mind, and why I hope some sociologist of science is studying this whole process right now. It’s a revealing peek at the politics and culture of science. We have a body of very well funded, high ranking scientists working at prestigious institutions who are actively and obviously fitting the data to a set of unworkable theoretical presuppositions, and completely ignoring the rebuttals that are appearing at a rapid clip. The idea that the entirety of the genome is both functional and adaptive is untenable and unsupportable; we instead have hundreds of scientists who have been bamboozled into treating noise as evidence of function. It’s looking like N rays or polywater on a large and extremely richly budgeted level. And it’s going on right now.

If we can’t have a sociologist making an academic study of it all, can we at least have a science journalist writing a book about it? This stuff is fascinating.

I have my own explanation for what is going on. What I think we’re seeing is an emerging clash between scientists and technicians. I’ve seen a lot of biomedical grad students going through training in pushing buttons and running gels and sucking numerical data out of machines, and we’ve got the tools to generate so much data right now that we need people who can manage that. But it’s not science. It’s technology. There’s a difference.

A scientist has to be able to think about the data they’re generating, put it into a larger context, and ask the kinds of questions that probe deeper than a superficial analysis can deliver. A scientist has to be more broadly trained than the person who runs the gadgetry.

This might get me burned at the stake worse than sneering at ENCODE, but a good scientist has to be…a philosopher. They may not have formal training in philosophy, but the good ones have to be at least roughly intuitive natural philosophers (ooh, I’ve heard that phrase somewhere before). If I were designing a biology curriculum today, I’d want to make at least some basic introduction to the philosophy of science an essential and early part of the training.

I know, I’m going against the grain — there have been a lot of big name scientists who openly dismiss philosophy. Richard Feynman, for instance, said “Philosophy of science is about as useful to scientists as ornithology is to birds.” But Feynman was wrong, and ironically so. Reading Feynman is actually like reading philosophy — a strange kind of philosophy that squirms and wiggles trying to avoid the hated label, but it’s still philosophy.

I think the conflict arises because, like everything, 90% of philosophy is garbage, and scientists don’t want to be associated with a lot of the masturbatory nonsense some philosophers pump out. But let’s not lose sight of the fact that some science, like ENCODE, is nonsense, too — and the quantity of garbage is only going to rise if we don’t pay attention to understanding as much as we do accumulating data. We need the input of philosophy.

Comments

  1. Reginald Selkirk says

    If they claim that lungfish that lungfish have 44 times as much…

    A duplication error! But is it functional?

  2. kevinalexander says

    I am not a biologist but even with my rudimentary understanding it only took a minute to realize that junk DNA is what you would expect. In order for there not to be a lot of junk there would have to be some way for the machinery to recognize transcription errors so that they could be fixed or replaced. How could the machinery tell? What has it got to compare the string to?

  3. says

    This is interesting. I’ve thought for a while that we’ve been generating a lot of confusion by using one word — “scientist” — to actually describe two things. We can say scientist like, “this person’s job is to practice science”; specifically, making chemicals at the chemical company or designing jet engines or something. But we can also say scientist like, “this person has a philosophical commitment to the scientific method.”

    It’s my opinion that the Method is the actual important part of science, and that if you were genuinely practicing science, as opposed to “having the job of science”, then you’d be stuck recognizing that the answers that the scientific method provides are actually incidental.

    (I guess that’s a weird thing to say, isn’t it? But it’s true; the scientific method doesn’t care whether or not there’s such a thing as junk DNA, it just cares that you’ve applied the correct analysis to it. “Science” as a field of philosophical endeavour is indifferent to your conclusions, an experiment that disproves your conclusions is as successful and worthwhile as one that proved them. But “science” as a field of professional endeavour is NOT indifferent to conclusions, if you spend ten years and thirty million dollars to prove something, you’d better bloody well prove it by the end.)

  4. Nerd of Redhead, Dances OM Trolls says

    Philosophy? Only if those doing it are required to use the Rod of Reality Check to make sure they are grounded in the facts.

  5. Antiochus Epiphanes says

    1. Shut up Nerd.
    2. As often as I’ve backed the importance of philosophy to the practical conduct of science, in this instance I have to disagree with PZ. What we need is an empirical approach, and we’ve already gathered and analyzed the data. Every time we are unable to reject the drift-mutation model as an explanation for the distribution of alleles in populations, we falsify the hypothesis that the loci under study have any functional significance at all.

    Done and done.

  6. Antiochus Epiphanes says

    And I realize that our empirical approach (science!) derives from sound philosophical principles. But for the most part we have already worked out enough of this to arrive at a reasonable inference in regard to ENCODE, so no new philosophy is needed. And really, no new data needed either.

  7. sharkjack says

    @2 kevinalexander

    there would have to be some way for the machinery to recognize transcription errors so that they could be fixed or replaced. How could the machinery tell? What has it got to compare the string to?

    There’s actually multiple things to compare it to, the first of which is that each nucleotide only really fits properly with one other nucleotide. This means incorrect pairings can be recognised and the sections of DNA containing them can be removed. DNA is double stranded, after all, and only one side is added onto at any point in time.

    There’s also homologous recombination, which allows additional mistakes to be repaired: other enzymes first recognise the mistake, the base pairs there are removed, and then the single strand invades the other copy of the same chromosome and is elongated there.

    And there’s your answer: there are two strings to compare to, the complementary strand and the other copy of the same chromosome. It’s not a perfect mechanism, but it does take out lots of errors.

    My problem with trying to say DNA is functional is that it tries so hard to box DNA segments into separate and distinct functions. Like, this bit here does this and only this; that part there does that and only that. When viewing biology in such a way, it instinctively makes sense that ‘junk DNA’ has to have some function, otherwise why have it there in the first place? Bacteria can have super fast reproduction cycles, so copying that DNA every time adds up when you think of it in exponential terms. Yet bacteria have ‘junk DNA’ too. DNA is lost all the time in reproduction, so it’s not like you’d have to have new enzymes to specifically remove stuff. Yet the DNA remains. What people often forget is that DNA isn’t just code; it’s physically and chemically present. The model of DNA as code with functions is an incredibly useful shortcut when dealing with specific genes and specific mutations, but it’s not a good way to understand the genome as a whole.
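    The two-string comparison sharkjack describes in #7 can be sketched as a toy model. This is illustration only, not real biochemistry (the sequences and the helper function are invented); it just shows why complementary base pairing makes copying errors detectable:

```python
# Toy model of the first comparison described in comment #7: each base pairs
# with exactly one partner (A-T, G-C), so a newly copied strand can be checked
# against its template strand. Real proofreading is enzymatic, not string
# comparison; the sequences here are invented for illustration.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def mismatch_positions(template, copy):
    """Indices where the copy fails to base-pair with the template."""
    return [i for i, (t, c) in enumerate(zip(template, copy)) if PAIR[t] != c]

print(mismatch_positions("ATGCGT", "TACGCA"))  # [] -- a correct complement
print(mismatch_positions("ATGCGT", "TACGCT"))  # [5] -- error at position 5
```

    The redundancy is the point: because the second strand is determined by the first, any position where the two disagree must contain an error.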

  8. says

    We already have buckets of data from ENCODE. They come to the wrong conclusion despite taking a solid empirical approach.

    That’s my point: that next step of inferring an explanation from the data is logic and philosophy. That’s where ENCODE failed, in not recognizing the importance of that step, doing a synthesis with what we already know.

    I’m not asking for a new philosophy. The good ol’ standard kind is just fine.

  9. says

    In order for there not to be a lot of junk there would have to be some way for the machinery to recognize transcription errors so that they could be fixed or replaced.

    Exactly. And they assume they’ve got the ultimate, complete mechanism: it’s called “natural selection”.

    The ENCODE consortium leaders are panselectionists, through and through. If it’s there, it must have a function that has been selected for, therefore they interpret noisy transcription as signal.

  10. Maureen Brian says

    You mean you don’t have philosophers of science? We have armies of them. I was taught by a couple of good ones – Donald Cardwell, now dead, obituary at – http://www.independent.co.uk/news/obituaries/obituary-professor-donald-cardwell-1157385.html – and Jerry Ravetz, born in the US, a victim of McCarthy who fortunately escaped over here to teach me in 1960-61 and to write books which I don’t fully understand.

    You have some catching up to do, PZ!

  11. Antiochus Epiphanes says

    But we had buckets of data before ENCODE, and we knew with near certainty that their claim was false. I mean, shit. Since the conception of the coalescent, like thirty years ago, their claim has been rejected again and again, almost as a matter of routine.

    In this instance we need less philosophy, and a short, sober look at what has already been demonstrated.

  12. says

    The term “philosophy” encompasses a lot of different kinds of cogitation. Much of mathematics, and all of logic, is usually deemed to be within the domain of philosophy. PZ isn’t asking for a metaphysician or an ethicist, he’s asking for scientists to be competent logicians and epistemologists. In other words, understand how we make valid inferences and build up our picture of the universe. In a nutshell, it’s Bayes’s world, we just live in it, is my two cents.

  13. robotczar says

    We definitely don’t need a philosophy of science. Feynman is not alone, Dawkins, for one, is against the idea. Philosophy is based in the humanities perspective of knowing, not the scientific (which knows via empiricism). Philosophers are basically debaters, like all in the humanities, without a basis to test to see which assertion is “correct”. If you think about it, philosophy has never given us anything of practical value or even provided an answer to a question. So, it cannot provide an answer to how to do better science. If anything is needed to create better scientists, a course in logic or scientific reasoning might be. But, I doubt if that will increase the overall quality of science. Quit looking for educational answers to all of our problems.

  14. Antiochus Epiphanes says

    So, it cannot provide an answer for how to do better science.

    You’re out of your element, Donny.

  15. golkarian says

    I agree completely about philosophy, and if you think about it, special relativity was a way of explaining a contradiction in science (the principle of relativity contradicted the constant speed of light), and therefore initially philosophy. But eventually it could be tested with science.

    But philosophy and philosophers are different, for the sake of both science and philosophy I wish it wasn’t so. It seems like a lot of people educated in philosophy try it because they feel uncomfortable with the conclusions of science and want to be “above” it.

  16. martinhafner says

    John Mattick is unhappy that ENCODE didn’t go even further:

    The ENCODE project looked at things that have been on the table for years, but it’s nice to get some extra detail. Unfortunately, many still seem to cling to the notion that most genome biology in humans is driven by proteins. ENCODE is curiously silent about the implications of the massive transcription of RNA and the signatures of functional organization across these non-coding regions, preferring perhaps to duck the question of whether it is all relevant or largely “transcriptional” noise.

    The intellectual and cultural problem is that if this non-coding RNA is functional—and all the emerging evidence points in this direction—the entire conception of gene regulation has to be reconstructed. The field has assumed for a long time that protein regulators, transcription factors of various sorts, drive the regulation of the system. But now we have to figure massive amounts of regulatory RNA into our understanding. Transcription factors are very powerful stage-specific effectors of gene expression, but my feeling is that much more information is required to supervise architectural organization—the shapes and positions of different muscles, bone and organs.

    Trusting gut feelings rather than established knowledge appears appropriate if you mistake noise for signal.

  17. profpedant says

    Another way to look at it is that Science needs good story-tellers. kevinalexander at Comment #2 identified one reason why the Encode story does not make sense. W. Ford Doolittle identified another reason why the Encode story does not make sense. Good story telling is not a matter of emotional stimulation or momentary fascination, good story telling is the product of a comprehensive and comprehensible explanation of the known facts. If Encode was right then the points that kevinalexander and W. Ford Doolittle raised would reinforce Encode’s analysis of its results instead of showing that analysis for the idiot mistake that it is.

  18. says

    We definitely don’t need a philosophy of science. Feynman is not alone, Dawkins, for one, is against the idea.

    Well, if Dawkins is against it……

    Philosophy is based in the humanities perspective of knowing, not the scientific (which knows via empiricism).

    But how do you know science knows via empiricism?

    Philosophers are basically debaters, like all in the humanities, without a basis to test to see which assertion is “correct”. If you think about it, philosophy has never given us anything of practical value or even provided an answer to a question.

    Yeah, it definitely wasn’t philosophers who came up with that empiricism thing you were so pumped about a minute ago.

    So, it cannot provide an answer to how to do better science. If anything is needed to create better scientists, a course in logic or scientific reasoning might be.

    Yeah, logic, y’know, that thing philosophers came up with and formalized.

  19. says

    Yeah, logic, y’know, that thing philosophers came up with and formalized.

    That’s part of the 90% of philosophy that’s bunk. Other stuff, like falsifiability, too. Oh, and skepticism.

  20. robotczar says

    Science knows by empiricism by definition of science. If you are asking a more metaphysical question about how anybody knows anything, then I will leave that to the philosophers and post-modernists, that is right up their alley–an endless debate without evidence.

    Philosophers do use logic (sometimes), but logic is insufficient to answer most questions. For example, it can’t tell us if our premises are true. Also, some logic is inductive, which cannot give us definitive answers and thus cannot provide the intellectual security we might like. Empirical evidence, which is also not definitive inductively, does provide us with at least some security (i.e., evidence) that our inferences are correct.

  21. Asher Kay says

    a good scientist has to be…a philosopher.

    Another way of saying this is that a scientist is a philosopher whether she likes it or not. So she has an obligation to be a good one.

    Philosophy has its own clash right now — between people who want to defend the dwindling territory of philosophy from scientists (people like McGinn and Nagel), and scientists who want to push the boundaries of natural philosophy (people like Deacon and the Churchlands). So I wouldn’t expect Philosophy itself to pick up the ball.

  22. says

    Science knows by empiricism by definition of science.

    I’m going to ignore the circularity here and simply note that science was defined, and empiricism constructed, by philosophers. And yet you continue to tout the wonders of science while claiming that philosophy has accomplished nothing useful.

    Do you have the same antipathy towards pure mathematics as you do towards philosophy? Because mathematics rests on the same apparently shaky grounds of a priori logical deduction.

  23. yubal says

    Some noncoding regions bind regulatory proteins, while others code for strands of RNA that regulate gene expression.

    Well, if a region of DNA encodes an RNA molecule, it is not a non-coding region. Nobody said it has to be mRNA to be biologically relevant. See tRNA/rRNA. And yes, it is true that we do not understand a couple of RNA functions right now. RNA is very interesting.

  24. robotczar says

    Can you say why you think that philosophers “constructed” empiricism? Some people invented empiricism. Early empiricists (i.e., scientists) were called natural philosophers. So what? Even by deductive logic, “some philosophers are scientists” does not make “all philosophers scientists”. These days they are quite distinct (though some who call themselves philosophers may be using empirical evidence). Who or what group “constructed” empiricism doesn’t matter at all. There is a clear distinction in knowing by empiricism and knowing by argument alone. Do you agree? If you want to argue semantics, then you are doing philosophy, not science.

  25. daniellavine says

    robotczar@14:

    We definitely don’t need a philosophy of science. Feynman is not alone, Dawkins, for one, is against the idea. Philosophy is based in the humanities perspective of knowing, not the scientific (which knows via empiricism). Philosophers are basically debaters, like all in the humanities, without a basis to test to see which assertion is “correct”.

    I’m sorry. This is incredibly ignorant. Philosophers of science have made some incredible contributions to the theory of knowledge and to understanding why the scientific method works as well as it does, and in many cases providing refinements to the method. Science today is not science as it was practiced by Newton and that is in no small part due to philosophy of science.

    @21:

    Science knows by empiricism by definition of science. If you are asking a more metaphysical question about how anybody knows anything, then I will leave that to the philosophers and post-modernists, that is right up their alley–an endless debate without evidence.

    You really are out of your element. “Empiricism” is a metaphysical concept. Go ahead and read up on it. If you want to build the castle of science on empiricism then maybe you should read some of the philosophical work on the subject to figure out whether you’re building on rock or sand.

    The answer is probably more like clay but you don’t have any idea why that is.

  26. says

    @ 25

    Even by deductive logic, “some philosophers are scientists” does not make “all philosophers scientists”.

    I never claimed that. I’ll try to make this as simple as possible:

    (1) You claim that philosophy has produced nothing useful
    (2) You approve highly of empiricism
    (3) Empiricism is a philosophical position originally invented and espoused by philosophers

    These three statements are not compatible. I’m not arguing that all philosophy is useful, or that all philosophers produce useful things, that’s certainly not true (and also certainly not true of scientists.) But you made a universal claim, namely, that philosophy has *never* given us anything useful. I provided a counterexample.

  27. says

    Snort.. Short version of the debate:

    Scientists: There is a lot of noise in DNA, but we are looking at the stuff that is actually signal.
    ENCODE: There is way too much noise for it to be noise, it must all be signal!

    The second one is, according to a book I recently finished reading, the leading cause of failed predictions everywhere: earthquakes (we can’t even find a clear signal pattern), economics (lots of theories, including the “self regulation” model, which dance happily along while ignoring all other variables in the system), and politics (where the predictive quality is complete crap, in most cases, right up until a few days before an election), as well as any other situation where you have masses and masses of data and you either don’t know what to look at to find the signal in the noise, or you are looking in the wrong place. These guys are not even looking; they are just throwing it all in a bucket, labeling it “signal”, and ignoring the reality that *everything* in nature generates stuff that is either worthless for making predictions, left over from other events, or, in the worst cases, only useful if you have another universe-sized computer to simulate things in (which wouldn’t do anything but show you which stuff is noise anyway, if it is in fact “byproduct”).

    They might as well be studying tea leaves, as a means of predicting sun spots.

  28. robotczar says

    You know, if you call someone ignorant you need to do more than give your opinion. Can you like give some examples to cure my ignorance? I know I am asking for argument or evidence instead of opinion and that might be to scientific-like for your philosophy, but you need to support your assertions, especially if you want to call someone incredibly ignorant. I assure you I am not. So, now we have presented arguments (actually opinions) of equal weight.

    Your opinion is also not sufficient to support your assertion that empiricism is metaphysical. Neither is telling me to read up on it. You apparently have been reading a lot of philosophy. I don’t wish to waste my time any more on that. I just said it is a debate club where semantics rule. You are supporting that idea by wanting to claim that empiricism is metaphysical but even philosophers would offer an argument.

  29. alwayscurious says

    Wow, this guy’s awesome! Two of my favorite quotes from Doolittle:

    in reference to the widening meaning of “regulation”:
    “Pacemakers regulate heartbeats and that is their function: tasers and caffeine also affect cardiac rhythm, but we would not (at least in the former case) see this as regulatory function.”

    in reference to the “necessary” length vs. observed length (e.g. intron regulatory sequences):
    “My computer might be 5 ft from the wall socket, but if I have only a 10-ft electrical cord all 10 ft will seem functional, because cutting the cord anywhere will turn off my machine. In this connection, note that much more than one-half of 80.4% of the human genome that ENCODE deems functional is so considered because it is transcribed”

  30. Ichthyic says

    We need a sociologist of science…or a philosopher

    says PZ, a scientist, who managed to figure out what was obviously wrong with ENCODE without being a philosopher or sociologist.

    This very article stands as proof against concept.

  31. Ichthyic says

    …compare this analysis of ENCODE’s error to the one you published just last week on the article using what amounts to numerology to claim there is an intelligent signature in the genetic code.

    that article did not make any errors in calculation, as with most numerology efforts.

    yet, there too it did not take a philosopher or sociologist to see what was wrong with it. Just someone who wasn’t as willfully ignorant about genetics in general.

    likewise here, ENCODE’s problem is a failure of assumptions based on willful ignorance of how genetics applies across species. Has nothing to do with philosophy.

  32. daniellavine says

    robotczar@29:

    You know, if you call someone ignorant you need to do more than give your opinion. Can you like give some examples to cure my ignorance? I know I am asking for argument or evidence instead of opinion and that might be to scientific-like for your philosophy, but you need to support your assertions, especially if you want to call someone incredibly ignorant. I assure you I am not. So, now we have presented arguments (actually opinions) of equal weight.

    Don’t worry, friend, I know more about both science and philosophy than you do. So you don’t have to worry about anything being “to [sic] scientific-like” for me. I’ll note, though, that you’ve provided neither evidence nor argument to support your contention that philosophy has produced nothing of value nor your claim that empiricism is not a metaphysical concept. Science for me but not for thee?

    Your opinion is also not sufficient to support your assertion that empiricism is metaphysical. Neither is telling me to read up on it. You apparently have been reading a lot of philosophy. I don’t wish to waste my time any more on that. I just said it is a debate club where semantics rule. You are supporting that idea by wanting to claim that empiricism is metaphysical but even philosophers would offer a argument.

    You want me to support my position that empiricism is a metaphysical concept. You want me to “cure your ignorance”. But you’re not willing to “waste your time any more on that”? Does not compute.

    Here is part of the wikipedia article for “metaphysics”:

    The metaphysician attempts to clarify the fundamental notions by which people understand the world, e.g., existence, objects and their properties, space and time, cause and effect, and possibility. A central branch of metaphysics is ontology, the investigation into the basic categories of being and how they relate to each other. Another central branch of metaphysics is cosmology, the study of the totality of all phenomena within the universe.

    I argue here that “empiricism” is one of the fundamental notions by which people understand the world. It would then, by this definition, quite clearly constitute a metaphysical concept. And the concept of empiricism was indeed constructed through philosophical argumentation. Examples: Hume’s “An Enquiry Concerning Human Understanding”, Popper’s “Science as Falsificationism”, Quine’s “Two Dogmas of Empiricism”.

    Thus I have provided both argument and evidence that empiricism is a metaphysical concept and you have provided neither. Care to offer any argument or evidence of your own, tough guy?

  33. robotczar says

    If you care to hear what scientists have to say, you don’t even have to read up on it, though that might be a good idea.

    Play the following clip from about 1:03:50.

  34. daniellavine says

    robotczar@34:

    That is what two scientists have to say about it. Can you accept that their thinking on it is not necessarily definitive?

    That said, I don’t entirely disagree with these gentlemen. I think PZ has it right in the OP. 90% of philosophy is crap but the stuff in the remaining 10% is incredibly important.

  35. daniellavine says

    @robotczar:

    You should take into account the fact that Daniel Dennett and Victor Stenger are philosophers and that the majority of academic philosophers are atheists.

  36. says

    We definitely don’t need a philosophy of science.

    lol. given that the idea of falsification comes out of philosophy of science, that’s an impressively dense thing to say.

    Philosophy is based in the humanities perspective of knowing

    dude, there aren’t “different ways of knowing”. wtf is this shit.

    Philosophers are basically debaters, like all in the humanities, without a basis to test to see which assertion is “correct”.

    honeycakes, I don’t know who taught you science, but you don’t do experiments to find out which answer is correct; you do experiments to weed out incorrect ones.

    If you think about it, philosophy has never given us anything of practical value or even provided an answer to a question. So, it cannot provide an answer to how to do better science.

    again: falsification is a concept that came from the philosophy of science. you are factually wrong here.

    If anything is needed to create better scientists, a course in logic

    HAHAHAHAHA
    dude, logic is part of philosophy.

    Even by deductive logic, “some philosophers are scientists” does not make “all philosophers scientists”.

    honeycakes, this is called a strawman, since no one claimed all philosophers are scientists.

    Who or what group “constructed” empiricism doesn’t matter at all.

    of course it matters, since it directly contradicts your silly claim that philosophy has never contributed anything; given that empiricism, falsification, and logic all are outgrowths of philosophy, you’re wrong.

    If you want to argue semantics, then you are doing philosophy, not science.

    actually, if you’re arguing semantics, you’re doing linguistics.

    Can you like give some examples to cure my ignorance?

    already accomplished.

    I know I am asking for argument or evidence instead of opinion and that might be to scientific-like for your philosophy,

    that’s adorable, given that you’re the one who came in here with unsupported opinion contrary to available evidence.

    Neither is telling me to read up on it. You apparently have been reading a lot of philosophy. I don’t wish to waste my time any more on that.

    you know what that’s called? willful ignorance. have fun with it.

    If you care to hear what scientists have to say, you don’t even have to read up on it, though that might be a good idea.

    whence the assumption that the people disagreeing with you are not scientists? or do you think “scientists” are people who work with science whom you’ve seen on TV?

  37. says

    I argue here that “empiricism” is one of the fundamental notions by which people understand the world. It would then, by this definition, quite clearly constitute a metaphysical concept. And the concept of empiricism was indeed constructed through philosophical argumentation. Examples: Hume’s “An Enquiry Concerning Human Understanding”, Popper’s “Science as Falsificationism”, Quine’s “Two Dogmas of Empiricism”

    Don’t forget Locke’s Essay Concerning Human Understanding.

  38. dexitroboper says

    given that the idea of falsification comes out of philosophy of science, that’s an impressively dense thing to say.

    Practising scientists don’t give a rat’s about falsification. Read a bunch of science papers and nowhere does anyone try and use falsification.

  39. daniellavine says

    dexitroboper@41:

    Practising scientists don’t give a rat’s about falsification. Read a bunch of science papers and nowhere does anyone try and use falsification.

    True, and even philosophers of science don’t talk about falsification very much any more. One of the reasons for this is progress in the analysis of science within philosophy of science. I only provided a few examples; I didn’t show robotczar the good stuff.

  40. broboxley OT says

    Anecdotal: as a tech working in the bowels of a lab for a large pharma company in the late 1990s, I met a lot of scientists who were excellent technicians. Some of the true scientists were very happy to explain the science behind what they were looking for, and were interested in explaining the science as well as politely listening to suggestions and even adopting a few of them.

    Having read many articles about ENCODE, it seems to me that the usefulness of the signal/noise needs to be determined by mapping which chemical reactions would turn these elements on, so to speak, and then mapping what potential usefulness they may have.

  41. Azkyroth Drinked the Grammar Too :) says

    …just think, maybe scientists doing philosophy more might be useful for pruning away some of that same masturbatory crap.

  42. John Morales says

    [OT]

    Azkyroth:

    …just think, maybe scientists doing philosophy more might be useful for pruning away some of that same masturbatory crap.

    Just think: if so, it’s the philosophy being done which prunes away some of that same masturbatory crap.

    (You admit it takes philosophy and not science to correct philosophy)

  43. GodotIsWaiting4U says

    I’ll agree with the claim that 90% of philosophy turns out to be masturbatory crap (I do feel like the “like everything” qualifier is important, if only to remind everyone that philosophy is not uniquely junk-filled). I say this as an actual philosophy major, just about to get my bachelor’s in it. And I do see a lot of my fellow students in the field who seem to sneer at science a bit (to me it looks like a knee-jerk reaction to perceived sneering at philosophy that comes from…pretty much everyone else, actually). The field is poisoned with people who don’t get it: science is philosophy’s most successful child, and every philosopher should love science for the same reason every good parent loves their children. The way I see it, the two fields SHOULD be on much better terms than they are, since they’re working towards similar goals with basically the same methods. We should be seeing both fields offering well-intentioned correction and ideas to one another.

    From a certain perspective, science is still a branch of philosophy, albeit a very BIG branch: it uses the same basic ideas, with a given set of assumptions (for example, the assumption that empirical observation is a reliable way to gain knowledge). Philosophy of science just works on checking and re-checking the reasoning behind those assumptions (since you can’t very well empirically test the validity of empiricism; you need empiricism to already work to do it) to make sure that the job is being done right. Philosophy’s just dealing with questions that are harder to figure out. You get peer review in academic philosophy too; it’s just that since you can’t empirically test the claims, you have to sift through the reasoning and find the problem that way, which is a great deal harder and means it takes a lot longer to overturn something, let alone overturn it decisively.

  44. yubal says

    It always confused me a little when people talked about “junk DNA”. Non-coding DNA is a less confusing term, but nevertheless misleading. Just the word “non-coding”, and also the name ENCODE, sort of dictate that the investigation will be about the coding capacity of DNA. While it is true that DNA codes for all the relevant macromolecules (RNA/protein) and has little other chemical function, I doubt that this is the whole picture, because there is also a physical role that has biological relevance. Let us talk structure for a moment and forget about transcription and genes.

    DNA is a polymer made from biological building blocks. It does exactly what polymers are supposed to do. As a single strand it adopts random conformations in solution; as a double helix it behaves much more rigidly. Until recently, double-stranded DNA was assumed to follow the worm-like chain model, which makes a couple of predictions. One of them is that DNA below ~150 base pairs becomes stiff. In other words, DNA shorter than this persistence length of ~150 bp cannot be pulled into a complete loop anymore. This has been shown to be wrong (DOI: 10.1126/science.1224139). DNA can bend back onto itself in far shorter stretches, even in the absence of proteins.
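
The stiffness numbers in this comment can be sanity-checked in a few lines. This sketch is not from the original comment: the 0.34 nm/bp rise and ~50 nm (~150 bp) persistence length are standard textbook values, and the threshold function is only a crude illustration of the classical worm-like chain intuition, not a real loop-probability calculation:

```python
# Back-of-the-envelope check of the worm-like chain stiffness argument.
# Textbook values: dsDNA rises ~0.34 nm per base pair, and its
# persistence length is ~50 nm (i.e. roughly 150 bp).
RISE_NM_PER_BP = 0.34
PERSISTENCE_LENGTH_NM = 50.0

def contour_length_nm(bp):
    """Contour length of a dsDNA segment of `bp` base pairs."""
    return bp * RISE_NM_PER_BP

def looks_loopable(bp):
    """Crude classical intuition: a segment much shorter than the
    persistence length is too stiff to bend into a complete loop."""
    return contour_length_nm(bp) >= PERSISTENCE_LENGTH_NM

print(looks_loopable(100))  # False: ~34 nm of DNA, classically "too stiff"
print(looks_loopable(300))  # True: ~102 nm, bendable by the classical model
```

By this classical rule a ~100 bp stretch could never loop, which is exactly the expectation the cited Science paper showed to be wrong.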

    Why is that relevant for the existence of non-coding DNA? Simple. When coding regions are read out in the cell, the DNA polymer has to adopt suitable conformations. Think topology for a moment and keep that thought. If DNA is really that bendable, it can arrange itself into conformations on a local scale that re-modulate the polymer on a larger scale. This property also depends on the content of A/T versus G/C in the particular region, G/C being more rigid.

    Genes downstream of a certain signaling event are scattered around the genome and have to be read out simultaneously, and also together with other genes that are constitutively open or part of the same or another signaling cascade. Assembly of the required protein complexes onto the DNA could lead to severe clashes. There are at least two ways to bypass this situation: A) having a protein machinery that is insensitive to the steric problems the DNA has on a global scale, or compensates for them; or B) having a DNA spacing length that allows a topology permitting the simultaneous expression of several genes.

    The most important factor is the time scale on which the DNA can arrange into the required topology. More flexible DNA (see above) and a suitable topology of protein-DNA complexes give the cell an edge on the nano- to microsecond timescale that can be crucial for survival. This would be the putative selection criterion. How can the cell react to signals without cutting down other responses?

    On the other hand, a slightly less suitable topology can prevent a highly controlled gene from over-expression. The cost of enlarged genomes is actually not as huge as one would think. Mutations in the DNA-reading protein machinery have to compete against incorporations/deletions of non-coding DNA that benefit the current protein machinery and its topology on the DNA.

    It is quite difficult to imagine what it looks like on the molecular level when a cell is under stress, but those rare events are a massive accumulation of selection pressures on the molecular machinery of the cell. An interpretation like this might be helpful in unraveling the C-value enigma (why some species have rather huge and others rather tiny genomes). The DNA size and the positions of the genes within the genome co-evolve with the proteins that maintain the genome and read out genetic information. When having a protein machinery that requires more space between clusters of genes you end up quite rapidly with huge genomes.

    So yes, please stop looking at DNA purely as a storage molecule for genetic information. It can move fast and swiftly, and it does what all good polymers do in the first place: it excludes solvent-accessible volume.

  45. yubal says

    The field is poisoned with people who don’t get it: science is philosophy’s most successful child, and every philosopher should love science for the same reason every good parent loves their children.

    Great statement.

    To honor their parents, they still call them “PhDs” in science.

    [Although (good?) parents also love their "unsuccessful" children.]

  46. Ichthyic says

    Practising scientists don’t give a rat’s about falsification. Read a bunch of science papers and nowhere does anyone try and use falsification.

    True, and even philosophers of science don’t talk about falsification very much any more.

    actually, no, this is not true.

    We use falsification all the time. It’s built INTO the experiments. Every field experiment I have EVER done has been designed with falsification of not only the null hypothesis, but hopefully at least most of the proposed competing hypotheses as well. Otherwise, there will be endless criticism the likes of: “oh, you didn’t include this variable in your experiment, which clearly could be an alternative explanation for the behavior we see in your results.” My fucking god, how many times did I hear that during the first experiment I published. It took 3 field experiments and 20 revisions to finally falsify all the competing hypotheses.

    so, no, someone who says there is no falsification in science doesn’t actually understand what they are reading. Often times, the falsification is built directly into the experiments, and is just as commonly left out of the discussion of the paper, because these are falsifications everyone in that particular field takes for granted.

  47. Ichthyic says

    …my guess as to why casual readers somehow conclude falsification doesn’t happen is simply because that particular philosophical jargon is often abandoned in the text in favor of more positive and engaging jargon like “hypothesis supported by these significant results”, for example. It’s still falsification, though.

    If you want to understand why particular viewpoints and jargon are used, you’d have to ask the primary grant agencies for the particular field in question, and also note that style is somewhat dependent on which journal you plan to publish in as well.

  48. says

    Right. We don’t write up the dead hypotheses.

    Every grant I’ve ever written I’ve had to set up all the alternatives. Here’s hypothesis A and B. Here’s my proposed experiment. If I get result X, it means A is false and B is supported. If I don’t get result X, it means B is false and A is supported. And then I will write up whichever hypothesis is supported.

    That’s what granting agencies want, that whatever result you get will lead you in some direction, and prune the theoretical model of some failed hypotheses. You don’t get funded for writing, “If I get result X, it means A is false, but if I don’t get X, I don’t know what I’ll do.”
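
The grant logic described above is a simple decision procedure, and can be sketched in a few lines. This is an editorial illustration, not from the comment itself; the hypothesis names A and B are just the placeholders used above:

```python
# Toy sketch of the grant-proposal logic: one experiment, two rival
# hypotheses, and either outcome of the experiment is informative.
def interpret(result_x_observed):
    """Map the experiment's outcome onto the rival hypotheses A and B."""
    if result_x_observed:
        return {"A": "falsified", "B": "supported"}
    return {"A": "supported", "B": "falsified"}

print(interpret(True))   # {'A': 'falsified', 'B': 'supported'}
print(interpret(False))  # {'A': 'supported', 'B': 'falsified'}
```

The point is that there is no branch of the procedure that returns “I don’t know what I’ll do” — every result prunes one hypothesis.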

  49. Ichthyic says

    That’s what granting agencies want, that whatever result you get will lead you in some direction, and prune the theoretical model of some failed hypotheses. You don’t get funded for writing, “If I get result X, it means A is false, but if I don’t get X, I don’t know what I’ll do.”

    What really sucks is when you do all the work to falsify every extant hypothesis there is regarding a particular phenomenon, and then nobody, including yourself, can come up with any good explanation for it. Ends up being a nice review article, but yeah, grant money gets kinda limited, which is unfortunate, as those are exactly the kinds of cases that need more work to figure out where to go.

    The journal of negative results never gets enough press in my opinion.

    ah well, que sera sera.

  50. Air says

    Perhaps part of the problem is that the true philosophy underlying the current practice of science in the US today is the philosophy of the granting agencies that enable work to go forward. They define good science as expenditures that are highly likely to lead to significant results. Unfortunately, this notion leads inexorably to massively scaled-up routine experiments as primary drivers for research. The ‘omics’ efforts, of which ENCODE is one, are perfectly valid exercises in data-gathering and development of data analysis tools that promise substantial empirical results (such as number of nucleotides sequenced) per dollar of grant money spent. This is extremely appealing to grant agencies – it is basic Kuhnian ‘normal science’ and politically safe given that quantitative milestones are readily generated to assess the project.

    Where it hits the fan is in the ‘promise’ of these massive experiments: that merely accumulating data will inexorably lead to progress in (name your disease or syndrome). Again, this is a philosophical stance not necessarily of the scientists involved, but of the granting agency. It is this ‘promise’ that ENCODE is trying as hard as they can to fulfill in their clearly flawed interpretations.

    Should we sequence our genome just because it is part of the essential definition of being human? Should we go to Mars if there isn’t life there? The philosophy of most granting agencies is – no. The over-interpretation of the ENCODE results seems to me to be a fundamentally human (i.e. sincere and often mistaken) effort to conform to NIH’s philosophy of funding and the natural desire to see your work as contributing meaningfully to well-being.

  51. oeditor says

    Sociologist of science? Be careful what you wish for – you may get Steve Fuller! Mind you, you might have to bid against the Discotute. :-).

  52. David Marjanović says

    And there’s your answer, there are two strings to compare to, the complem[e]nt[a]ry strand and the other copy of the same chromosome. It’s not a perfect mechanism, but it does take out lots of errors.

    Sure, but it can’t recognize pseudogenes or rotting retrovirus corpses or repeats and cut them out.

    When viewing biology in such a way, it instinctively makes sense that ‘Junk DNA’ has to have some function, otherwise why have it there in the first place?

    Because, see above, there’s no way to get rid of it. You have to wait for random deletions.

    Bacteria can have super fast reproduction cycles, so copying that DNA every time adds up when you think of it in the exponential terms. Yet bacteria have ‘junk DNA’ too.

    As expected, they have extremely little to none. Only eukaryotes can afford to have lots of junk DNA, and only eukaryotes ever have lots of junk DNA.

    DNA is lost all the time in reproduction, so it’s not like you’d have to have new enzymes to specifically remove stuff. Yet the DNA remains.

    Uh, no. It’s lost at the expected range of rates. It’s just gained at a widely overlapping range of rates, too.

    Well, if a region of DNA encodes for a RNA molecule it is not a non-coding region. Nobody said it has to be mRNA to be biologically relevant. See tRNA/rRNA. And yes, it is true that we do not understand a couple of RNA functions right now. RNA is very interesting.

    You know, the RNA generated by transcription of junk DNA is destroyed pretty quickly. Not only is there junk DNA, there’s junk RNA as well. Yes, there’s miRNA, snRNA, siRNA and so on, but the genes for those don’t amount to much. Read the Graur paper on ENCODE.

    Practising scientists don’t give a rat’s about falsification. Read a bunch of science papers and nowhere does anyone try and use falsification.

    …uh.

    In that case I’d appreciate a bunch of references. Beyond the fact that a lot of parsimony is built into falsification, I have no idea what you could be talking about.

    When having a protein machinery that requires more space between clusters of genes you end up quite rapidly with huge genomes.

    …But how and why would protein machinery evolve to require such wildly different amounts of space (keep all those bacteria in mind that have no or practically no junk DNA), and what mechanism could “rapidly” bloat the genome to “huge” sizes?

    And what about viruses? They can often be said to have negative amounts of junk DNA (they have overlapping genes & stuff), and yet they’re reproduced by the protein machinery of host cells that often have lots and lots of junk DNA.

    To honor their parents, they still call them “PhDs” in science.

    No, not in science, but in English. I am not a doctor of philosophy, I’m a doctor of natural sciences in German and a doctor of a particular university (not a field, but a university!) in French.

    (It’s one doctorate, cosupervised by two universities.)

  53. yubal says

    …But how and why would protein machinery evolve to require such wildly different amounts of space (keep all those bacteria in mind that have no or practically no junk DNA),

    Bacteria have less complex DNA-metabolizing proteins than eukarya. Compare the RecFOR pathway to the BRCA2 pathway; think size. I forgot to mention that above, but I am considering not only transcription here but also genome maintenance/repair. The more proteins you involve in a complex, the fewer favorable DNA conformations are available, since protein-protein interaction potentials are comparatively narrow. All has to work out with the same genome. Also, bacteria are single-celled, meaning they use all their genes at different stages in one very short life cycle in one cell, whereas complex eukarya like us have up to hundreds of cell types with individual signaling cascades, and all of them need to target the same genome in every single cell type. Bacteria can transcribe multiple genes on one mRNA, whereas eukarya usually do not do that. That means one operon-processing machinery assembly suffices for several genes in bacteria, but we need one for each gene. Check out yeast and have a look: what do you see? Also, bacteria have no means available to compress their genome like eukarya do; they lack histones. This restricts the size of the genomes they can possibly carry. (see below, viruses)

    and what mechanism could “rapidly” bloat the genome to “huge” sizes?

    There are dozens of mechanisms that can introduce new bases into the DNA polymer. Mistakes in DNA repair, for example, when the non-homologous end-joining machinery initiates gap fill-in without a template. This would also have another neat implication for tandem-repeat regions that provide micro-homology the NHEJ machinery can exploit (there are papers out on this if you are interested).

    And what about viruses? They can often be said to have negative amounts of junk DNA (they have overlapping genes & stuff),

    True. I remember seeing 4 frames used at the same time, although it would not astonish me in the least to see a bugger using all six frames.

    Before I write too much here, check out the differences between RNA and DNA viruses; there are some interesting implications. Also, even the Lambda or T-even phages do have some “junk” in between their genes, although they also come up with overlapping operons. Also check up on the usage of the respective genes within the infection cycle: some viral genes fire up early and completely shut down later when other genes are opened up. There is not necessarily a steric conflict here. Also, after mid-term infection there are already multiple copies of the viral genome present in some bugger, which would solve the steric problem by an increased number of templates. Mammals can’t do that.

    and yet they’re reproduced by the protein machinery of host cells that often have lots and lots of junk DNA.

    Yes, and viruses have a limited amount of space available to stuff genetic material into. Classic trade-off situation. Also, T-even phages enforce strict processing of their own genetic material by expressing a sigma factor that prevents the host RNA polymerase from binding to host DNA. I can’t remember how viruses infecting mammals handle that (if at all; technically they don’t need to), but they all suffer from very limited space in the capsid. The ultimate selection criterion: larger capsids can severely decrease the chance of host infection.

  54. yubal says

    and what mechanism could “rapidly” bloat the genome to “huge” sizes?

    I am sorry, I missed the point, again.

    It is as “rapid” as evolution always works. Quite fast, if you ask me. We are always looking at the outcome of evolution, not at the mechanism itself. This remains true when we wonder about the genome sizes we see. Accumulating mutations in proteins or changing the size of the genome: why would an organism care which? Both can happen. What works, works, and what is left over is what we see.

    BTW:
    Has anyone ever made an analysis of the size of genomes versus the evolutionary heritage? Are there clusters/outliers?

  55. ssepsenw says

    This article, referred to me by another person unfamiliar with genomics and epigenetics, is a great example of “a little knowledge is a dangerous thing”. A very little knowledge. You are trying to infer from naive first principles. So-called “junk DNA” — a silly term that displays one’s ignorance of the many functions of DNA — is composed of intergenic DNA and introns. “Genes” have been defined as open reading frames which give rise to messenger RNA [mRNA], the cytoplasmic template for linking amino acids into proteins. Introns are the sequences which, though in the primary transcript, do not find themselves in the final mRNA product that appears in the cytoplasm. Intergenic DNA comprises the sequences that lie between identifiable open reading frames (genes). Here are the problems with a simplistic genes-vs.-junk-DNA dichotomy.
    1. ALTERNATE SPLICING. One gene may give rise to more than one mRNA, and, therefore, more than one protein product. This is the result of alternate splicing, which can result from a complex cell-signalling system. So, while we have about 30,000 open reading frames in our genome, we actually have many more gene products possible.
    2. NON-mRNAs IN THE NUCLEUS. Intergenic DNA can give rise to RNAs that are not mRNA. Aside from ribosomal RNAs and transfer RNAs, we now know there are myriads of small RNAs (miRNA, siRNA, others) that can turn genes on or off and change everything we knew about gene regulation and development.
    3. STRUCTURAL FUNCTIONS OF INTERGENIC DNA. DNA transcription to form mRNAs does not occur in the random fashion implied by those who consider DNA just spaghetti in a bag. Studies have shown that the transcription of specific mRNAs occurs in a specific part of the nucleus. Obviously (to me), there must be structural platforms on the nuclear envelope to “pin down” the enormously long strands of DNA for transcription, and there must be non-coding sequences for these DNA-binding proteins to associate with. There are also identifiable intergenic sequences that associate with histones during the fancy packaging process required to form the condensed chromosomes that must precede cell division.
    And much more… Get the idea? There are lots of functions for non-mRNA-transcribing DNA.
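
To make point 1 concrete, splicing can be sketched as pure string surgery. This is an editorial illustration; the sequence and exon coordinates below are made up and are not from any real gene:

```python
# Toy model of splicing: join chosen exon intervals of a primary
# transcript, dropping everything in between (the "introns").
# Alternate exon choice then yields different mature mRNAs.
def splice(transcript, exons):
    """Join the chosen exon intervals (0-based, end-exclusive)."""
    return "".join(transcript[start:end] for start, end in exons)

pre_mrna = "AAAGTXXXXXAGCCCGTYYYYYAGGGG"
# Suppose the exons are [0,5), [10,17), [22,27); the X/Y runs are introns.
all_exons = [(0, 5), (10, 17), (22, 27)]
print(splice(pre_mrna, all_exons))           # AAAGTAGCCCGTAGGGG (all exons kept)
print(splice(pre_mrna, [(0, 5), (22, 27)]))  # AAAGTAGGGG (exon 2 skipped)
```

One open reading frame, several possible products, exactly as the numbered point describes.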

  56. says

    This article, referred to me by another person unfamiliar with genomics and epigenetics, is a great example of “a little knowledge is a dangerous thing”. A very little knowledge.

    This article is written by someone who studies genetics, as in what they actually bloody do, not “data”, which seems to be pretty much the only thing ENCODE does. He has also posted a previous article, with video (I hate doing searches, and last time I tried I never found the article I wanted, so.. if someone else has the link, or can find it?), in which he describes the number of copies of LINE and SINE viral fragments in the DNA, as well as the absolutely absurd number of repeats of stuff between “coding”, never mind “regulatory”, sections. Sections which are not read “in place”, but get “clipped out, spliced back together, then processed” every time they need to be used, because all the stuff in between just takes up space, and keeps growing bigger, for no clear reason.

    All ENCODE can claim is, “A lot of it gets transcribed.” Uh.. so what? This is like claiming that a program, when it loads a dll on a computer, must be using the “entire” dll, even when the dll contains old calls, sections of code that are not ever called by anything, even internally, and even whole branches of program code that will “never” execute, since they were added to deal with errors, or the like, which, due to the way the rest of it works, are actually impossible to trigger. Oh, right, not to mention “data blocks”, which may be included, some of which may contain data that is never used, never seen, and never processed. But, heh, it’s all “transcribed” from the disk into the computer’s memory, and if you make a copy of the program, the precise location of all of those things can cause crashes if you don’t keep them the “exact same size”, so.. it’s all “useful”, therefore “important”.

    Its just too bad that DNA/RNA don’t have “compilers”, some of that stuff you can strip out, and adjust entry points on, if the compiler can figure out that, yeah, in fact, the “source code” has things in it that don’t do jack all of anything. But, oh, wait.. DNA/RNA isn’t compiled. Our “dynamic link libraries” in genetics are “jit” compiled. I.e., its all source code, and when executed, is only “translated”, i.e. “transcribed” when needed, even if the result, again, doesn’t actually amount to much more than, say:

void dummy(void)
{
    return;
}

It still gets JIT compiled and “executed”.

  57. says

Yeah, and on some hardware it can be critical for timing, but not on anything “recent”, in most cases, where clock speeds are far more likely to differ, and thus the execution time for that instruction is “unknown”. But we are not talking about NOP. We are talking about something more like:

push x
push y
push c
call dummy
pop c
pop y
pop x

dummy:
ret

A lot of wasted time doing basically nothing, not even a NOP. But a “dumb” compiler would never notice that it doesn’t do anything, and so include it anyway. Same goes for someone inserting, “If you are reading this, why?”, in a data block, or the like, just so that, when it compiles, anyone looking at the code will find that sentence. A program will run fine, as long as the logic is sound, even if it’s got 5K of actual code and 1MB of pure random junk in it. Neither the program nor the computer cares.

Mind, in the case of DNA/RNA, we are talking about comment blocks and/or white space. Some languages are real strict, but others… you could have a line of code that has 80 different variables, commands, function calls, and logic tests in it, and place random amounts, like 5-20 pages, of white space in between each actual code fragment, and other than slowing down the compiler, which has to convert all that stuff into usable data, which it then turns into byte code, those languages simply wouldn’t care, at all, what was in there. In fact, with something messier, like HTML or XML, not only will most applications just skip dead parts and take a “best I can” approach to displaying the resulting page, but you can create comment blocks “in the middle” of most lines and have them simply skipped over, while the execution process hunts down the next part of the command, 2-3 words, a paragraph, or an entire book chapter later on. It just doesn’t care how much you foul up the code, as long as it can find “something” that will execute, even if the only working part is the first entry, specifying which type of file it is.

I imagine DNA/RNA is similarly blind, simply executing whatever it gets handed, as long as it has the right markers, hence the ability of viruses to hijack the process. Anything that doesn’t do something, or does something which never completes, etc., there are “garbage collection” functions for, which break down the leftovers, and the already transcribed and executed sections, back into component parts. Unless the leftover is actually toxic, odds are it’s just going to either a) get borrowed by some process that does need it, or b) get stripped down, with its base chemicals reused or expelled from the cell. I.e., it never “does” anything, because it never gets far enough into the process to do anything.

    Of course, that is mostly speculation on what happens in such cases, but… it makes way more sense in a messy system, like the genome, than the absurd assertion that it all “does something”.

  58. says

    Feynman had a good comment on the type of thinking done by the people in the ENCODE project:

    “The first principle is that you must not fool yourself and you are the easiest person to fool.”

  59. Useless says

Look at the bright side. ENCODE may at last explain why alternative medicine works so well. At last, we’ll know why homeopathy is so successful. (‘Successful’ is defined by how much money it rakes in — like ENCODE.)