It is a pedestrian observation that people appreciate criticism much more if it is not directed at something they relate to. In my last essay, I concluded that the replicator<-->vehicle distinction is actually a very limited paradigm, and received an answer from Mr. Deutsch that ended with the conclusion: "Hence the unavoidability of the von Neumann replicator-vehicle mechanism and the futility of attributing evolution to the vehicle."
That mechanism may be a mathematically provable necessity for describing an accurate self-reproducer, but it may also be totally agnostic with respect to the mechanism that causes the inaccuracies, i.e. the one that drives evolution. Or is it also mathematically/logically provable that the inaccuracy in self-reproduction must be random in nature? Does this model prescribe what causes variation, or is it open for some other model to answer that question? In fact, already the choice of the term "variation" vs "inaccuracy" carries different connotations, and implies or suggests an answer.

And what, precisely and concretely, is the vehicle at all? Is it the molecular machinery (V = C + B) that copies DNA, synthesizes proteins and folds them into the right shape (which would include chaperones, in addition to what I previously said B comprises), and coordinates both phases of building a new cell? Or is it the cell itself, minus the DNA? Or is it a whole (multicellular) organism minus the DNA? Or is it "anything that allows passing replicators from one generation to another, allowing possible (random?) modifications in the process"? Or is it just a marketing term that sells popular science books and does nothing useful for science, apart from popularization? Because some people with much better intuition about this matter than mine, built during years spent doing actual research in molecular biology labs, tend to think that an entirely random mechanism of (point) mutations would be far too slow and uncoordinated in its search, through a vast space of possible solutions, for an improved DNA adequate to the needs currently imposed by the environment. And the speed at which those needs change and arise is not constant either, but subject to wild fluctuations.

There are several objections to the idea that variations must be gradual, slowly accumulating, and random. The first is the rather arbitrary idea that Earth is not old enough to allow such a slow process to build life forms as advanced as those that appeared. I disregard that objection, because I doubt that anyone can present a logical proof showing it is true, and I doubt that anyone can have proper intuition about the matter. Another is that conditions on Earth changed too rapidly, on many cataclysmic occasions, for some species (one such occasion is being produced right now by Homo sapiens; from the point of view of many other species, including our own, it could be a cataclysm for us too), so organisms must have had some other way to evolve rapidly in order to survive. That seems to be the real challenge to anyone who sticks to the idea of randomness: to explain, intuitively and logically, how that is possible. In fact, disregarding that question may be a little bit of a sign of intellectual dishonesty. The last objection is the most important, as it questions the possibility, in principle, of a gradual accumulation of genetic changes over a continuum of many generations producing a discrete change in a phenotype, a change that must appear all at once to be viable and effective. In its essence, evolution is a discrete process that deals with digital information variables; otherwise error correction would be logically impossible. Not to mention the idea that something which is in principle deleterious, and against which there exists an error-correction mechanism such as DNA repair, must be the only way of progress and advancement; that idea is in fact bizarre.
That said, the error-detection mechanism can probably be suspended, or altered in a clever and controlled way by the cell, to allow for a faster search. And it probably cannot detect all possible types of alteration beyond blatant ones such as point mutations, due to an imperfection of its design (not enough redundancy!), so maybe suspending it doesn't require that much cleverness anyway.
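To make the "digital, or no error correction" point concrete, here is a minimal sketch (a toy model of my own, not anything from the literature discussed here): a gene is copied with noise, and a majority vote over redundant discrete copies restores the original exactly. Nothing analogous exists for a continuous, analog quantity, since there is no discrete set of values to snap back to, and small errors simply accumulate.

```python
import random

BASES = "ACGT"

def noisy_copy(gene, error_rate=0.01):
    """Copy a digital string, flipping each base to a random other base with some probability."""
    return "".join(
        random.choice(BASES.replace(b, "")) if random.random() < error_rate else b
        for b in gene
    )

def majority_correct(copies):
    """Error correction via redundancy: per position, snap back to the most common base.
    This works only because bases form a discrete set; an analog value has nothing to snap to."""
    return "".join(max(BASES, key=col.count) for col in zip(*copies))

random.seed(0)
gene = "".join(random.choice(BASES) for _ in range(60))
copies = [noisy_copy(gene) for _ in range(5)]
print(majority_correct(copies) == gene)  # True (with overwhelming probability): exact recovery
```

The point is only that exact recovery is possible at all; with analog variables the best one could do is average, and drift would be unavoidable.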
I watched videos on YouTube that are part of Denis Noble's "Dance to the Tune of Life" promotion tour, and what first caught my attention was the "harnessing stochasticity" point, which he and Raymond Noble recently developed into a series of separate articles, such as Harnessing stochasticity: How do organisms make choices? and Was the Watchmaker Blind? Or Was She One-Eyed?. Disputing blind randomness is probably the most important point of objection to neo-Darwinism from the Third Way point of view. In the famous debate over whether there is plenty of time for evolution or not, which happened somewhat earlier, between the neo-Darwinists Wilf and Ewens, a mathematician and a biologist at the University of Pennsylvania, on one side, and the Intelligent Design proponents Ewert, Dembski, Gauger and Marks on the other, the former tried to prove that a random search performed in phases (rounds of guessing) over localized areas of a genome doesn't require that much effort and time to yield results, while the latter tried to prove that this isn't viable or possible at all. Here are both articles: There's plenty of time for evolution and Time and Information in Evolution.
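As a rough illustration of the Wilf–Ewens side of that debate, here is a simulation of their "rounds of guessing" model as I understand it (the parameters are made up): a target word of length L over a K-letter alphabet is searched in parallel, each round re-guessing only the still-wrong letters, with correct letters preserved by selection. The expected number of rounds then grows roughly like K·ln(L), rather than the roughly K^L attempts that blind guessing of the whole word at once would require.

```python
import math
import random

def rounds_in_parallel(L=300, K=20):
    """Wilf-Ewens style search: each round, every still-wrong letter is re-guessed
    uniformly at random; letters already correct are kept (selection). Returns rounds used."""
    wrong, rounds = L, 0
    while wrong:
        # each wrong letter stays wrong with probability (K - 1) / K
        wrong = sum(1 for _ in range(wrong) if random.randrange(K) != 0)
        rounds += 1
    return rounds

random.seed(1)
trials = [rounds_in_parallel() for _ in range(200)]
print(sum(trials) / len(trials))  # empirically on the order of K * ln(L)
print(20 * math.log(300))         # theoretical ballpark, about 114 rounds
# Guessing the whole word at once would take about K**L = 20**300 attempts instead.
```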
Noble's argument is that a similar search is observable in the immune system's somatic hypermutation process, in which the rate of somatic mutation is 10^5–10^6 fold greater than the normal mutation rate across the genome, and in which only certain loci are actually allowed to be altered. So, although DNA repair gets suspended to a certain extent in order to allow such a high mutation rate, some extended controls are turned on that don't allow disastrous mutations. See the paper Does DNA repair occur during somatic hypermutation? by Saribasak and Gearhart for more details.
However, as Noble himself says, if other somatic mutations happened at the same rate, it would normally lead to disaster. In fact, mistargeted somatic hypermutation is a likely mechanism in the development of B-cell lymphomas, and of many other cancers too; so the question is how this is a relevant explanation for the evolution of multicellular organisms. It is relevant in that it shows the cell can purposefully affect its DNA, so in theory this could happen not just as an immune-system mechanism.
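As a toy comparison (again a sketch of my own, not a model from Noble's articles): confining an elevated mutation rate to a small window of loci, as somatic hypermutation does, lets the search hammer on the region that matters while leaving the rest of the genome effectively untouched.

```python
import random

BASES = "ACGT"

def mutate(genome, start, end, hot_rate, background_rate=1e-6):
    """Point-mutate a genome: a hypermutable window [start, end) gets an elevated rate
    (somatic hypermutation runs 10^5-10^6 fold above background); the rest stays near-pristine."""
    out = []
    for i, base in enumerate(genome):
        rate = hot_rate if start <= i < end else background_rate
        out.append(random.choice(BASES.replace(base, "")) if random.random() < rate else base)
    return "".join(out)

random.seed(2)
genome = "".join(random.choice(BASES) for _ in range(10_000))
mutant = mutate(genome, start=4_000, end=4_030, hot_rate=0.2)
print([i for i in range(len(genome)) if genome[i] != mutant[i]])
# changed positions cluster inside the 4000-4030 window; elsewhere mutation is vanishingly rare
```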
Finally, there is Edward J. Steele (whom Noble doesn't mention at all), who claims that when successful somatic cell changes occur, copies of the copious new messenger RNA produced by the successful cells are picked up by harmless retroviruses acting as gene shuttles and transported across the tissue barrier – the Weismann barrier – to the germline, where the new genetic information is integrated into the DNA by a process involving reverse transcription.
So, after a solution is found, it should be written to a persistent and inheritable memory, in order to allow the passing of acquired characteristics to the next generations.
Somatic hypermutation mainly takes the form of single-base substitutions, with insertions and deletions being less common; but there is also somatic recombination, which happens in the form of large-scale alterations of DNA, such as chromosomal translocations and deletions, not in the form of point mutations.
It is worth mentioning the idea that not only genome change, but primarily behaviour change, is a regular response by organisms faced with problematic choices. The more sophisticated (and quicker!) the latter response, the less need for the former. That's why it is highly unlikely that people will ever again grow dense hair over the whole body, from the moment they realized how to make clothes and how to build houses and the other shelters needed to survive in cold regions. It is also certain that in some cases even a change in behaviour cannot be quick enough to avoid extinction, let alone a genetic change.
Another, earlier response from Mr. Deutsch, to the first essay about the origin of information, to which I would like to respond further now, was this:
>>
You say " if the object Y can be constructed by a certain programmable constructor solely from information contained in object X, by blindly and precisely following these instructions without introducing any novel information itself in the process of construction, ie without any inventiveness or creativity, then the information contained in the object X is sufficient to describe Y".
I think this can't be quite right since "a certain programmable constructor" might already contain (without having to generate it anew) some information needed to make Y. It might be that this information PLUS that in X, is needed.
<<
This response would be much more understandable and justified if it weren't just another instance of the chicken-or-egg dilemma. Namely, in our concrete case the programmable constructor is not independent of the object X, i.e. the replicator. That "certain programmable constructor" is also part of the object Y, i.e. the vehicle, built in the previous generation step, and as such all the information it contains originates from the older instance of object X. Which means that our concrete case confirms the idea that the information contained in X is sufficient to build Y, and can only undergo decompression in the process of the programmed construction of Y.
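To see why this is not a vicious regress, here is a toy sketch in the spirit of von Neumann's self-reproducing automaton (a loose programming analogy of my own, not Deutsch's or Marletto's formalism): the constructor used in one generation is itself built, in the previous generation, from the description tape X, so in this toy the information it carries traces back to an earlier instance of X.

```python
def build_constructor(tape):
    """'Compile' the description tape X into constructor machinery. A constructor, given a
    tape, builds the next-generation vehicle: fresh machinery plus a verbatim copy of the tape.
    In this toy, all the information in the machinery originates from the tape it was built from."""
    def constructor(t):
        return build_constructor(t), t  # construct machinery from t, then copy t itself
    return constructor

X = "instructions for building the constructor and for copying X"
V = build_constructor(X)   # the vehicle's machinery of generation 0
for _ in range(3):         # three generation steps
    V, X = V(X)            # the Y of one generation supplies the constructor of the next
print(X)                   # the tape is unchanged: decompression only, no new information
```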
I also noticed that the neo-Darwinist mantra of random mutations and natural selection has another weak point. As a convinced naturalist, I reject the idea of the existence of supernatural processes; and as a convinced evolutionist (who, however, doesn't yet know exactly how evolution works), I reject the idea of the existence of artificiality, as being too artificial (however strange that might sound). One instance of the latter idea is also too anthropocentric for me: the one which defines artificiality in terms of human intervention in "natural processes". What I'm saying is that the attribute "natural" should be omitted when talking about selection, as it is redundant: there is no counterpart or alternative to nature when we are talking about reality, i.e. not imagination. Or it should be replaced with the attribute "environmental", which at least makes some sense, unlike the one usually used. I don't know whether this is just bad scientific style (redundant, instead of concise and precise), or whether there is something more to it. But it is a similar kind of objection to the one I had already expressed in The Revision of Origin of Information about defining generic resources as "naturally occurring substrates", since there can't be any way for substrates to occur but naturally.
When writing essays, I try to be critical of the work that interests me, so I should be a bit self-critical too. In the revision of the origin of information, I couldn't recall where the rigorous definition of a programmable constructor was written; it is in Constructor Theory of Life:
>>
In general, a programmable constructor is a constructor V that has, among its input substrates, an information medium M that can have any of the attributes P in an information variable, with the property that M with any of those information attributes is itself a constructor. The information instantiated in M is an abstract constructor – an instance of information with causal power. V[P] is a constructor for the task T_P, P is the program for the task T_P and T_P is in the repertoire of V.
<<
I knew I had read it somewhere, but I couldn't find where. Now that I read it again: is there an instance of information without causal power? Is there a power that is not causal? Also, my question about whether abiogenesis is a task or not was excessive, because the whole point of the proof is that, to avoid an infinite regress, the final subtasks into which it is decomposed should be elementary tasks, performed by generic resources.
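Incidentally, reading that definition, a programming analogy suggests itself (mine, not the paper's): V behaves like an interpreter, the program P carried by the information medium M selects the task, and V[P], the interpreter fixed with one particular program, is an ordinary constructor for the task T_P.

```python
from typing import Callable

# A task maps an input substrate to an output substrate; substrates here are just strings.
Task = Callable[[str], str]

def V(program: str) -> Task:
    """A programmable constructor as an interpreter: the medium M carries the program P,
    and V[P] is a constructor for the task T_P, one of the tasks in V's repertoire."""
    repertoire: dict[str, Task] = {
        "transcribe": lambda s: s.replace("T", "U"),  # DNA -> RNA, as a toy task
        "complement": lambda s: s.translate(str.maketrans("ACGT", "TGCA")),
    }
    return repertoire[program]

print(V("transcribe")("GATTACA"))  # V[P] performing T_P: GAUUACA
print(V("complement")("GATTACA"))  # a different P selects a different task: CTAATGT
```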
And finally, rethinking a bit: from the fact that the programmable constructor is not independent of X, i.e. that it was constructed based on information contained in the older instance of X, it doesn't necessarily follow that all the information it contains originates from the older instance of X. In fact, that doesn't change much: it could still be that some information contributed in a previous generation step by the older instance of the programmable constructor was never contained in any instance of X. In other words, some information important for life that is encoded in the ribosome may never have been encoded in DNA. But then, how does that confirm the DNA-based picture of evolution?
Also, one can argue that it is not true that organisms must have had some other way to evolve rapidly in order to survive, considering that the majority of species that ever lived on Earth actually did not survive. They could have become extinct due to cataclysms provoked by other species who suddenly appeared and started behaving aggressively, pushing them to the edge; or due to climate changes, potentially caused by a meteor impact; or for whatever other reason that required a timely response which never occurred. And cataclysms could have been more or less selective in affecting only certain species. But still, the question is what the difference was between those who survived and those who did not. Is it only fortune, in the form of beneficial random mutations that both groups passively awaited, with some lucky enough for them to happen and some not? Or is there a molecular mechanism that actually allows an effort in searching for a genomic solution? The same question can be asked with respect to a change of behaviour as a solution to a problem: does it occur randomly, or is it the result of an effort? And the question regarding genomic and behavioural changes can be asked not just in the context of responding to threats, but also in the context of gaining advantages that may threaten others.