Epigenetics and Neo-(Neo-)Lamarckism.

A very brief comment on a complicated topic…

New Scientist has a story in the current issue about epigenetics — differences in gene expression that are not due to changes in the gene sequences themselves — and how non-genetic variation can be both influenced environmentally and, in some cases, inherited.

The New Scientist story, which is entitled Rewriting Darwin: the new non-genetic inheritance, is another example of the “reporting on a revolution” and “underdog vindicated” fallacies so common in science reporting.

 Half a century before Charles Darwin published On the Origin of Species, the French naturalist Jean-Baptiste Lamarck outlined his own theory of evolution. A cornerstone of this was the idea that characteristics acquired during an individual’s lifetime can be passed on to their offspring. In its day, Lamarck’s theory was generally ignored or lampooned. Then came Darwin, and Gregor Mendel’s discovery of genetics. In recent years, ideas along the lines of Richard Dawkins’s concept of the “selfish gene” have come to dominate discussions about heritability, and with the exception of a brief surge of interest in the late 19th and early 20th centuries, “Lamarckism” has long been consigned to the theory junkyard.

Now all that is changing. No one is arguing that Lamarck got everything right, but over the past decade it has become increasingly clear that environmental factors, such as diet or stress, can have biological consequences that are transmitted to offspring without a single change to gene sequences taking place. In fact, some biologists are already starting to consider this process as routine. However, fully accepting the idea, provocatively dubbed the “new Lamarckism”, would mean a radical rewrite of modern evolutionary theory.

The article includes an interview with Eva Jablonka, who is one of the Altenberg 16 who are, as we speak, revolutionizing evolutionary theory (just kidding, but see here for a summary of what she had to discuss).

So, is this “neo-Lamarckism” and is it going to require “a radical rewrite of evolutionary theory”? Some thoughts:

1) There already was a neo-Lamarckism in the late 1800s and early 1900s. So this would be neo-neo-Lamarckism if anything.

2) Epigenetics, by definition, involves modifications of the expression of genetic systems. If heritable, they would be subject to natural selection, drift, etc., when they arise within a population. So, while this is certainly interesting, it will be a welcome expansion of existing theory rather than a revolution.

3) The inheritance of acquired characters was not original to Lamarck (it was the predominant view in his era), and in any case this by itself does not make a theory of evolution “Lamarckian”. Lamarckian evolution a) considers adaptation as the result of use and disuse in response to need b) leading to enhancements of particular features that c) improve an organism’s fit to its environment, which d) are then passed on and e) accumulate in each generation, f) leading to progressive increases in complexity, with g) no extinction, and h) simple forms produced anew by spontaneous generation.

4) Lamarck rejected the notion that the environment would directly affect organismal traits — the point was that organisms responding to the environment led to adaptive changes that were passed on.

It is now necessary to explain what I mean by this statement: The environment affects the shape and organization of animals, that is to say that when the environment becomes very different, it produces in the course of time corresponding modifications in the shape and organization of animals. It is true, if this statement were to be taken literally, I should be convicted of an error; for, whatever the environment may do, it does not work any direct modification whatever in the shape and organization of animals. [Translated as in Kampourakis and Zogza (2007)]

One person who postulated heritable, acquired, undirected variation was Darwin.


See also:

The imaginary Lamarck by M. Ghiselin

Shades of Lamarck by S.J. Gould

Students’ preconceptions about evolution: How accurate is the characterization as “Lamarckian” when considering the history of evolutionary thought? by K. Kampourakis and V. Zogza

The early history of the idea of the inheritance of acquired characters and of pangenesis by C. Zirkle


The Skeptical Alchemist posted this video, which has some significance for me since I was in Hiroshima less than 2 weeks ago.

Say what you want about the need to end the war, the expected casualties during an invasion, or whatever other rationalizations you like. But consider this question, by Leó Szilárd:

“Suppose Germany had developed two bombs before we had any bombs. And suppose Germany had dropped one bomb, say, on Rochester and the other on Buffalo, and then having run out of bombs she would have lost the war. Can anyone doubt that we would then have defined the dropping of atomic bombs on cities as a war crime, and that we would have sentenced the Germans who were guilty of this crime to death at Nuremberg and hanged them?”

Remembrance Day.

In Canada, as in many countries around the world, November 11 is a day of remembrance for the sacrifices made during wartime. In Canada, this refers in particular to World War I (1914-1918) and World War II (1939-1945), but also to smaller engagements in which Canadians were (or are) involved, such as Korea and Afghanistan.

The poppy has become a symbol of remembrance, and can be found pinned to people’s lapels more or less from the beginning of November each year. This tradition, which is also observed in various other nations, is derived from the poem In Flanders Fields by Lt. Col. John McCrae (1872-1918), a Canadian physician and soldier originally from Guelph, Ontario who died of pneumonia while serving in the First World War. The poem was composed shortly after the death of McCrae’s friend Lt. Alexis Helmer in the Second Battle of Ypres, and makes reference to Flanders, Belgium, where poppies grew extensively and where many military dead were buried.

In Flanders fields the poppies blow
Between the crosses, row on row,
That mark our place; and in the sky
The larks, still bravely singing, fly
Scarce heard amid the guns below.

We are the Dead. Short days ago
We lived, felt dawn, saw sunset glow,
Loved, and were loved, and now we lie
In Flanders fields.

Take up our quarrel with the foe:
To you from failing hands we throw
The torch; be yours to hold it high.
If ye break faith with us who die
We shall not sleep, though poppies grow
In Flanders fields.

— John McCrae

The poem has particular relevance in Canada, having appeared on both a stamp (1968) and currency — a portion of it is found on the current $10 bill, which honours Canadian efforts in international peacekeeping (the Nobel Peace Prize-winning idea of Lester B. Pearson, who also went on to become Prime Minister of Canada). A poppy also appeared on a quarter recently, which some may recall created a buzz as it was mistaken for spy technology by our friends south of the border.

There are several parts of Europe that I am eager to visit on the basis of pride and gratitude for what my fellow Canadians did during the two world wars. Those that I have not been to yet include Vimy Ridge and Juno Beach (Normandy), but I did manage to check one off the list two years ago during a visit to the Netherlands: Groesbeek Canadian War Cemetery.

I was in Leiden to participate in a symposium entitled “Extending the synthesis” that featured a handful of speakers including Rich Lenski, Dave Jablonski, Sergei Gavrilets, Paul Brakefield, John Thompson, Niles Eldredge, and me. On one of the days there were no formal plans, so some of the speakers took a bicycle tour of the beautiful region around Leiden, others headed off to The Hague, and I crossed much of the country by train, by bus, and on foot to visit Groesbeek. It was one of the most meaningful experiences I have ever had.

There is a scene in the movie Saving Private Ryan that never fails to break me up. Actually, there are several such scenes, but the one I have in mind at the moment involves the arrival of a military vehicle at the Ryans’ home in which their mother, realizing what this visit must mean when she sees a clergyman exit the car, collapses in grief on her front porch. This grips me with particular force as it happened to my great-grandmother — twice.

My paternal grandmother grew up in the small town of St. Marys, Ontario, which, like most towns across the country, experienced its share of sacrifice during the Second World War as approximately 10% of the country served (1.1 million out of a population of roughly 11 million). With the labour force severely diminished, my young grandmother worked in a converted hand grenade factory. Two of her older brothers, Bill and Roy, served and died in combat.


My great-uncle Bill landed in Normandy on D-Day, survived a major assault in which half his battalion was killed or wounded, received a minor wound while fighting in Belgium, and eventually was killed along with many of his friends when his battalion was shelled by German artillery. He is buried at Groesbeek Cemetery.

To reach the cemetery, one must travel by bus from the nearby town and ask the driver to stop at the road leading to the memorial.

From there, it is a fairly long walk down a forested roadway to another main road, and then another short walk up to the cemetery.

The tombstones are arranged, row on row, in order of burial. My great-uncle Bill’s is part of a long line of young men who were lost on the same day. Many of them were probably friends. All were mourned by someone.

My father had previously been the only member of our family to make the trip to see Bill’s grave, which he did many years ago. I am sure his experience was as emotional as mine was to be surrounded by so much sacrifice, and to reflect on what this must have meant for my grandmother and her family, and indeed the families of all of the individuals buried here.

Next to the large memorial at the far end of the cemetery there is a tall maple tree. A leaf from this tree hangs in a frame on the wall of my home office. It has often served as an object of reflection for me as a young man who is fortunate that his life has not been affected directly by war.

There is a guest book at the cemetery that invites visitors to leave a message. I spent quite some time leafing through it, and was deeply moved by the messages I read. “Thank you for our freedom” was among the most common. Some 60 years later, the people of the region, and the many who make a pilgrimage like mine to this site, have not forgotten the sacrifices that were made.

None of us should ever forget.

Function, non-function, some function: a brief history of junk DNA.

It is commonly suggested by anti-evolutionists that recent discoveries of function in non-coding DNA support intelligent design and refute “Darwinism”. This misrepresents both the history and the science of this issue. I would like to provide some clarification of both aspects.

When people began estimating genome sizes (amounts of DNA per genome) in the late 1940s and early 1950s, they noticed that this is largely a constant trait within organisms and species. In other words, if you look at nuclei in different tissues within an organism or in different organisms from the same species, the amount of DNA per chromosome set is constant. (There are some interesting exceptions to this, but they were not really known at the time). This observed constancy in DNA amount was taken as evidence that DNA, rather than proteins, is the substance of inheritance.

These early researchers also noted that some “less complex” organisms (e.g., salamanders) possess far more DNA in their nuclei than “more complex” ones (e.g., mammals). This posed a puzzle: on the one hand, DNA was thought to be constant because it is what genes are made of, yet on the other, the amount of DNA (“C-value”, for “constant”) did not correspond to assumptions about how many genes an organism should have. This (apparently) self-contradictory set of findings became known as the “C-value paradox” in 1971.

This “paradox” was solved with the discovery of non-coding DNA. Because most DNA in eukaryotes does not encode a protein, there is no longer a reason to expect C-value and gene number to be related. Not surprisingly, there was speculation about what role the “extra” DNA might be playing.

In 1972, Susumu Ohno coined the term “junk DNA”. The idea did not come from throwing his hands up and saying “we don’t know what it does so let’s just assume it is useless and call it junk”. He developed the idea based on knowledge about a mechanism by which non-coding DNA accumulates: the duplication and inactivation of genes. “Junk DNA,” as formulated by Ohno, referred to what we now call pseudogenes, which are non-functional from a protein-coding standpoint by definition. Nevertheless, a long list of possible functions for non-coding DNA continued to be proposed in the scientific literature.

In 1979, Gould and Lewontin published their classic “spandrels” paper (Proc. R. Soc. Lond. B 205: 581-598) in which they railed against the apparent tendency of biologists to attribute function to every feature of organisms. In the same vein, Doolittle and Sapienza published a paper in 1980 entitled “Selfish genes, the phenotype paradigm and genome evolution” (Nature 284: 601-603). In it, they argued that there was far too much emphasis on function at the organism level in explanations for the presence of so much non-coding DNA. Instead, they argued, self-replicating sequences (transposable elements) may be there simply because they are good at being there, independent of effects (let alone functions) at the organism level. Many biologists took their point seriously and began thinking about selection at two levels, within the genome and on organismal phenotypes. Meanwhile, functions for non-coding DNA continued to be postulated by other authors.

As the tools of molecular genetics grew increasingly powerful, there was a shift toward close examinations of protein-coding genes in some circles, and something of a divide emerged between researchers interested in particular sequences and others focusing on genome size and other large-scale features. This became apparent when technological advances made sequencing the entire human genome conceivable: a question asked in all seriousness was whether the project should bother with the “junk”.

Of course, there is now a much greater link between genome sequencing and genome size research. For one, you need to know how much DNA is there just to get funding. More importantly, sequence analysis is shedding light on the types of non-coding DNA responsible for the differences in genome size, and non-coding DNA is proving to be at least as interesting as the genic portions.

To summarize,

  • Since the first discussions about DNA amount there have been scientists who argued that most non-coding DNA is functional, others who focused on mechanisms that could lead to more DNA in the absence of function, and yet others who took a position somewhere in the middle. This is still the situation now.
  • Lots of mechanisms are known that can increase the amount of DNA in a genome: gene duplication and pseudogenization, duplicative transposition, replication slippage, unequal crossing-over, aneuploidy, and polyploidy. By themselves, these could lead to increases in DNA content independent of benefits for the organism, or even despite small detrimental impacts, which is why non-function is a reasonable null hypothesis.
  • Evidence currently available suggests that about 5% of the human genome is functional. The least conservative guesses put the possible total at about 20%. The human genome is mid-sized for an animal, which means that most likely a smaller percentage than this is functional in other genomes. None of the discoveries suggest that all (or even more than a minor percentage) of non-coding DNA is functional, and the corollary is that there is indirect evidence that most of it is not.
  • Identification of function is done by evolutionary biologists and genome researchers using an explicit evolutionary framework. One of the best indications of function that we have for non-coding DNA is to find parts of it conserved among species. This suggests that changes to the sequence have been selected against over long stretches of time because those regions play a significant role. Obviously you cannot talk about evolutionarily conserved DNA without evolutionary change.
  • Examples of transposable elements acquiring function represent co-option. This is the same phenomenon that is involved in the evolution of complex features like eyes and flagella. In particular, co-option of TEs appears to have happened in the evolution of the vertebrate immune system. Again, this makes no sense in the absence of an evolutionary scenario.
  • Most transposable elements do not appear to be functional at the organism level. In humans, most are inactive molecular fossils. Some are active, however, and can cause all manner of diseases through their insertions. To repeat: some transposons are functional, some are clearly deleterious, and most probably remain more or less neutral.
  • Any suggestions that all non-coding DNA is functional must explain why an onion needs five times more of it than you do. So far, none of the proposed unilateral functions has done this. It therefore remains most reasonable to take a pluralistic approach in which only some non-coding elements are functional for organisms.
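The conservation-based reasoning in the list above — regions that remain nearly identical between distantly related species are likely under selection, and hence functional — can be sketched as a toy script. This is purely illustrative: the sequences, window size, and identity threshold below are invented, and real comparative genomics uses proper alignment tools rather than pre-aligned strings.

```python
# Toy sketch of conservation-based inference of function.
# Sequences, window size, and threshold are all invented for illustration.

def window_identity(seq_a, seq_b, start, size):
    """Fraction of identical positions in one aligned window."""
    matches = sum(1 for a, b in zip(seq_a[start:start + size],
                                    seq_b[start:start + size]) if a == b)
    return matches / size

def conserved_windows(seq_a, seq_b, size=10, threshold=0.9):
    """Start positions of non-overlapping windows at or above the identity threshold."""
    assert len(seq_a) == len(seq_b), "sequences must be pre-aligned"
    return [i for i in range(0, len(seq_a) - size + 1, size)
            if window_identity(seq_a, seq_b, i, size) >= threshold]

# Hypothetical aligned fragments: ends identical, middle diverged.
species_1 = "ATGCCGTAAG" + "TTTTGCACCA" + "ATGCCGTAAG"
species_2 = "ATGCCGTAAG" + "ATCCGATGGT" + "ATGCCGTAAG"

print(conserved_windows(species_1, species_2))  # → [0, 20]
```

Under this logic, the first and third windows would be flagged as candidate functional regions, while the diverged middle window would not — the inverse of the inference, of course, being that unconstrained sequence is free to change.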

I realize that this will have no effect on the arguments made by anti-evolutionists, but I hope it at least clarifies the issue for readers who are interested in the actual science involved and its historical development.

Am I a MacGregor?

The name “Gregory” is used as both a first name and a surname, and I wish I had a nickel for every time someone said “No, your last name” after I told them my name was “Gregory”. Jokes about having two (actually, three) “first” names have been a staple in my life as well.

There have been 16 popes with the name “Gregory”, including Pope Gregory I (“Gregory the Great”, which, had it not been taken, would have been a nickname I would have aspired to myself; he can keep “Saint Gregory”). Think “Gregorian calendar” (Pope Gregory XIII) or “Gregorian chants” (though these are probably not actually a product of Pope Gregory I). Readers with a snarkier side may consider this blog an example of “Gregorian rants” if they so desire.

There are many derivatives of the name “Gregor”, of which “Gregory” is one. It appears to date back to the Latin “Gregorius” and the Greek “Gregorios”, meaning “alert, watchful, or vigilant”. When my father and stepmother were in Greece, they were often told that they had a “very good Greek name”. Other languages have their own versions as well.

When I was living in the west end of London (specifically, the “London Borough of Richmond-Upon-Thames“), I would have my hair cut by a fantastic old-school barber, an ex-merchant marine who lived in a long boat on the Thames and who did the final trim on one’s neck with a straight razor. On my first visit, he remarked that I “must have Scottish blood”. The reason, apparently, had to do with my thick hair and reddish goatee. “What’s your surname?” he asked. “Gregory,” I replied. “Well there you go,” he said.

You see, the other, more circuitous origin of the name “Gregory” is via the Scottish Clan MacGregor (meaning “son of Gregor”, and thus linked back to the Latin/Greek origin). It seems the MacGregors ran afoul of King James VI, who made bearing the MacGregor name a capital offence in 1603. You may be familiar with the subsequent adventures of the “Scottish Robin Hood”, Rob Roy MacGregor, as portrayed on screen by Liam Neeson (who is not a Scottish folk hero at all, but a Northern Irish Jedi).

When given the choice between changing their names and being executed, most MacGregors opted for the former. The resulting names, which numbered more than 100 and of which Gregory was one of the more obvious, became septs of Clan MacGregor. The ban on the name MacGregor was lifted in 1774, but the division into different septs remains.

Today, my fellow DNA Network member Blaine Bettinger of The Genetic Genealogist reports on an effort by the Clan Gregor Society to use DNA to reunite the Clan MacGregor.

The idea of the MacGregor DNA Project is to draw comparisons to a genetic profile from a known descendant of the chief’s line (known only as “kit 2124”). Anyone who shares 31 out of the 37 DNA markers with this individual will be given full membership in the Clan Gregor Society, regardless of current surname. Gregory is one of a few surnames focused on explicitly as part of the project.
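The matching rule is simple enough to sketch in a few lines of Python. This is a toy illustration only: the marker names and repeat counts below are invented (real Y-STR panels use named loci such as DYS393, with values determined in the lab), and actual membership decisions rest with the Clan Gregor Society, not with code like this.

```python
# Hypothetical sketch of a "31 of 37 markers" matching rule.
# Marker names and repeat counts are invented for illustration.

REQUIRED_MATCHES = 31

def count_matches(profile_a, profile_b):
    """Count markers present in both profiles with identical repeat counts."""
    shared = set(profile_a) & set(profile_b)
    return sum(1 for m in shared if profile_a[m] == profile_b[m])

def qualifies(candidate, reference, required=REQUIRED_MATCHES):
    """True if the candidate matches the reference at enough markers."""
    return count_matches(candidate, reference) >= required

# Invented 37-marker reference profile ("kit 2124" stand-in),
# and a candidate differing at 5 markers (i.e., 32 matches).
reference = {f"MARKER_{i:02d}": 12 for i in range(37)}
candidate = dict(reference)
for m in list(candidate)[:5]:
    candidate[m] += 1

print(qualifies(candidate, reference))  # → True (32 >= 31)
```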

The project is primarily making use of Y-chromosome loci, which means that only descendants related through the paternal line can be matched this way. It appears that some mitochondrial DNA analysis is also being conducted, which would identify individuals related through descent on their mother’s side.

As per the old tradition in our society, I received my surname from my father and, as per the old tradition in biology, I also received my Y chromosome from him. In other words, it would be perfectly feasible for me to take the test and see if my red beard is homologous to that of Rob Roy.

But really, what’s the point? I am not Scottish, I am Canadian, and I am perfectly happy with that identity. Moreover, like a great many North Americans, I represent a mixture of many different families: Gregory, Davis, Sager, MacKenzie, and who knows what else (though I confess that the ingredients here are pretty limited in their variety, coming as they all do from the British Isles). It’s really only because of a quirk of our culture that I associate almost exclusively with Gregory.

Still, it would be pretty cool to wear an official clan tartan…

Darwin’s death.

Today, April 19th, is the anniversary of Charles Darwin‘s death in 1882. I refer you to an excellent post by PZ Myers on Pharyngula about the details of Darwin’s passing [The Death of Darwin].

Darwin is buried at Westminster Abbey in London, within a few yards of Sir Isaac Newton. Near the grave, as part of a memorial to several scholars, there is a bronze bust of Darwin that was installed by his family in 1888. The grave itself is very understated, a simple marble slab in the floor marking his name and the dates of his birth and death.

There is also a memorial to Darwin in Kent, where Down House is located, in the form of a sundial on the side of the local church.

Charles Robert Darwin, 12 February 1809 – 19 April 1882.

From "Pangenesis" to "Genome".

The term “genetics” has been used in reference to the branch of science dealing with “the physiology of heredity and variation” since 1905. It was coined by the British biologist William Bateson, first in a 1905 letter (see Bateson 1928), and then publicly the following year (Bateson 1906). It was derived directly from the Greek for “birth” (or “origins”).

Straightforward enough. But what about “gene” and “genome”? These terms are interesting because they illustrate the evolution of both concept and language in science and involve both co-option and hybridization.

First, “gene”. Even after the term “genetics” was in use, it was not entirely clear what practitioners of the science were studying. Indeed, the concept of a fundamental physical and functional unit (or “determiner”) of heredity remained very vague. In 1909, Danish biologist Wilhelm Johannsen sought to pin down a term to describe these genetic elements. Although some people attribute the origin of “gene” to the same etymology as “genetics”, there is more to the story. In actuality, “gene” was derived indirectly from Darwin‘s (incorrect) theory of heredity known as “pangenesis“. Indirectly, because it morphed through the term “pangens” coined by the Dutch botanist Hugo de Vries in 1889 in reference to genetic units and as an homage to Darwin, even though his theory of heredity differed markedly from pangenesis (de Vries was a Mendelian).

According to Johannsen (1909, p.143), he came up with the term “gene” by choosing to isolate

the last syllable ‘gene’, which alone is of interest to us, from Darwin’s well known word (Pangenesis) and thereby replace the less desirable ambiguous word ‘determiner’. Consequently, we will speak of ‘the gene’ and ‘the genes’ instead of ‘pangen’ and ‘the pangens’. The word gene is completely free from any hypothesis; it expresses only the evident fact that, in any case, many characteristics of the organism are specified in the germ cells by means of special conditions, foundations, and determiners which are present in unique, separate, and thereby independent ways – in short, precisely what we wish to call genes. [Translation as in Portugal and Cohen 1977].

Johannsen (1909) was also responsible for the terms “genotype” and “phenotype“. As he summarized in 1911,

I have proposed the terms ‘gene’ and ‘genotype’ … to be used in the science of genetics. The ‘gene’ is nothing but a very applicable little word, easily combined with others, and hence it may be useful as an expression for the ‘unit-factors’, ‘elements’ or ‘allelomorphs’ in the gametes, demonstrated by modern Mendelian researches. A ‘genotype’ is the sum total of all the ‘genes’ in a gamete or in a zygote.

So, we have an evolution of the term from “pangenesis” (Darwin) to “pangens” (de Vries) to “genes” (Johannsen), passing through an incorrect theory of heredity to a term “completely free from any hypothesis” about inheritance to Mendelian genetics.

What about “genome”?

According to the Oxford English Dictionary, the term “genom(e)” was coined by the German botanist Hans Winkler in 1920 as a portmanteau of gene and chromosome (the latter term having been coined by Wilhelm Waldeyer in 1888). This story has been repeated by many authors (including yours truly; Gregory 2001), but has been challenged by Lederberg and McCray (2001), who suggest that Winkler probably merged gene with the generalized suffix ‘ome (referring to “the entire collectivity of units”), and not ‘some (“body”) from chromosome. In either case, Winkler’s intent was to “propose the expression Genom for the haploid chromosome set, which, together with the pertinent protoplasm, specifies the material foundations of the species” (translation as in Lederberg and McCray 2001).

Based on this initial formulation, “genome” can accurately be taken to mean either the total gene complement (interchangeably with Johannsen’s “genotype”), or the total DNA amount per haploid chromosome set – but not both, as we now know that these are not correlated with one another. This latter issue remains the subject of active study, and I shall have much more to say about it in future postings.



Bateson, W. 1906. A text-book of genetics. Nature 74: 146-147.

Bateson, W. 1928. Letter to Sedgwick, April 18, 1905. In William Bateson, F.R.S.: His Essays and Addresses (ed. B. Bateson), pp. 93. Cambridge University Press, Cambridge.

De Vries, H. 1889. Intrazelluläre Pangenesis. Fischer, Jena.

Gregory, T.R. 2001. The bigger the C-value, the larger the cell: genome size and red blood cell size in vertebrates. Blood Cells, Molecules, and Diseases 27: 830-843.

Johannsen, W. 1909. Elemente der Exakten Erblichkeitslehre. Fischer, Jena.

Johannsen, W. 1911. The genotype conception of heredity. American Naturalist 45: 129-159.

Lederberg, J. and A.T. McCray. 2001. ‘Ome sweet ‘omics — a genealogical treasury of words. The Scientist 15: 8.

Portugal, F.H. and J.S. Cohen. 1977. A Century of DNA. MIT Press, Cambridge, MA.

Winkler, H. 1920. Verbreitung und Ursache der Parthenogenesis im Pflanzen- und Tierreiche. Verlag Fischer, Jena.

The discovery of DNA.

The following is an adapted excerpt from The Evolution of the Genome, © 2005 Elsevier Academic Press.

In the mid- to late 1800s (and to an extent, well into the 20th century), proteins were considered the most significant components of cells. Their very name reflects this fact, being derived from the Greek proteios, meaning “of the first importance”. In 1869, while developing techniques to isolate nuclei from white blood cells (which he obtained from pus-filled bandages, a plentiful source of cellular material in the days before antiseptic surgical techniques), 25-year-old Swiss biologist Friedrich Miescher stumbled across a phosphorus-rich substance which, he stated, “cannot belong among any of the protein substances known hitherto” (quoted in Portugal and Cohen 1977 [1]). To this substance he gave the name nuclein, and published his results in 1871 after confirmation of the remarkable finding by his advisor, Felix Hoppe-Seyler (for reviews, see Mirsky 1968; Portugal and Cohen 1977; Lagerkvist 1998; Wolf 2003) [2, 3].

Miescher continued his work on nuclein for many years, in part refuting claims that it was merely a mixture of inorganic phosphate salts and proteins. Yet Miescher never departed from the common proteinocentric wisdom, and instead suggested that the nuclein molecule served as little more than a storehouse of cellular phosphorus. In 1879, Walther Flemming coined the term chromatin (Gr. “colour”) in reference to the coloured components of cell nuclei observed after treatment with various chemical stains, and in 1888 Wilhelm Waldeyer used the term chromosome (Gr. “colour body”) to describe the threads of stainable material found within the nucleus. For some time, debate existed over whether or not chromatin and nuclein were one and the same. The argument was largely settled when Richard Altmann obtained protein-free samples of nuclein in 1889. As part of this work, Altmann proposed a more appropriate (and familiar) term for the substance, nucleic acid. Over time, the components of the nucleic acid molecules were deduced, and by the 1930s, nuclein had become desoxyribose nucleic acid, and later, deoxyribonucleic acid (DNA).

The important developments that took place over the ensuing decades are well documented (e.g., Portugal and Cohen 1977; Judson 1996), including early hypotheses of DNA’s structure (such as Phoebus Levene’s failed tetranucleotide hypothesis, or the incorrect helical model of Linus Pauling), Erwin Chargaff’s discovery of the constant ratio of the two purines with their respective pyrimidines, Rosalind Franklin’s x-ray crystallography of the DNA molecule, and other key developments leading up to Watson and Crick’s monumental synthesis in 1953 and the subsequent deciphering of the genetic code.

Miescher died of tuberculosis in 1895 at the age of 51. His was a major contribution to biology, as were the discoveries of countless other individuals up to and beyond the elucidation of DNA’s physical structure and the dawn of molecular genetics.



[1] I stumbled across this book at a used bookstore in Madison, Wisconsin at the 1999 SSE meeting. That was in the days before searches on Amazon.com, Google, and Wikipedia were easy and routine, and I was unaware that the book existed so I considered it quite a lucky find.

[2] Hoppe-Seyler also had his own journal, in which Miescher’s results were published, but was not a co-author on the paper. My, how things have changed!

[3] For more information about Miescher, see the following:


Judson, H.F. 1996. The Eighth Day of Creation. CSHL Press, Plainview, NY.

Lagerkvist, U. 1998. DNA Pioneers and Their Legacy. Yale University Press, New Haven, CT.

Miescher, F. 1871. Über die chemische Zusammensetzung der Eiterzellen. Hoppe-Seyler’s medizinisch-chemischen Untersuchungen 4: 441-460.

Mirsky, A.E. 1968. The discovery of DNA. Scientific American 218 (June): 78-88.

Portugal, F.H. and J.S. Cohen. 1977. A Century of DNA. MIT Press, Cambridge, MA.

Tracy, K. 2005. Friedrich Miescher and the Story of Nucleic Acid. Mitchell Lane Publishers.

Wolf, G. 2003. Friedrich Miescher, the man who discovered DNA.