I am a biologist and author of more than 80 scientific papers and 10 books, including Science Set Free, called The Science Delusion in the UK. My website sheldrake.org has links to many articles, audio and video recordings. You can email me at sheldrake@sheldrake.org.

Epigenetics and Soviet Biology

One of the biggest controversies in twentieth-century biology was about the inheritance of acquired characteristics, the ability of animals and plants to inherit adaptations acquired by their ancestors. For example, if a dog was terrified of butchers because he had been mistreated by one, his offspring would tend to inherit his fear. Charles Darwin wrote a letter to Nature describing just such a case. The opposing view, promoted by the science of genetics, asserted that organisms could not inherit features their ancestors had acquired; they only passed on genes that they themselves had inherited.

In Darwin’s day, most people assumed that acquired characteristics could indeed be inherited. Jean-Baptiste Lamarck took this for granted in his theory of evolution published more than 50 years before Darwin’s, and the inheritance of acquired characters is often referred to as “Lamarckian inheritance.” Darwin shared Lamarck’s assumption and cited many examples to support it in his book The Variation of Animals and Plants Under Domestication (1875).

Lamarck emphasized the role of behaviour in evolution. Animals developed new habits in response to needs, which led to the use or disuse of organs, which were accordingly either strengthened or weakened. Over generations, these changes became increasingly hereditary. Lamarck’s most famous example was the giraffe. He thought giraffes’ long necks were acquired through the habit of stretching up to eat the leaves of trees. In this respect too, Darwin agreed with Lamarck. For example, ostriches, he suggested, may have lost the power of flight through disuse and gained stronger legs through increased use over successive generations.

The problem was that no one knew how acquired characteristics could be inherited. Darwin tried to explain it with his hypothesis of “pangenesis”. He proposed that all the units of the body threw off tiny “gemmules” of “formative matter”, which were dispersed throughout the body and aggregated in the buds of plants and in the germ cells of animals, through which they were transmitted to the offspring. This “Provisional Hypothesis of Pangenesis” appeared in the penultimate chapter of The Variation of Animals and Plants Under Domestication. Several modern theories of epigenetics are similar, but instead of gemmules they propose protein or RNA molecules.

Pangenesis was rejected by Mendelian genetics, the theory that dominated twentieth-century biology in the West. Heredity was genetic, not Lamarckian or Darwinian. The neo-Darwinian theory of evolution differed from the Darwinian theory by rejecting the inheritance of acquired characteristics. Neo-Darwinism became the ruling orthodoxy in the West from the 1930s onwards. Lamarckian inheritance was treated as heresy.

Meanwhile, in the Soviet Union the inheritance of acquired characteristics was the orthodox doctrine from the 1930s to the 1960s. Under the leadership of Trofim D. Lysenko, much Soviet research on inheritance supported the inheritance of acquired characters. Stalin favoured Lysenko, and geneticists were persecuted. This Stalinist approach increased the opposition to the inheritance of acquired characteristics in the West. The nature of inheritance became intensely politicized. Ideology, rather than scientific evidence, dominated the dispute.

The Western taboo against the inheritance of acquired characteristics began to dissolve around the turn of the millennium. There is a rapidly growing body of evidence that acquired characters can indeed be inherited. This kind of inheritance is now called “epigenetic inheritance.” In this context, the word epigenetic signifies “over and above genetic.” Some kinds of epigenetic inheritance depend on chemical attachments to genes, particularly of methyl groups. Genes can be “switched off” by the methylation of the DNA itself or of the proteins that bind to it.

This is a fast-growing field of research, and there are now many examples of epigenetic inheritance in plants and animals. For example, in a recent study with mice, the fears of the fathers were passed on to their children and grandchildren. Male mice were made averse to the smell of a synthetic chemical, acetophenone, by being given mild electric shocks when they smelled it. For at least two generations, their offspring reacted with fear to this smell, even though they had never been exposed to it before.

In the mid-twentieth century, Lysenko and other Soviet biologists were demonized in the West for affirming an inheritance of acquired characteristics in animals and plants. Western biologists assumed that this Soviet research must be fraudulent. But in the light of epigenetics, can we be sure that almost all the papers on inheritance published in the USSR were wrong? Were all Soviet scientists totally brainwashed? Or were some of them sincerely reporting what they found? Among the many thousands of papers in Soviet biology journals, there may be seams of gold. No doubt these journals are still available in scientific libraries. If Russian-speaking biologists reviewed this literature they might unearth great treasures.

Wikipedia Under Threat

Wikipedia is a wonderful invention. But precisely because it’s so trusted and convenient, people with their own agendas keep trying to take it over. Editing wars are common. According to researchers at Oxford University, the most controversial subjects worldwide include Israel and God.

This is not surprising. Everyone knows that there are opposing views on politics and religion, and many people recognise a biased account when they see it. But in the realm of science, things are different. Most people have no scientific expertise and believe that science is objective. Their trust is now being abused systematically by a highly motivated group of activists called Guerrilla Skepticism on Wikipedia.

Scepticism is a normal, healthy attitude of doubt. Unfortunately it can also be used as a weapon to attack opponents. In scientific and medical contexts, organized skepticism is a crusade to propagate scientific materialism. (In Britain, skeptical organizations use the American spelling, with a k.) Most materialists believe that the mind is nothing more than the physical activity of the brain, psychic phenomena are illusory, and complementary and alternative medical systems are fraudulent, or at best produce placebo effects. Most materialists are also atheists: if science can, in principle, explain everything, there is no need for God. Belief in God is a hangover from a pre-scientific age. God is nothing but an idea in human minds and hence in human brains. Several advocacy organizations promote this materialist ideology in the media and in educational institutions. The largest and best funded is the Committee for Skeptical Inquiry (CSI), which publishes The Skeptical Inquirer magazine. The Guerrilla Skeptics have carried the crusading zeal of organized skepticism into the realm of Wikipedia, and use it as a soapbox to propagate their beliefs.

There is a conflict at the heart of science between the spirit of free enquiry and the materialist worldview. I gave a talk on this subject at a TEDx event in London earlier this year, in which I discussed the ten dogmas of modern science. I showed that by turning the dogmas into questions they can be examined critically in the light of the findings of science itself. For example, the assumption that the total amount of matter and energy is always the same becomes “Is the total amount of matter and energy always the same?” Most physicists now think that the universe contains vast amounts of dark matter and dark energy, whose nature is literally obscure, constituting 96 percent of the universe. Regular matter and energy are only about 4 percent of reality. Is the total amount of dark matter always the same? No one knows. Some physicists think that the total amount of dark energy increases as the universe expands. Proponents of a hypothetical form of dark energy called quintessence specifically suggest that it produces different amounts of energy over time.

My talk was removed from the TEDx web site after furious protests from militant skeptics, who accused me of propagating pseudoscience. This sparked off a controversy that went viral on the internet, documented here. Most participants in online discussions were very disappointed that TED had been frightened into submission, and TED themselves retracted the accusations against me.

This summer, soon after the TED controversy, a commando squad of skeptics captured the Wikipedia page about me. They have occupied and controlled it ever since, rewriting my biography with as much negative bias as possible, to the point of defamation. At the beginning of the “Talk” page, on which editorial changes are discussed, they have posted a warning to editors who do not share their biases: “A common objection made by new arrivals is that the article presents Sheldrake’s work in an unsympathetic light and that criticism of it is too extensive or violates Wikipedia’s Neutral Point of View policy.” Several new arrivals have indeed attempted to restore a more balanced picture, but have had a bewildering variety of rules thrown at them, and have been warned that they will be banned if they persist in opposing the skeptics. Craig Weiler gives some telling examples in his newly posted blog called “The Wikipedia battle for Rupert Sheldrake’s biography”. Fortunately, a few editors arguing for a more neutral point of view have not yet been bullied into silence. An editing war is raging as you read this.

The Guerrilla Skeptics are well trained, highly motivated, have an ideological agenda, and operate in teams, contrary to Wikipedia rules. The mastermind behind this organization is Susan Gerbic. She explains how her teams work in a training video. She now has over 90 guerrillas operating in 17 different languages. The teams are coordinated through secret Facebook pages. They check the credentials of new recruits to avoid infiltration. Their aim is to “control information”, and Ms Gerbic glories in the power that she and her warriors wield. They have already seized control of many Wikipedia pages, deleted entries on subjects they disapprove of, and boosted the biographies of atheists.

As the Guerrilla Skeptics have demonstrated, Wikipedia can easily be subverted by determined groups of activists, despite its well-intentioned policies and mediation procedures. Perhaps one solution would be for experienced editors to visit the talk pages of sites where editing wars are taking place, rather like UN Peacekeeping Forces, and try to re-establish a neutral point of view. But this would not help in cases where there are no editors to oppose the Guerrilla Skeptics, or where they have been silenced.

If nothing is done, Wikipedia will lose its credibility, and its financial backers will withdraw their support. I hope the noble aims of Wikipedia will prevail.

Thinking of someone and then meeting unexpectedly

In the course of my research on unexplained human abilities, more than 150 people have told me about an experience that I had never before seen discussed. For no particular reason, they thought about a friend or acquaintance, and then, to their surprise, met that person shortly afterward. No one thinks it strange if he meets someone he was expecting to meet, or someone he encounters frequently. It is with unexpected meetings that the phenomenon is so striking. For example, Andreas Thomopoulos, a film director from Athens, was visiting Paris with his wife. “Walking through the streets, we thought of a close student friend of mine in London. We wondered how he was nowadays since I hadn’t seen him for over twenty years. Shortly after, on going around a corner, we bumped straight into him!” Mary Flanagan, of Hoboken, New Jersey, had a similar experience: “Walking down the street, I was thinking of someone I had not seen or spoken to for three years and who lives in a different city. I met her on the street about ten minutes after I started thinking about her.”

Anticipations of meetings even seem to occur with vehicles rather than with specific people. David Campbell had a job during the school holidays working on a construction project in County Durham, in the north of England. “We traveled to the site in the company’s van, and for no good reason I memorized the registration number of the van, I can still remember it. Anyway, the job finished and I went back to school. A couple of years later I was out with the local cycling club one Sunday morning when for some inexplicable reason I started thinking about this builder’s van and its number plate. About half a minute later the van passed me going in the opposite direction!”

Some people also anticipate encounters with animals. Some hunters and wildlife photographers seem to anticipate meetings with animals they are trying to hunt or to photograph. Some anglers have had similar experiences. Paul Hicks, for example, used to be an avid angler and would sometimes camp out by the water’s edge for days on end. “There were instances I knew for a fact that within a minute or two I was going to catch a fish. It was uncanny when that happened. It wasn’t just because the weather was good, or the time of day was right or whatever, it was just a knowledge that something was going to happen.”

Are all these cases just coincidence and selective memory? Perhaps. But perhaps there is more to them, and only further research will be able to settle this question. For a start, people who have such anticipations quite frequently could make a note of them, and then see how many were followed by actual meetings. A statistical analysis should be able to reveal whether their anticipations could in fact be explained by the coincidence hypothesis.
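One way to run such an analysis: a one-tailed binomial test comparing the number of anticipations followed by meetings against an assumed chance rate. Here is a minimal sketch in Python (all figures are hypothetical, including the 5 percent baseline, which would itself have to be estimated from the diaries):

```python
# A minimal sketch of the suggested analysis, assuming a hypothetical diary
# in which each recorded anticipation is marked as followed by a meeting or not.
from math import comb

def binomial_p_value(hits: int, trials: int, chance_rate: float) -> float:
    """One-tailed probability of at least `hits` successes in `trials`
    attempts if meetings follow anticipations only at the chance rate."""
    return sum(
        comb(trials, k) * chance_rate**k * (1 - chance_rate)**(trials - k)
        for k in range(hits, trials + 1)
    )

# Illustrative numbers only: 40 logged anticipations, 9 followed by an
# unexpected meeting, against an assumed 5% chance of meeting any given
# person by coincidence in the relevant time window.
print(f"p = {binomial_p_value(9, 40, 0.05):.4f}")
```

A very small p-value would mean the recorded hits are hard to explain by the coincidence hypothesis alone; the hard part in practice is justifying the baseline chance rate.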

There is a superficial similarity between anticipating meetings and anticipating telephone calls. But in fact the two situations are very different. In the case of telephone calls, one person thinks about the other and forms an intention to call. This intention is directed toward the other person, creating appropriate conditions for telepathy. By contrast, in the case of unexpected meetings, the person thought about is not usually intending to meet the other person, or thinking about him or her. The anticipation of meetings therefore seems more precognitive than telepathic.

In addition, the anticipation of phone calls usually happens with people to whom a person is closely bonded, favoring the telepathic explanation. By contrast, the anticipation of meetings happens with mere acquaintances, or even with vehicles, or with wild animals.

If you have had an experience like these, please let me know at sheldrake@sheldrake.org

I discuss these and other unexplained human abilities in the new edition of my book The Sense of Being Stared At, which has just been published in the US.  


The Spiritual Uses of Science

A podcast with Mark Vernon. Can science supply useful metaphors, or are terms like “quantum” and “neuro” overused and inappropriate? In some areas, research findings have a real contribution to make in discussing the power of belief, especially in relation to the placebo response.

The Active Voice

The simplest and cheapest of all reforms within institutional science is to switch from the passive to the active voice in writing about science.  Many people have already made this change, but some teachers in schools and universities do not realise that they and their students are free to write more naturally.

The idealized objectivity of science is reflected in the use of the passive voice in many science reports: “A test tube was taken…” instead of “I took a test tube.” All research scientists know that writing in the passive voice is artificial; they are not disembodied observers, but people doing research. Technocrats also use the passive voice to give their reports an air of scientific authority, dressing up opinions as objective facts.

The passive style did not become fashionable in science until the end of the nineteenth century. Earlier scientists like Isaac Newton, Michael Faraday and Charles Darwin used the active voice. The passive was introduced to make science seem more objective, impersonal and professional. Its heyday in the scientific literature was from 1920 to 1970. But times are changing. Many scientists abandoned this convention in the 1970s and 1980s.

In 1999, I was astonished to read in my 11-year-old son’s science notebook, “The test tube was heated and carefully smelt.” At primary school his science reports had been lively and vivid, but when he moved to secondary school they became stilted and artificial. His teachers told him to write that way, and gave him a style sheet to copy. 

I thought that schools had abandoned this practice years ago, and was curious to find out how widespread it still was. In 2000, I carried out a survey of 172 secondary schools in Britain to find out how many insisted on the passive style.

Overall, 42 per cent of the schools still promoted the passive voice, 45 per cent the active, and 13 per cent had no preference.

Most of the teachers enforcing the use of the passive voice said they were simply following convention. No one was enthusiastic about it. They taught it out of a sense of duty because they believed that leading scientists and journals required it. Some thought that examination boards insisted on it, but this was not true. I found that all the UK examination boards accepted reports in the active or the passive voice.

I also found that most scientific journals accepted papers in the active voice; some, including Nature, positively encouraged it. I surveyed 55 journals in the physical and biological sciences, and found only two that required passive constructions.

When Lord May, then President of the Royal Society, read the results of my survey of school science teaching, he was “horrified” that so many favoured the passive: “I would put my own view so strongly as to say that, these days, the use of the passive voice in a research paper is the hallmark of second-rate work,” he said. “In the long run, more authority is conferred by the direct approach than by the pedantic pretence that some impersonal force is performing the research.” May’s views were shared by many other eminent scientists, including the Astronomer Royal, Martin Rees, who succeeded Lord May as President of the Royal Society, and Bruce Alberts, then President of the US National Academy of Sciences.

Nevertheless, old habits die hard, and science teachers in many schools still insist that their pupils write in the passive voice.  In a recent survey I carried out, science teachers in 30 percent of British secondary schools were still insisting on the passive voice. This is an outdated practice. “Primary and secondary teachers should, without any reservation, be encouraging all their students to be writing in the active voice,” said Lord May.

Switching from the passive to the active voice in science reports is a simple reform that costs nothing and makes science writing more truthful and more readable.

Rat learning and morphic resonance

This is extracted from Chapter 11 of Rupert Sheldrake’s book Morphic Resonance (in the US) and A New Science of Life (in the UK).

In mechanistic biology, a sharp distinction is drawn between innate and learned behaviour: the former is assumed to be ‘genetically programmed’ or ‘coded’ in the DNA, while the latter is supposed to result from physical and chemical changes in the nervous system. There is no conceivable way in which such changes could specifically modify the DNA, as the Lamarckian theory would require; it is therefore considered impossible for learned behaviour acquired by an animal to be inherited by its offspring (excluding, of course, ‘cultural inheritance’, whereby the offspring learn patterns of behaviour from their parents or other adults).

By contrast, according to the hypothesis of formative causation, there is no difference in kind between innate and learned behaviour: both depend on motor fields given by morphic resonance (Section 10.1). This hypothesis therefore admits a possible transmission of learned behaviour from one animal to another, and leads to testable predictions which differ not only from those of the orthodox theory of inheritance, but also from those of the Lamarckian theory, and from inheritance through epigenetic modifications of gene expression.

Consider the following experiment. Animals of an inbred strain are placed under conditions in which they learn to respond to a given stimulus in a characteristic way. They are then made to repeat this pattern of behaviour many times. Ex hypothesi, the new behavioural field will be reinforced by morphic resonance, which will not only cause the behaviour of the trained animals to become increasingly habitual, but will also affect, although less specifically, any similar animal exposed to a similar stimulus: the larger the number of animals in the past that have learned the task, the easier it should be for subsequent similar animals to learn it. Therefore in an experiment of this type it should be possible to observe a progressive increase in the rate of learning not only in animals descended from trained ancestors, but also in genetically similar animals descended from untrained ancestors. This prediction differs from that of the Lamarckian theory, according to which only the descendants of trained animals should learn quicker. And on the conventional theory, there should be no increase in the rate of learning of the descendants of untrained or trained animals.

To summarize: an increased rate of learning in successive generations of both trained and untrained lines would support the hypothesis of formative causation; an increase only in trained lines, the Lamarckian theory; and an increase in neither, the orthodox theory.
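The three predictions can be restated compactly in code. This toy sketch is purely illustrative; how “improvement” across generations would be detected statistically is left abstract:

```python
def interpret(trained_improves: bool, untrained_improves: bool) -> str:
    """Map an observed pattern of improvement across generations to the
    hypothesis it supports, following the three predictions summarized above."""
    if trained_improves and untrained_improves:
        return "formative causation (morphic resonance)"
    if trained_improves:
        return "Lamarckian inheritance"
    if not untrained_improves:
        return "orthodox theory (no inherited effect of training)"
    return "anomalous: improvement only in the untrained line"
```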

Tests of this type have in fact already been performed. The results support the hypothesis of formative causation.

The original experiment was started by William McDougall at Harvard in 1920, in the hope of providing a thorough test of the possibility of Lamarckian inheritance. The experimental animals were white rats, of the Wistar strain, that had been carefully inbred under laboratory conditions for many generations. Their task was to learn to escape from a specially constructed tank of water by swimming to one of two gangways that led out of the water. The ‘wrong’ gangway was brightly illuminated, while the ‘right’ gangway was not. If the rat left by the illuminated gangway it received an electric shock. The two gangways were illuminated alternately, one on one occasion, the other on the next. The number of errors made by a rat before it learned to leave the tank by the non-illuminated gangway gave a measure of its rate of learning:

Some of the rats required as many as 330 immersions, involving approximately half that number of shocks, before they learnt to avoid the bright gangway. The process of learning was in all cases one which suddenly reached a critical point. For a long time the animal would show clear evidence of aversion for the bright gangway, frequently hesitating before it, turning back from it, or taking it with a desperate rush; but, not having grasped the simple relation of constant correlation between bright light and shock, he would continue to take the bright route as often or nearly as often as the other. Then, at last, would come a point in his training at which he would, if he found himself facing the bright light, definitely and decisively turn about, seek the other passage, and quietly climb out by the dim gangway. After attaining this point, no animal made the error of again taking the bright gangway, or only in very rare instances.[i]

In each generation, the rats from which the next generation were to be bred were selected at random before their rate of learning was measured, although mating took place only after they were tested. This procedure was adopted to avoid any possibility of conscious or unconscious selection in favour of quicker-learning rats.

This experiment was continued for 32 generations and took 15 years to complete. In accordance with the Lamarckian theory, there was a marked tendency for rats in successive generations to learn more quickly. This is indicated by the average number of errors made by rats in the first group of eight generations, which was over 56, compared with 41, 29 and 20 in the second, third and fourth groups of eight generations, respectively.[ii] The difference was apparent not only in the quantitative results, but also in the actual behaviour of the rats, which became more cautious and tentative in the later generations.[iii]

McDougall anticipated the criticism that in spite of his random selection of parents in each generation, some sort of selection in favour of quicker-learning rats could nevertheless have crept in. In order to test this possibility, he started a new experiment, with a different batch of rats, in which parents were indeed selected on the basis of their learning score. In one series, only quick learners were bred from in each generation, and in the other series only slow learners. As expected, the progeny of the quick learners tended to learn relatively quickly, while the progeny of the slow learners learned relatively slowly. However, even in the latter series, the performance of the later generations improved very markedly, in spite of repeated selection in favour of slow learning (Fig. 29).

Figure 29  The average number of errors in successive generations of rats selected in each generation for slowness of learning. (Data from McDougall, 1938.)

These experiments were done carefully, and critics were unable to dismiss the results on the grounds of flaws in technique. But they did draw attention to a weakness in the experimental design: McDougall had failed to test systematically the change in the rate of learning of rats whose parents had not been trained.

One of these critics, F.A.E. Crew, of Edinburgh University, repeated McDougall’s experiment with rats derived from the same inbred strain, using a tank of similar design. He included a parallel line of ‘untrained’ rats, some of which were tested in each generation for their rate of learning, while others, which were not tested, served as the parents of the next. Over the 18 generations of this experiment, Crew found no systematic change in the rate of learning either in the trained or in the untrained line.[iv] At first, this seemed to cast serious doubt on McDougall’s findings. However, Crew’s results were not directly comparable with McDougall’s in three important respects. First, the rats found it much easier to learn the task in his experiment than in the earlier generations of McDougall’s. So pronounced was this effect that a considerable number of rats in both trained and untrained lines ‘learned’ the task immediately without receiving a single shock! The average scores of Crew’s rats right from the beginning were similar to those of McDougall’s after more than 30 generations of training. Neither Crew nor McDougall was able to provide a satisfactory explanation of this discrepancy. But, as McDougall pointed out, since the purpose of the investigation was to bring to light any effect of training on subsequent generations, an experiment in which some rats received no training at all and many others received very little would not be qualified to demonstrate this effect.[v] Second, Crew’s results showed large and apparently random fluctuations from generation to generation, far larger than the fluctuations in McDougall’s results, which could well have obscured any tendency to improve in the scores of later generations. Third, Crew adopted a policy of very intensive inbreeding, crossing only brothers with their sisters in each generation. He had not expected this to have adverse effects, since the rats came from an inbred stock to start with:

Yet the history of my stock reads like an experiment in inbreeding. There is a broad base of family lines and a narrow apex of two remaining lines. The reproductive rate falls and line after line becomes extinct. [vi]

Even in the surviving lines, a considerable number of animals were born with such extreme abnormalities that they had to be discarded. The harmful effects of this severe inbreeding could well have masked any tendency for the rate of learning to improve. Altogether, these defects in Crew’s experiment mean that the results can only be regarded as inconclusive; and in fact he himself was of the opinion that the question remained open.[vii]

Fortunately, this was not the end of the story. W. E. Agar and his colleagues at Melbourne University carried out the experiment again, using methods that did not suffer from the disadvantages of Crew’s. Over a period of 20 years, they measured the rates of learning of trained and untrained lines for 50 successive generations. In agreement with McDougall, they found that there was a marked tendency for rats of the trained line to learn more quickly in subsequent generations. But exactly the same tendency was also found in the untrained line. [viii]

It might be wondered why McDougall did not also observe a similar effect in his own untrained lines. The answer is that he did. Although he tested control rats from the original untrained stock only occasionally, he noticed ‘the disturbing fact that the groups of controls derived from this stock in the years 1926, 1927, 1930 and 1932 show a diminution in the average number of errors from 1927 to 1932’. He thought this result was probably fortuitous, but added:

It is just possible that the falling off in the average number of errors from 1927 to 1932 represents a real change of constitution of the whole stock, an improvement of it (with respect to this particular faculty) whose nature I am unable to suggest. [ix]

With the publication of the final report by Agar’s group in 1954 the prolonged controversy over ‘McDougall’s Lamarckian Experiment’ came to an end. The similar improvement in both trained and untrained lines ruled out a Lamarckian interpretation. McDougall’s conclusion was refuted, and that seemed to be the end of the matter. On the other hand, his results were confirmed.

These results seemed completely inexplicable; they made no sense in terms of any current ideas, and they were never followed up. But they make very good sense in the light of the hypothesis of formative causation. Of course they cannot in themselves prove the hypothesis; it is always possible to suggest other explanations, for example that the successive generations of rats became increasingly intelligent for an unknown reason unconnected with their training.[x]

In future experiments, the most unambiguous way of testing for the effects of morphic resonance would be to cause large numbers of rats (or any other animals) to learn a new task in one location; and then see if there was an increase in the rate at which similar rats learned to carry out the same task at another location hundreds of miles away. The initial rate of learning at both locations should be more or less the same. Then, according to the hypothesis of formative causation, the rate of learning should increase progressively at the location when large numbers are trained; and a similar increase should also be detectable in the rats at the second location, even though very few rats had been trained there. Obviously, precautions would need to be taken to avoid any possible conscious or unconscious bias on the part of the experimenters. One way would be for experimenters at the second location to test the rate of learning of rats in several different tasks, at regular intervals, say monthly. Then at the first location, the particular task in which thousands of rats would be trained would be chosen at random from this set. Moreover, the time at which the training began would also be selected at random; it might, for example, be four months after the regular tests began at the second location. The experimenters at the second location would not be told either which task had been selected, or when the training had begun at the first location. If, under these conditions, a marked increase in the rate of learning in the selected task were detected at the second location after the training had begun at the first, then this result would provide strong evidence in favour of the hypothesis of formative causation.
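Below is a minimal simulation sketch of this blinded design, written only to show how the random selection, the blinding and the final comparison fit together. Every task name, number and the assumed size of the effect is invented for illustration; it is not a claim about real data:

```python
# A minimal simulation sketch of the blinded two-location design described
# above. All numbers and task names are illustrative assumptions.
import random
import statistics

random.seed(1)

MONTHS = 24
tasks = ["water_tank", "lever_press", "t_maze", "odour_choice"]

# Hidden from the second location's experimenters: which task is trained
# en masse at the first location, and when that training begins.
secret_task = random.choice(tasks)
secret_onset = random.randint(6, 18)

def monthly_errors(task, month):
    """Mean errors-to-criterion for a fresh batch of rats in a given month.
    Purely for illustration, we assume the formative-causation effect holds:
    errors on the secret task decline after mass training begins elsewhere."""
    baseline = 60.0
    effect = 15.0 if (task == secret_task and month >= secret_onset) else 0.0
    return baseline - effect + random.gauss(0, 5)

# The second location simply records scores for every task, every month.
records = {t: [monthly_errors(t, m) for m in range(MONTHS)] for t in tasks}

# Analysis after unblinding: compare each task's scores before vs after onset.
for task, scores in records.items():
    before, after = scores[:secret_onset], scores[secret_onset:]
    drop = statistics.mean(before) - statistics.mean(after)
    flag = "  <-- secret task" if task == secret_task else ""
    print(f"{task:12s} mean drop after onset: {drop:6.1f} errors{flag}")
```

In a real experiment the "drop" would of course be tested statistically, and only the secret task should show it.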

An effect of this type might well have occurred when Crew and Agar’s group repeated McDougall’s work. In both cases, their rats started off learning the task considerably quicker than McDougall’s when he first began his experiment. [xi]

If the experiment proposed above were actually performed, and if it gave positive results, it would not be fully reproducible by its very nature: in attempts to repeat it, the rats would be influenced by morphic resonance from the rats in the original experiment. To demonstrate the same effect again and again, it would be necessary to change either the task or the species used in each experiment.

References

AGAR, W.E., DRUMMOND, F.H., and TIEGS, O.W. (1942) Second report on a test of McDougall’s Lamarckian experiment on the training of rats. Journal of Experimental Biology 19, 158-67.

AGAR, W.E., DRUMMOND, F.H., TIEGS, O.W., and GUNSON, M.M. (1954) Fourth (final) report on a test of McDougall’s Lamarckian experiment on the training of rats. Journal of Experimental Biology 31, 307-21.

CREW, F.A.E. (1936) A repetition of McDougall’s Lamarckian experiment. Journal of Genetics 33, 61-101.

McDOUGALL, W. (1927) An experiment for the testing of the hypothesis of Lamarck. British Journal of Psychology 17, 267-304.

McDOUGALL, W. (1930) Second report on a Lamarckian experiment. British Journal of Psychology 20, 201-18.

McDOUGALL, W. (1938) Fourth report on a Lamarckian experiment. British Journal of Psychology 28, 321-45.

RHINE, J.B., and McDOUGALL, W. (1933) Third report on a Lamarckian experiment. British Journal of Psychology 24, 213-35.

TINBERGEN, N. (1951) The Study of Instinct. Oxford: Clarendon Press.

[i] McDougall (1927), p. 282.

[ii] McDougall (1938).

[iii] McDougall (1930).

[iv] Crew (1936).

[v] McDougall (1938).

[vi] Crew (1936), p. 75.

[vii] Tinbergen (1951), p. 201.

[viii] Agar, Drummond, Tiegs and Gunson (1954).

[ix] Rhine and McDougall (1933), p. 223.

[x] A number of possible explanations were suggested at the time these experiments were being carried out; they are discussed in McDougall’s papers, to which the interested reader should refer. None of these explanations turned out to be plausible on closer examination. Agar et al. (1954) noticed that fluctuations in the rates of learning were associated with changes, extending over several generations, in the health and vigour of the rats. McDougall had already noted a similar effect. A statistical analysis showed that there was indeed a low but significant (at the 1% level of probability) correlation between vigour (measured in terms of fertility) and learning rates in the ‘trained’ line, but not in the ‘untrained’ line. However, if only the first forty generations were considered, the coefficients of correlation were somewhat higher: 0.40 in the ‘trained’ line, and 0.42 in the ‘untrained’. But while this correlation may help to account for the fluctuations in the results, it cannot plausibly explain the overall trend. According to standard statistical theory, the proportion of the variation ‘explained’ by a correlated variable is given by the square of the correlation coefficient, in this case (0.4)² = 0.16. In other words, variations in vigour account for only 16% of the changes in the rate of learning.

[xi] McDougall estimated that the average number of errors in his first generation was over 165. In Crew’s experiment this figure was 24, and in Agar’s, 72; see the discussions in Crew (1936), and in Agar et al. (1942). If Agar’s group had used rats of identical parentage and followed the same procedures as Crew, their initial score might have been expected to be even lower than his. However, owing to the different parentage of their rats, and to differences in their testing procedure, the results are not fully comparable. Nevertheless the greater facility of learning in these later experiments is suggestive.

Is Materialism Inherently Atheistic?

This Science Set Free podcast is the last in a series of three discussions with Mark Vernon, author of How To Be An Agnostic

Can Materialists Have Free Choice?

This Science Set Free podcast is the second in a series of three discussions with Mark Vernon, author of How To Be An Agnostic

How the Universal Gravitational Constant Varies

Physics is based on the assumption that certain fundamental features of nature are constant. Some constants are considered to be more fundamental than others, including the velocity of light c and the Universal Gravitational Constant, known to physicists as Big G. Unlike the constants of mathematics, such as π, the values of the constants of nature cannot be calculated from first principles: they depend on laboratory measurements. As the name implies, the physical constants are supposed to be changeless. They are believed to reflect an underlying constancy of nature, part of the standard assumption of physics that the laws and constants of nature are fixed forever.

Are the constants really constant? The measured values continually change, as I show in my book Science Set Free (The Science Delusion in the UK). They are regularly adjusted by international committees of experts known as metrologists. Old values are replaced by new “best values”, based on recent data from laboratories around the world.

Within their laboratories, metrologists strive for ever-greater precision. In so doing, they reject unexpected data on the grounds that they must be errors. Then, after deviant measurements have been weeded out, they average the values obtained at different times, and subject the final value to a series of corrections. Finally, in arriving at the latest “best values”, international committees of experts select, adjust and average the data from an international selection of laboratories.

 Despite these variations, most scientists take it for granted that the constants themselves are really constant; the variations in their values are simply the result of experimental errors.

The oldest of the constants, Newton’s Universal Gravitational Constant, Big G, shows the largest variations. As methods of measurement became more precise, the disparity in measurements of G by different laboratories increased, rather than decreased.

Between 1973 and 2010, the lowest average value of G was 6.6659 × 10⁻¹¹ m³ kg⁻¹ s⁻², and the highest 6.734 × 10⁻¹¹, a difference of about 1 percent. These published values are given to at least three decimal places, and sometimes to five, with estimated errors of a few parts per million. Either this appearance of precision is illusory, or G really does change. The difference between recent high and low values is more than 40 times greater than the estimated errors (expressed as standard deviations).

 What if G really does change? Maybe its measured value is affected by changes in the earth’s astronomical environment, as the earth moves around the sun and as the solar system moves within the galaxy.  Or maybe there are inherent fluctuations in G.  Such changes would never be noticed as long as measurements are averaged over time and averaged across laboratories.

In 1998, the US National Institute of Standards and Technology published values of G taken on different days, revealing a remarkable range. On one day the value was 6.73; a few months later it was 6.64, 1.3 percent lower. (The references for all the data cited in this blog are given in Science Set Free/The Science Delusion.)

In 2002, a team led by Mikhail Gershteyn, of the Massachusetts Institute of Technology, published the first systematic attempt to study changes in G at different times of day and night. G was measured around the clock for seven months, using two independent methods. They found a clear daily rhythm, with maximum values of G 23.93 hours apart, correlating with the length of the sidereal day, the period of the earth’s rotation in relation to the stars.
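For readers curious how such a rhythm could be checked, one simple approach is to compare how much variance a best-fit sinusoid explains at the solar-day period (24.00 hours) versus the sidereal-day period (23.93 hours). A sketch, assuming NumPy and hypothetical data arrays; this is not Gershteyn’s published method, just one way to test for the periodicity:

```python
# Sketch: distinguishing a solar-day from a sidereal-day rhythm in a series
# of G measurements. `t_hours` and `values` are hypothetical NumPy arrays of
# measurement times (in hours) and measured G values.
import numpy as np

def sinusoid_power(t_hours, values, period):
    """Fraction of variance explained by the best-fit sinusoid of the
    given period, found by linear least squares."""
    v = values - values.mean()
    w = 2 * np.pi / period
    design = np.column_stack([np.cos(w * t_hours), np.sin(w * t_hours)])
    coef, *_ = np.linalg.lstsq(design, v, rcond=None)
    fit = design @ coef
    return 1 - ((v - fit) ** 2).sum() / (v ** 2).sum()

# Comparing sinusoid_power(t, g, 23.93) with sinusoid_power(t, g, 24.00)
# would indicate whether the rhythm tracks the stars or the sun.
```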

 Gershteyn’s team looked only for daily fluctuations, but G may well vary over longer time periods as well; there is already some evidence of an annual variation.

 By comparing measurements from different locations, it should be possible to find more evidence of underlying patterns. Such measurements already exist, buried in the files of metrological laboratories. The simplest and cheapest starting point for this enquiry would be to collect the measurements of G at different times from laboratories all over the world. Then these measurements could be compared to see if the fluctuations are correlated.  If they are, we will discover something new.

 If the raw data from laboratories around the world were published online, showing the measured values of G at different dates and times, anyone interested could look for patterns. Are the variations in different laboratories correlated, rather than being random errors? This could be an exemplary exercise in open, participatory science.  
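As a sketch of what such a comparison might look like, suppose the pooled measurements were published as a simple table of laboratory, month and measured value. The file name and column layout below are invented for illustration, not a real dataset:

```python
# A minimal sketch of the proposed analysis, assuming a hypothetical CSV file
# "g_measurements.csv" with columns: lab, date (YYYY-MM), value (in units of
# 10^-11 m^3 kg^-1 s^-2).
import csv
from collections import defaultdict
from statistics import mean, correlation  # statistics.correlation: Python 3.10+

series = defaultdict(dict)  # lab -> {month: [measured values]}
with open("g_measurements.csv", newline="") as f:
    for row in csv.DictReader(f):
        series[row["lab"]].setdefault(row["date"], []).append(float(row["value"]))

# Average repeated measurements within each month.
monthly = {lab: {m: mean(vs) for m, vs in d.items()} for lab, d in series.items()}

# For each pair of labs, correlate their values over the months they share.
# Correlated fluctuations across independent labs would be hard to explain
# as random experimental errors.
labs = sorted(monthly)
for i, a in enumerate(labs):
    for b in labs[i + 1:]:
        common = sorted(set(monthly[a]) & set(monthly[b]))
        if len(common) < 3:
            continue  # too few overlapping months to correlate
        xs = [monthly[a][m] for m in common]
        ys = [monthly[b][m] for m in common]
        print(f"{a} vs {b}: r = {correlation(xs, ys):+.2f} over {len(common)} months")
```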

 If you have access to raw data, or would like to help with this project, please get in touch with me at sheldrake@sheldrake.org

Science as Faith 

This Science Set Free podcast is a discussion between Mark Vernon and Rupert Sheldrake about science as a belief system. Mark is a British writer and blogs regularly for The Guardian. His website is www.markvernon.com

The New Scientific Revolution

Before 2012 slips away it’s worth remembering that this is the fiftieth anniversary of the publication of Thomas Kuhn’s hugely influential book, The Structure of Scientific Revolutions, which was itself revolutionary, and has sold more than a million copies worldwide.  Almost every time you hear the word ‘paradigm’, Kuhn’s book is in the background.

 Kuhn made it clear that science is not simply devoted to the rational pursuit of truth, but is subject to human foibles, ambitions, emotions, and peer-group pressures. A paradigm is a theory of reality, a model of the way in which research can be done, and a consensus within a professional group.  At any given time anomalies that do not fit into the paradigm are rejected or ignored, and ‘normal science’ goes on within the agreed framework.  But at times of scientific revolution, ‘one conceptual world view is replaced by another’; the framework itself is enlarged to include anomalies that were previously unexplained.  Some well-known examples of major paradigm shifts are the Copernican revolution in astronomy, the Darwinian theory of evolution, and the relativity and quantum revolutions in twentieth century physics. 

 Are further paradigm shifts likely?  If science is to develop further, they are inevitable. And as old certainties break down all around us in the economic, financial and political worlds, in science the long-established materialist paradigm is in crisis.

In physics, there has been a major shift away from the observable towards the virtual. Since the beginning of this century, matter and energy as we know them have been demoted to 4 percent of the universe. The rest consists of hypothetical dark matter and dark energy. The nature of 96 percent of physical reality is literally obscure. Meanwhile, the observable physical realm is floating on a vast ocean of energy called the zero-point energy field or the quantum vacuum field, from which virtual particles emerge and disappear, mediating all electromagnetic forces. Your eyes are reading these lines through seething virtual photons as your retinas absorb light, and as nerve impulses move up the optic nerve and patterns of electrical activity arise in your brain, all mediated by corresponding patterns of activity within the vacuum field within and around you.

Even the mass of an obviously physical object like a rock arises from virtual particles in hypothetical fields.  In the Standard Model of particle physics, all mass is ultimately explained in terms of the invisible Higgs field, which has a constant strength everywhere. The Higgs boson is supposed to create a cloud of virtual particles in the Higgs field around it, and these virtual particles interact with other quantum particles, giving them mass.

Contemporary theoretical physics is dominated by superstring and M theories, with 10 and 11 dimensions respectively.  These theories are untested and currently untestable. Meanwhile, many cosmologists have adopted the multiverse theory, which asserts that there are trillions of universes besides our own. These are interesting speculations, but they are not old-paradigm materialist science. Reality has dissolved into the physics of the virtual.  

In consciousness studies, materialism is being challenged by a new version of animism or ‘panpsychism’, according to which all self-organizing material systems, like electrons, have a mental as well as a physical aspect. In his recent book, Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature is Almost Certainly False, the atheist philosopher Thomas Nagel argues that a shift to panpsychism is necessary for any viable philosophy of nature that does not need to invoke God.

Meanwhile, in biology, despite the confident claim in the late twentieth century that genes and molecular biology would soon explain the nature of life, no one yet knows how plants and animals develop from fertilized eggs. And following the technical triumph of the Human Genome Project, first announced by Bill Clinton and Tony Blair in June 2000, there were big surprises. There are far fewer human genes than anticipated, a mere 23,000 instead of 100,000. Sea urchins have about 26,000 and rice plants 38,000. Attempts to predict characteristics such as height have shown that genes account for only about 5 percent of the variation from person to person, instead of the 80 percent expected. Unbounded confidence has given way to the ‘missing heritability problem’. Investors in genomics and biotechnology have lost many billions of dollars. A recent report by the Harvard Business School on the biotechnology industry revealed that “only a tiny fraction of companies had ever made a profit” and showed how promises of breakthroughs have failed over and over again.

Materialist science seemed simple and straightforward. But old-style material reality has now dissolved into multi-dimensional virtual physics; increasing numbers of philosophers and neuroscientists are moving towards panpsychism; and biologists are having to think about ‘systems’ and ‘emergent properties’ that cannot be reduced to the molecular level.

Kuhn’s insights, and the subsequent developments in science studies, are not merely of historical relevance, confined to revolutions in the past. Hopefully we can learn from them today. We are in the midst of a new revolution.

- Rupert