Monday, September 05, 2011

Consciousness Before Birth and After Death

What is the difference, to consciousness, between being dead and being not yet born, or more exactly, being not yet even conceived? Are they equivalent states of consciousness?

After you die, your individual consciousness ceases to exist. Religion generally denies this, but that is wishful thinking. The main purpose of religion, after all, is to deny death. For believers, who are alive, not dead, this is a comforting, though delusional idea that has no basis in evidence or logic.

So let us agree, for the purposes of this essay, that individual consciousness ceases to exist when, or sometime very soon after, all the systems of the physical body cease to function.

Now, how is that different in principle from an individual’s state of consciousness prior to being conceived? In that condition (or non-condition), there is no physical body to define the boundary of an individual, and with no body, no individual consciousness. Functionally, then, being unborn (unconceived) is equivalent to being dead. In both cases there is no individual consciousness because the functioning, individual embodiment to support it does not exist. Individual consciousness depends on individual embodiment – which is not to say that consciousness is caused by embodiment; there is no evidence for that. There is simply a dependency, of an unknown kind, between embodiment and consciousness.

It is only during a brief segment of less than 100 years, while we have a functioning body, that we have a functioning individual consciousness. Prior to the beginning of my tiny moment of individualism, the world extended backward in time beyond history and took place entirely without my presence (difficult though that is to imagine). And after my flicker of time is over, the world will continue on, in some form or other, without me (difficult though that is to imagine). Beyond the boundaries of my particular individual life, my consciousness simply does not exist in the universe. Why, then, do we so carefully distinguish between being dead and being not yet born?

An easy, and wrong, answer is that the unborn are full of “potential” while the dead are not. This is a linguistic confusion, for “the unborn” do not exist. What the expression means is that some hypothetical individual who might be conceived and born at a future date would have the potential to have experience, and to cause things to happen in the world. But that is a fact about someone who hypothetically will be alive, not about an entity actually unconceived and unborn at this time. That entity does not literally exist yet. Something that does not exist has no potential for anything.

A person might take a God’s-eye view of human life and declare from that omniscient mountaintop that new individuals will be born, and when they are, will have “potential” for life, whereas the dead never will again (assuming that dead is forever). But there is no God’s-eye view. We are humans, not gods, and we have only a human point of view, which is not omniscient. To take a God’s-eye view is either imagination or self-delusion. If you’re going to pretend you have a God’s-eye view of life and death, you might as well imagine reincarnation, or zombies and vampires if you like, because it is unconstrained fabrication anyway.

From actual human, not presumptive divine, knowledge, we can again only conclude that there is no functional difference between the state of (non-) consciousness prior to conception and after death. That conclusion is an inference based on evidence available to living humans.

However, there is a psychological difference that matters to living humans. I have memory of personal experience that seems to extend backward in time before my birth. This is possible through the magic of history. By contrast, except for religious stories, I do not imagine any personal experience beyond my death, since, unlike for history, there is no human evidence that any experience continues beyond death.

Of the uncountable billions and billions of people who have died on this planet, and among the many thousands who die every day, not a single person has ever “come back” to the living and reported any experience beyond death, or even communicated with us “from the other side” about what postmortem experience is like. In this assertion, I rule out fictional stories, religious fabrications, fraudulent reports, and tales from the mentally abnormal. For history, by comparison, we have written records, fossils, geology, astronomy, genetics, and so forth, which give us verifiable, scientific evidence of what happened, or probably happened, before my individual experience began.

World War II ended before I was born, which seems odd to me, because I feel like I remember it, but that’s because of having studied history. My father fought in WWII and he actually remembers it (or would, if he were not dead). But what would he remember? He would remember his naval experience, his buddies, the situations he was in. He would not remember the entire war, though, because nobody could, because nobody experienced the entire war. People can only literally remember their own experience, not somebody else’s. And yet, after a lifetime of reading about the war, and watching uncountable movies and newsreels covering all aspects of it, I feel I have a personal memory of it, although that is not literally possible because I wasn’t yet born when the war ended. Still, that quasi-memory, a function of internalized history, extends my memory of collective human experience back in time beyond the moment of my conception.

There is a complementary, if not parallel, kind of quasi-experience after death. After a person dies, their memory continues in the collective experience of those who knew that person. In cultures that emphasize ancestor worship, this mnemonic persistence can last quite a long time. Eventually, and inevitably, it fades from the collective memory. Even if a family has an extensive, documented genealogy, we can be confident there is little, if any, collective memory of individuals who lived thousands of years ago, or who lived before history. Some individuals who are deemed noteworthy by a cultural tradition may be remembered less intimately for much longer than average. We collectively remember Albert Einstein, Thomas Aquinas, Jesus Christ, Socrates, and a collection of Egyptian Pharaohs. As more time elapses since a person’s death, the less detailed is the historical record of them and the dimmer the collective memory.

Nevertheless, there is a sense in which an individual’s experience persists beyond death in the collective consciousness of the community in which that individual lived. The dead individual has no personal consciousness or memory, but as long as the community persists, there is yet a persistent psychological trace of that individual’s experience.

To the extent that an individual, while living, defines himself or herself as a member of that community, psychologically constituted of it, then the individual can anticipate being remembered in the collective consciousness after death. That is, in a sense, another form of quasi-memory, an imagined future memory in the minds of the community. That is why some people are so extremely motivated to “leave a mark,” “make a difference,” “leave a legacy,” or otherwise make a noteworthy impression on their community so that their imagined, future, collective memory will persist longer than average.

The quasi-memory after death is actually an imagination of the community’s future remembrance, not a literal individual postmortem memory, but it can be conceived as a hybrid form of postmortem consciousness. In comparison, the quasi-memory of experience before birth feels like an individual form of consciousness, but it is derived from the collective experience of historians, scientists, and the like, and so is also a hybrid of personal and collective consciousness. The two kinds of hybrid quasi-experience have different qualitative feels.

Thus, there is, after all, a difference in consciousness between being dead and not yet having been conceived. While there are hybrid forms of quasi-personal consciousness before birth and after death, they are strangely different, and complementary rather than parallel or equivalent.

Sunday, August 14, 2011

Why Solipsism is Impossible

Solipsism is a huge problem for anyone interested in promoting introspection as a way to understand the mind (which includes me). You can only introspect on your own mind, not on anybody else’s. So technically, all you really know for sure is your own mind. The existence of any other minds is purely hypothetical.

The same would go for the existence of the entire world. If you accept introspectively known sense impressions as valid information, you realize that you have no other information. All your sensory data are known to you and only you, by mental impressions. A touch on the arm is known as the mental feeling of a touch on the arm. The arm itself knows nothing. All you can know for sure is the mental impressions you have of the world. You can’t know if anything else is really “out there.”

In the most extreme form, a solipsist asserts, “I am the only self that exists. All the rest of the world is, at best, a hypothesis, or possibly just a figment of my imagination.”

There is no way to refute solipsism. Any counter-argument against it would just be another figment of my imagination. If it is false, I could never know it, because my own mind is the only thing known to me. Solipsism is an extreme form of idealism, which says that only mental events can be known to exist (or, only mental events do exist).

Once you take introspective findings as valid knowledge, you are confronted with the question: how is introspective knowledge different from other, empirical knowledge, such as scientific knowledge? The difference is that introspective knowledge of one’s own mind is certain, whereas scientific knowledge is hypothetical, merely a set of agreed-upon propositions. Scientific knowledge cannot be certain because it is not acquired through introspection, which gives the only direct, certain knowledge.

Consequently, in scientific psychology, introspection is not allowed. No introspective observations can be accepted into discussion of how the mind works because introspection is private, and if you accept private data as valid, it takes precedence over hypothetical, consensus-based scientific data, and no further scientific agreement or progress can be expected or achieved. In other words, introspection implicitly carries the threat of idealism, and then solipsism, which is ultimately nihilistic. If my own mind is the only mind that can be known directly for sure, how is a scientific psychology possible? It isn’t. The threat of solipsism therefore is serious. It would destroy everything else. That’s why it is simply outlawed, and so is introspection. And that’s why there is no generally accepted methodology like “scientific introspection.” (Despite that, I have published a book by that title, explaining how it would be possible).

The threat of solipsism is false; it is not a real threat at all. It is based on a misunderstanding of the human mind, which does not, and cannot, exist in isolation from other human minds. One’s own self and mind are learned (acquired) through socialization and can never be separated from that context. The image of Rodin’s solitary thinker is profoundly misleading. We are not monads, and never have been.

The philosophical problem of solipsism is posed by abstracting one’s own mind from that of others, but this abstraction presupposes that the world is already given as a shared world. Hence solipsism presupposes its own refutation. It is a confusion, not a valid proposition.

True solipsism would require that I do not experience myself as a single self in distinction from other selves, but that I experience myself as the only self that exists. But that is impossible, for self is only defined by other. So again, solipsism is impossible in principle.

What about a person, say an infant, who has virtually no self-awareness? Could that person be a solipsist? Such a person does not have the resources to contemplate the possibility of solipsism. So the thesis of solipsism is impossible in principle in this case also.

Suppose a philosopher, using reason and analysis, abstracts the personal self away from its social origins and maintenance, and considers it as an absolute, transcendental ego, disconnected from all others. From that position of the abstracted transcendental ego, could solipsism be taken seriously?

Husserl, inventor of the transcendental ego, might seem to have believed that. But he also wrote that only his reflections on intersubjectivity make “full and proper sense” of the transcendental ego (Husserl cited by Zahavi, 1996). This is why Husserl claims that a phenomenological discussion of subjectivity in the end turns out to be a discussion not simply of the I, but of the we. Thus once again, even from the position of the transcendental ego, solipsism is not possible in principle.

What is possible: an object can be experienced in different mental attitudes. Hegel noted that a book can be experienced by the senses not as a book, but as merely an existent object with properties, not as a social, historical object with meaning. So it is possible to “pretend” or imagine that one’s own self is merely an existent object, divorced from its social meaning. But that is imagination. We can imagine flying pigs, too, but that doesn’t prove a thing. We can imagine an isolated, monadic self, but to take that fantasy seriously is the delusion that constitutes solipsism. So that solves a problem you didn’t even know you had. Don’t thank me.

Reference
Zahavi, D. (1996). Husserl's Intersubjective Transformation of Transcendental Philosophy. Journal of the British Society for Phenomenology 27 (3), 228-245.

Thursday, June 23, 2011

What do you know when you know you are going to sneeze?

What causes a sneeze? Is it a tickle in your nose? The sneeze is a surprise, a reflex, not a response to a stimulus comparable to the one that leads you to brush a mosquito from your arm. When you’re going to sneeze, you open your mouth and get ready. Sometimes nothing happens and the sneeze “goes away.”

We should assume that a sneeze is a response to some biological event. You can’t sneeze at will. It is a reflex response to something going on in the body. Most probably a sneeze is a response to an irritation of the mucous membranes somewhere in the nasal passages.

But I have no awareness of my mucous membranes, in the nose or anywhere else. I can’t visualize them; they don’t give me any information; and I am unable to introspect on their state of being. This is true for most of the inside of the body. We have no direct mental access to its various states of being. You can feel a pain in your knee, but you cannot introspect on the various parts of the knee itself. You know when your bladder is full, but you have no direct mental communication with your bladder.

Yet there is warning for a sneeze. Rarely, if ever, is a sneeze completely a surprise. We are aware that a sneeze “is coming.” What is the nature of that awareness?

My hypothesis is that we are aware of a particular kind of brain activity that is distinctive enough to be discriminated from others, and associated with the actual sneeze. How that could be so is a mystery. The brain does not give off any sensory data, like the heart does. I can hear and feel my heart beating so to that extent I am aware of its location and activity in my body. But I have no direct awareness of my brain. I only know it’s in my head because I have been told. I can’t feel it in there. It doesn’t make any noise and doesn’t jump around.

But somehow, we can discriminate brain states from each other. We know the difference between having a full bladder and a pain in the knee and being about to sneeze. But since we do not have direct awareness of the brain, we have no easy way of describing these brain states, so we talk about them in terms of associated effects. For example, the sneeze itself is sensory and observable, so we say of the pre-sneeze condition, “I am going to sneeze.” All the talk is about the sneeze. But actually, what we’re referring to is not the sneeze-to-be, but the pre-sneeze condition of the brain, which we have learned to discriminate but not name.

Other examples of awareness of, and discrimination of, specific brain states include being aware of blood sugar level, pre-orgasm, pain, a feeling of nervousness or restlessness, being overcaffeinated, and being drunk. We talk about these brain states in terms of their observable bodily effects, but actually, we can discriminate the phenomena as specific brain conditions before there are overt bodily effects.

I think the most dramatic example of being aware of a brain condition, without being able to name it directly, is dreaming. We make up all sorts of fantastical stories upon awakening, because we have no vocabulary for naming or discussing the brain activity that we just experienced.

It is inconceivable that someone properly socialized would not be aware of their heart. We have anatomy books, the history of medicine, the doctor’s stethoscope, Poe’s “The Tell-Tale Heart,” and so on. But we do not have comparable socialization in our culture to name and discuss brain activities. We don’t even have any reliable visual imagery to attach to different brain states. That’s too bad. If we did have appropriate vocabulary, we could contribute a lot to understanding of the brain simply by discriminating and naming its various states when they occur.

We don’t understand the interface between the biological neurology and the mental experience, but the answer is: when you are about to sneeze, you are aware of a particular brain state.

Wednesday, May 11, 2011

Why is Logic Logical?

For years I have puzzled over the validity of logic. Why does one idea compel another? What is the nature of that compulsion? For example, why is the “law” of the excluded middle true: there is nothing “in the middle” between A and not-A; a thing either is, or is not. (Strictly speaking, the claim that a thing cannot both be and not be simultaneously is Aristotle’s companion law of non-contradiction, but the two travel together.) That’s what Aristotle said, and it’s been true ever since. But is it only true by convention, or does logic follow some natural laws, either laws of the world or laws of the mind?
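For readers who like symbols, the two Aristotelian laws being discussed can be stated separately, in the standard textbook notation (a formalization added here, not the essay’s own vocabulary):

```latex
\begin{align*}
\neg(A \land \neg A) &\qquad \text{(non-contradiction: nothing both is and is not)}\\
A \lor \neg A &\qquad \text{(excluded middle: no third option between } A \text{ and } \neg A\text{)}
\end{align*}
```

The everyday counterexamples that follow (luxury cars, fifty cents, shades of guilt) are graded qualities, which is why they slip between these two laws rather than refuting them.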

In day-to-day experience, the middle is not excluded. There is the luxury car and there is the economy car, and plenty of choices in between. There is one dollar, and no dollars, and fifty cents in between. There are guilty and innocent, and shades in between. So why is it true that there is nothing in between A and not-A?

At first consideration it seems that the difference is that the law of the excluded middle is about existence. It says a thing cannot BE and not-BE simultaneously. That’s about what IS. By contrast, everyday examples are all about degrees of qualities that all exist. The economy car exists, and so does the luxury car, and all the ones in between. The qualities of price and value vary along some (abstract) dimension, but all of it exists.

But we cannot say that THIS particular car (not in the abstract, but this one right here) exists and doesn’t exist at the same time. Why not? Because that would be illogical. But why? That is the question.

Is it a matter of abstraction? In algebra, which is very abstract, we all agree that A cannot be equal to not-A. That is uncontroversial. But we refuse to say the same about a particular stone.

The difference seems to boil down to what exists and doesn’t exist. But how is that determined? How do we know what exists and doesn’t exist? Do flying elephants exist? Well, yes and no. It depends on what you mean by “exist.” They exist in animated movies and in the minds of millions of children, but not on game reserves in Africa.

So do we restrict the scope of the question to things that exist physically, not mentally? That would seem an arbitrary restriction. Anyway, it would make algebra and logic, and science, higher mathematics, and law, and much else, not susceptible to the law of the excluded middle, and by extension, not susceptible to logic and reason. The purpose of logic is to bring the order of reason to abstraction. So it can’t be right to exclude mental abstraction from logic.

Besides, even in the so-called physical world, there are apparent counterexamples to the law of the excluded middle. Light exists as waves and as photons simultaneously. That seems to violate the rule, doesn’t it? Hawking radiation around a black hole is sometimes described as existing and not existing at the same time. There aren’t many examples like that, however, and in general we tend to quarantine the paradoxes of quantum physics when we consider logic in general.

I think the answer lies not in abstraction itself, but in the human capacity for discrimination. When we are ignorant of a thing or a topic, we cannot perceive distinctions. Someone who does not know wine literally cannot distinguish between cabernet and merlot. A person who does not know philosophy cannot tell the difference between Kantian and Cartesian ideas. Someone who does not know airplanes cannot tell if they are about to board a Boeing or an Airbus. I remember once looking over a locksmith’s shoulder as he fixed a lock on my door. “Look at that!” he exclaimed in disgust when he took off the outer cowling to expose the insides of the lock. “The quality these days is just disgusting.” I saw nothing but a jumble of metal parts. I wasn’t disgusted because I didn’t know what I was supposed to be seeing. I failed to discriminate what he did.

After training or other experience however, it becomes possible to discriminate parts from wholes and parts from other parts. Then a person can discuss the merits of cabernet and merlot, or well-made from poorly-made lock mechanisms. It works the same in the world of abstract ideas. It takes instruction or experience to discriminate democracy from authoritarianism and A from not-A.

Simple sensory discriminations enable abstraction. A door lock is a door lock, but a well-made lock is an abstraction; it is a kind of lock, or a category of locks. Once the discrimination has been made and conceptualized, multiple instances of a like kind can be grouped into an abstract category.

Thus “dog” is a category of animals, but that abstraction was developed only after I became able to discriminate dogs from cats, and from other kinds of animals. In turn, that discrimination was explicitly taught by parents and teachers, who dwell obsessively on helping children discriminate categories of animals. Why that is considered important is a separate mystery. Finally, there must have been some sensory discrimination at the bottom, by which I learned to identify my dog, a particular, concrete, sensory dog, as a “dog” and discriminated it from myself. So the sequence of abstraction goes from a particular, sensory being that exists right now in my presence, to a category of all such animals, which are then further discriminated and contrasted with other animals, and so on up the chain of abstraction.

The sequence of discrimination, conceptualization, and categorization is so automatic that I suspect it is a faculty of the human mind. Teachers teach us how to discriminate and identify, and categorize dogs, cats, forms of government, and much else, but nobody teaches us how to discriminate in the first place. We just do it.

Other animals discriminate in a similar way. In classical conditioning, a type of learning, the dog learns to salivate when the bell rings. Why? We say the dog has “associated” the bell with forthcoming food. However the dog first had to discriminate the bell from the general background noise, and also the occurrence of food from other events, and also the fact that the bell sounds just before food appears. Those are all sensory discriminations that the dog learns fairly easily, without the benefit of language. As far as we know the dog does not conceptualize any of it, but does manage somehow to generalize a more-or-less abstract category about what we would call the conditioned stimulus, because if a buzzer is sounded instead of a bell, the dog salivates in the same way he would to the bell. He obviously has an abstract category of sorts.

I’m not aware of any animal species with a nervous system that is not susceptible to classical conditioning, so I would have to conclude that discrimination and abstraction are built into the architecture of animal neurology.

Does that answer the question of what compels one idea to follow another and why logic is logical? Partially, it does. But the rules of logic are themselves so abstract that it is difficult to believe they are neurological manifestations. Suppose a proposition says that if p exists, then q will always occur. But if we look and find that q did not occur, what is the only logical conclusion? It has to be that p does not exist. This rule is the absolute foundation of reasoning in science and statistics. What makes it valid?
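The rule described in that paragraph is the textbook schema of modus tollens; written out in standard notation (the formalization is added here, not the essay’s):

```latex
\[
\frac{p \rightarrow q \qquad \neg q}{\therefore\; \neg p}
\qquad \text{(modus tollens)}
\]
```

Given “if p then q” and the observation that q is absent, the only consistent conclusion is that p is absent too.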

According to the analysis given here, that rule of logic, called modus tollens, is valid because it is an abstraction of sensory, bodily experience that many humans have discriminated and agreed is universal. We have all observed that if the bulb inside the refrigerator is working, then when you open the door, there will be light. If you do not see light, the conclusion is that the bulb is not working. Enough people have had experience like this, so that as a community, we have agreed the relationships involved are worthy of becoming a “law,” the law of modus tollens. It’s logical because we all say it is, not because of neurology.

The implication of this finding is that reason compels one idea to follow from another because of generalization of discriminations that many people have similarly made and conceptualized and categorized. The validity of logic is a social construct, not a natural phenomenon.

So what are we to make of the situation where people do not agree? Different groups insist that their god and only their god exists. Is there any concrete sensory discrimination at the bottom of those abstractions? I would say no, and virtually all scientists would agree with me. Are there neurological differences supporting the abstractions? No. The human nervous system and brain are overwhelmingly similar across individuals.

But are there discriminations among abstract ideas beneath the disagreements? Of course there are. Different groups have different ideas about history, justice, virtue, beauty, and many other abstract categories, and they assiduously teach these discriminations to their children. Higher abstractions are based on discriminations made among lower abstractions and it is around these higher abstractions that wars are fought. Fundamentally though, the mid-level abstractions upon which they are based do not rest upon sensory discriminations. The validity of logic in the abstract realms is socially constructed.

At the bottom we are all the same kind of animal and make the same kinds of sensory discriminations and the same kinds of basic abstractions. It is only our teachers that guide us to abstractions among the abstractions, and therefore to differences we will kill for. Anybody can discriminate a brown skin from a white skin, narrow eyes from round eyes, male from female, but what those differences mean must be taught to us. There is no universal sensory or neurological basis, and therefore no intrinsic rationality that justifies what our teachers make of those differences. Whether my god or your god is the true god, is essentially culturally constructed, and we would say, “not logical.”

Ideas compel other ideas then, not because there is some intrinsic validity to the rules of logic that make it so, but only for two reasons.

One, because concrete, sensory discriminations that anyone, even a dog, can make, seem universal, as in classical conditioning. Red is different from blue, and we all agree on that, regardless of culture. Therefore it is “logical” to insist that Red cannot be Blue and vice versa.

And Two, logic is logical because the teachers in a cultural tradition decide, based on contingent values (that is, arbitrarily), that some abstract ideas “should” compel other abstract ideas. That compulsion is valid inasmuch as everybody lives in a culture and nobody can live outside of culture, so nobody is immune from cultural values. So if “The Bible is the word of God,” it follows that the Biblical God is the “correct” God. That is cultural logic.

These two kinds of logic are both valid, but for different reasons.

Saturday, January 08, 2011

New Evidence for ESP?

The Journal of Personality and Social Psychology, a respected academic journal published by the American Psychological Association, will soon release an article by Cornell psychologist Daryl Bem, that supposedly demonstrates the existence of “extrasensory perception,” or ESP. A preprint of the paper is available at http://dbem.ws/FeelingFuture.pdf.

ESP is a term used in popular culture for unexplained psychic effects. It is the term used, for example, in the New York Times article of Jan 5, 2011 reporting on Bem’s paper (http://www.nytimes.com/2011/01/06/science/06esp.html?src=me&ref=general). Academics refer to such effects as “paranormal,” “parapsychological,” or “psi” phenomena. These psi phenomena allegedly include a potpourri of unexplained effects, such as mental telepathy, remote viewing, clairvoyance, telekinesis, precognition, and communication with the dead, to name just a few varieties. Bem’s paper focuses on precognition, which is unexplained knowledge of the future, and premonition, which is the same thing only felt emotionally rather than known intellectually.

The paper reports nine experiments, only four in any detail, conducted over a decade with about a thousand people tested. In a typical experiment, participants had to predict whether a picture (called the “stimulus”) would appear on the left or the right side of a computer screen. If the prediction was correct, then either it was a lucky guess or the person had a premonition of where the stimulus was going to be. Random guessing would produce a 50% correct rate, but the guesses were correct 53% of the time, a percentage greater than chance. That doesn’t seem like much of a difference, but since the test was run many times on each person, the result is rare enough under the chance hypothesis that it is probably meaningful. Therefore, according to Bem, a slight but scientifically demonstrated effect of precognition, or premonition of the future, was shown.
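To see why a small deviation like 53% can still count as statistically rare, one can compute the exact binomial probability of doing that well by luck. The numbers below are illustrative only (1,000 binary trials at a 53% hit rate); they are not Bem’s actual design, which varied across his experiments.

```python
from math import comb

def binomial_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of getting k or more
    correct guesses out of n two-choice trials by pure luck."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative: 530 correct guesses out of 1,000 left/right trials (53%).
p_value = binomial_tail(530, 1000)
print(f"one-sided p-value: {p_value:.3f}")  # about 0.03, below the usual 0.05 cutoff
```

With only a handful of trials, the same 53% rate would be unremarkable; it is the large number of trials that drives the probability down. That is exactly why the critiques Bem anticipates focus on methodology and statistical handling rather than on the arithmetic itself.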

Bem notes in his paper that “Psi is a controversial subject, and most academic psychologists do not believe that psi phenomena are likely to exist.” That is correct, and I am one of those psychologists. I do not believe any psi phenomena have ever been demonstrated scientifically, nor indeed that they exist at all. How do I explain then, scientific findings such as Bem’s (and there have been many such supposed demonstrations of psi phenomena over the years)? There are four obstacles to acceptance that any such scientific demonstration must overcome:

1. Methodological. The experiment must be designed and conducted in such a way that the best, most reasonable conclusion is that psi phenomena have been demonstrated, rather than some other explanation, such as pure chance, lurking (uncontrolled) variables, equipment or procedural error, biased sampling, unintended clues being given to participants, inadequate experimental controls, or other kinds of unintended bias or error (deliberate fraud is not considered, as that is rare and easy to detect).

2. Statistical. The experimental data must be conceptualized, analyzed and reported in a simple, correct, and non-controversial way, so that the best, most reasonable conclusion is that psi phenomena have been demonstrated. Even if the experimental procedure was sound, the statistical handling of the data can introduce biases that lead to invalid conclusions, such as when the data are manipulated inappropriately (e.g., leaving out some data from the analysis), or conceptualized strangely (such as counting certain results in one way, other results in another), or analyzed with controversial or questionable statistical techniques, or interpreted in obscure or inappropriate ways.

3. Theoretical. The findings must be coherent with an existing body of scientific data, or if they are not, some revision in understanding of the existing data must be specified which accommodates the new, anomalous finding. There are two reasons for this requirement.

One is that according to the scientific method (a consensus model of scientific reasoning), the hypothesis that an experiment tests is drawn from the existing body of scientific data. A scientist does not just wake up one morning with a hypothesis that says, “I suspect that giraffes would float in water as well as raspberry marshmallows.” That is not how science is done. Instead, the scientist finds areas in the existing body of scientific knowledge where there are questions, errors, gaps, unexplained connections, or incomplete understanding. A hypothesis is then generated that could extend the existing knowledge or make it more understandable or more internally consistent.

The second reason for requiring that scientific findings must mesh with existing knowledge is that science is a cumulative exercise in knowledge production. Even if some arbitrary and idiosyncratic hypothesis were experimentally tested and confirmed, the result would be uninterpretable because it would not connect to existing knowledge, would not further general understanding, and would not even contradict what is already known. There would be no context for making sense of the experimental result, making it essentially meaningless, no matter what it purports to demonstrate.

Historically, strange things have sometimes been found in nature that could not be explained until much later, such as lightning or x-rays. But technically, those discoveries were anomalous observations, not scientific findings, until some explanation was proposed that could be tested as a scientific hypothesis.

4. Philosophical. A scientific finding that meets all of the foregoing requirements still must be interpreted in a scientific way. For example, a finding that concludes, “All human beings are therefore merely ideas in the mind of God,” cannot be accepted without a great deal of further explanation. The interpretation of the finding must conform to principles of scientific reasoning and evidence. This example fails on both counts, because there is no scientific evidence of God, and characterizing humans as ideas rather than as biological objects is not consistent with scientific reasoning.

Alternatively, the interpretation of the finding can go too far in the other direction, being so scientifically overspecified that the result admits of no generalization, an error of “external validity.” An example would be an experiment that claims to study “violence in children” but defines violence as a high frequency of button presses on laboratory equipment. Since that does not describe what we normally understand as violent behavior, even if the study meets all other criteria, we are unable to say anything about the result beyond the specific experimental procedure.

Another common error is that a study defines its variables in terms of laboratory procedures but interprets its results in different terms, an error of “internal validity.” In the example above, if college students were used as participants, it is not valid to conclude anything about violence in children.

Bem’s studies that purport to demonstrate psi phenomena fail to overcome any of the obstacles described, and therefore I remain unconvinced of the existence of so-called psi-phenomena.

To prove this definitively, I would have to study the experimental protocol, data, and statistics to make detailed criticisms, and that would require either access to Bem’s laboratory notebooks, which is not going to happen, or repeating his experiments step by step in order to understand what he did and what kind of data he obtained. That is also not going to happen. So, like any other ordinary consumer of scientific information, I must base my acceptance or criticism of the experiments on only the scant information provided. Here are some criticisms, then, within that constraint.

Summary of Bem’s Experiment 1

1. Methodological factors. In Experiment 1, a featured experiment supposedly demonstrating precognition, participants had to guess where a randomly selected picture would appear. I’ll start by summarizing the experimental procedure.

One hundred Cornell undergraduates, half men and half women, were self-selected (volunteer) participants and were paid for their participation. They all knew it was an experiment in ESP.

A picture of starry skies was shown on the screen for three minutes, while new-age music played. Then that picture was replaced (and presumably the music terminated, although that is not stated), with two pictures of curtains, presumably side-by-side (although that is not stated). When a participant clicked on one of the pictures of a curtain, it was replaced with another picture, either a picture of a wall, or a picture of something other than a wall.

The content of the “other than a wall” pictures is not described, except to say that 12 of the 36 non-wall pictures showed humans (presumably – this is not specified) engaging in “sexually explicit acts” (not further described), while another 12 of the non-wall pictures were “negative” in emotionality (not further described), and the final 12 non-wall pictures were “neutral” (but not described).

All these pictures had previously been rated (although when, is not stated) by other people not in this experiment as being reliably “arousing” for males and females (although “arousability” is not defined), or as being reliably “emotional.” There is no information about whether any arousing pictures were also emotional, and it is hard to imagine that they were not. There is no definition of what constituted a “neutral” photograph, and there is no description of the arousability or emotionality of the wall picture or of the curtain pictures.

Part way through the experiment, some or all (not specified) of the “arousing” pictures were replaced by more intense (not otherwise described) internet pornography pictures, which were not reported to be scientifically rated for arousability and emotionality, so in the end, the nature of these pictorial stimuli is essentially unknown. (We assume that among the 36 non-wall photographs, none was in fact of a different sort of wall, although this is not actually stated.)

The non-wall pictures were selected at random from the group of 36, with randomness defined by a software algorithm. Whether the wall or non-wall picture was placed on the left or the right of the screen was also randomized by the computer.

Each participant’s task was to click on one of the two pictures of curtains to indicate which one they thought would be replaced by a non-wall photograph. They were told that some of the pictures were sexually explicit and allowed to quit the experiment if that was objectionable. No information is given on how many participants quit. After the participant’s choice was made, the curtain picture was replaced by another picture, either of a wall or a picture of non-wall content.

Errors of Internal Validity

That summarizes the experimental protocol. According to Bem, this methodology made “the procedure a test of detecting a future event, that is, a test of precognition” (p. 9). However, that is not how the results were recorded. You would think that the scientist would simply record whether or not the participant had correctly predicted which side of the screen the non-wall picture had appeared on (since that was the instruction given to the participant, and that was the hypothesis to be tested). Instead, some other, strange measure was recorded: the number of correct predictions of which side of the screen the “erotic” (meaning sexually explicit) pictures appeared on, even though that was not the hypothesis being tested. This odd recording of the results constitutes an error of internal validity.

The hypothesis that college students will be better at predicting the location of a sexually explicit picture is unconnected to the introductory literature review, which referred only to a previous body of findings that asked for straightforward predictions of visual content, with no special reference to sexually explicit material. This new (implicit) hypothesis is then, essentially like the “giraffe and marshmallow” hypothesis, arising “out of the blue” rather than being logically derived from existing knowledge. This is another methodological error. If there is, in fact, a previous body of knowledge about predicting the locations of sexually explicit photographs, then the error is one of scientific reporting, since the literature review was obviously grossly incomplete.

One other, rather minor error is the experimenter’s referring to the participants’ prediction of the location of a non-wall photograph as a “response” to that photograph. But this is a semantic distortion, since the participant’s choice is made before the photograph is shown. Ordinary, common-sense language would call that choice a “prediction,” not a “response.” For the experimenter to call it a “response” presupposes the validity of his belief that the participants are seeing into the future, but until that is proven by the experimental results, it is scientifically inappropriate to use language in a non-standard way without justification.

Statistical Errors
Next, Bem reports that participants correctly predicted the position of the sexually explicit pictures significantly more frequently than the 50% rate expected by chance, and in fact were correct 53.1% of the time. But this is an incorrect analysis. To be counted as correct, a prediction would have to correctly say on which side of the screen a non-wall photograph would appear (one chance in two, or a 50% chance), AND, if that guess were correct, the photograph shown would also have to be sexually explicit (12 chances out of 36, or 33%), for an overall probability of 0.50 x 0.33 = 0.165, which means that one would expect a person to guess correctly fewer than 17 times out of a hundred.
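As a sanity check on that joint-probability arithmetic, a simulation of pure random guessing can be run under the trial structure as I understand it (side chosen at random, 12 of the 36 pictures erotic, guesser picking a side at random); this is a sketch under my assumptions about the procedure, not Bem’s actual code:

```python
# Simulate random guessers: how often does a guess land on the correct
# side AND the drawn picture happen to be erotic? Trial structure is an
# assumption reconstructed from the description, not Bem's software.
import random

random.seed(1)
trials = 200_000
joint_hits = 0
for _ in range(trials):
    side = random.choice(("left", "right"))    # where the picture lands
    erotic = random.randrange(36) < 12         # 12 of 36 pictures are erotic
    guess = random.choice(("left", "right"))   # the participant's guess
    if guess == side and erotic:
        joint_hits += 1

print(f"joint rate by pure chance: {joint_hits / trials:.3f}")
```

The simulated rate settles near 0.5 x 12/36, about 0.167 (the 0.165 in the text comes from rounding 12/36 to 0.33), well below the 50% baseline against which the 53.1% figure was reported.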

Did that happen? No information is reported on how many times the participants DID actually predict the location of sexually explicit photographs. It was not 53.1%. That is the number you get when you ignore, or leave out of the calculation, all the wrong predictions of the non-wall photograph. But that is an illegitimate way to count the results, unless there is a very good reason, and none is given.

Still, can we at least say that the participants correctly predicted the location of ANY non-wall photograph better than chance (53.1% vs. 50%)? No, we can’t, because that information is not reported either. Instead, what is reported is that participants predicted the location of only the sexually explicit photographs at 53.1%. But that leaves out all the results for the non-sexual predictions, which is not a legitimate way to count the results. So in the end, the results that bear on the experiment’s stated hypothesis are not reported at all.

This kind of anomalous counting of the results constitutes a statistical error and makes the experimental findings uninterpretable.

The same kind of anomalous, illegitimate, and uninterpretable counting of results is given for non-sexually-explicit pictures, emotional pictures, and neutral pictures, and even for pictures that were “romantic but non-erotic pictures,” a category that was never defined in the description of the pictures (let alone in any experimental hypotheses).

The experiment also reports that there were no significant differences in findings between males and females. That is a legitimate “control variable” to report, although the experimental hypothesis being tested has nothing to do with gender. So it is not so much an error as an irrelevance.

Then the report describes a history of findings from other experiments that turns up a small correlation between the ability to predict the occurrence of visual materials at a rate greater than chance and the participant’s score on an extraversion test, with extraverts supposedly being better at making such predictions than introverts. There are two problems with this so-called result.

One, is that it is based on a statistical technique called meta-analysis, in which the main findings of individual experiments are treated as if they were individual response data points observed in individual participants. While this statistical technique is now widely used in the medical literature, it is by no means without controversy when applied to psychological experiments, and I reject it as a valid statistical technique for psychology.

The main reason for my rejection is that the technique generally does not take into account the quality of the underlying experiments, or if it does, does so inadequately. For example, if some future meta-analysis includes this experiment, that will introduce significant undocumented error into the meta-analysis because this experiment does not actually report any valid results, despite its claim to the contrary.
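To make the pooling step concrete, here is a minimal sketch of fixed-effect (inverse-variance) meta-analysis; the effect sizes and standard errors below are invented for illustration. Note that the weights reflect only each study’s statistical precision, not its methodological quality, which is exactly the objection raised above:

```python
# Fixed-effect (inverse-variance) meta-analysis sketch. The input
# numbers are invented; weights depend only on precision (standard
# error), so a flawed but large study dominates the pooled estimate.
from math import sqrt

def pool_fixed_effect(effects, std_errors):
    """Return the inverse-variance weighted mean effect and its SE."""
    weights = [1 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical extraversion/hit-rate correlations from three studies:
effects = [0.10, 0.02, 0.15]
std_errors = [0.05, 0.04, 0.08]
est, se = pool_fixed_effect(effects, std_errors)
print(f"pooled effect = {est:.3f} +/- {se:.3f}")
```

Nothing in the weighting formula asks whether any individual study reported valid results; garbage inputs simply become a precisely weighted garbage average.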

The second problem with this so-called reported result between predictive success and personality is that it is irrelevant to any scientific hypothesis, implicit or explicit, that was supposed to be tested by this experiment.

Errors of Interpretation:
The experimental report goes on at great length to determine what “kind” of psi phenomenon had been demonstrated by the test results (which were never properly reported). Was it simple clairvoyance, or was it a subtle form of psychokinesis? Or was it actually pure chance? (Admirably, the report does consider that possibility.)

But a simpler explanation is hinted at by the experimental procedure itself. After the participant made his or her prediction of where the non-wall photograph would appear, the curtain picture chosen was replaced with either the wall, or a non-wall photograph. This essentially gave the participant feedback on the correctness or non-correctness of the prediction. But why was that necessary or desirable?

The experimenter knew immediately upon the participant’s click whether the prediction was correct or not. That could be scored right on the spot by the computer. Why was it important to give the participant “feedback?” The experimental hypothesis was about ability to see into the future, precognition. Why is feedback necessary to do that? Was the hypothesis really that ability to predict the future can be taught by a computer and learned with practice? There is no theoretical or practical reason to believe so, and the experimental report does not suggest it.

The only reason I can think of to give the participants feedback on the correctness of their predictions is so that they might be able to learn from their mistakes and improve their performance. That is a standard learning procedure going back over a hundred and fifty years in experimental psychology and thousands of years in human experience. The experiment thus introduced a spurious learning paradigm into a procedure that was supposed to test only ability to predict the future. That is a serious error of internal validity that renders the experiment uninterpretable (if it was not already).

What would the participants be learning, with this embedded learning procedure? I am unable to say without more detail about the experiment. Could they be learning (even if only implicitly) to detect a non-random pattern in the order of presentation of the materials? A non-random pattern could have emerged. Either the random number generator could have been imperfect (since there is, theoretically, no such thing as a perfect random number generator), or, within the pseudo-random sequence of events, an identifiable non-random pattern could have arisen, just as a fair coin sometimes comes up “heads” 7 or 8 times in a row purely by chance. These things happen. It wouldn’t take much non-randomness to produce a mere 3% deviation from chance expectations.
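The “streaks happen” point is easy to demonstrate by simulation: how often does a perfectly fair coin produce a run of 8 or more identical outcomes somewhere in a sequence of 100 flips? The sequence length and run length here are my own choices for illustration:

```python
# Estimate how often a fair coin produces a run of 8+ identical
# outcomes within 100 flips. Parameters are illustrative choices.
import random

def has_run(flips, length):
    """True if the sequence contains a run of `length` identical items."""
    run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        if run >= length:
            return True
    return False

random.seed(7)
n_sequences = 20_000
hits = sum(
    has_run([random.randrange(2) for _ in range(100)], 8)
    for _ in range(n_sequences)
)
print(f"fraction of 100-flip sequences with a run of 8+: {hits / n_sequences:.3f}")
```

A substantial fraction of purely random sequences contain such streaks, so a participant receiving trial-by-trial feedback has real, if spurious, local patterns available to latch onto.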

Or, more likely in my opinion, the participants could have been learning something else, some other clue that was unintentionally left in the procedure by the experimenters. I cannot say what that might be. For example, it would be interesting to know if an experimenter was in the room while the participant performed. There is no reason why one should have been, but if one was, there are all kinds of opportunities for subtle, unintended clues (or “experimenter effects”) to be transmitted to the participants.

Bem reports that he re-ran the experiment using randomized, simulated computer inputs for the “predictions,” with no human participants involved. Under those conditions, no psi effects were detected. I am not surprised, but that result deepens my skepticism about the human-based findings: if there really were any legitimate ones (which I doubt), they were due entirely to unintended experimenter effects or performance biases.

The only way to satisfy my skepticism on this point would be to re-run the experiment, with humans, but omitting the spurious learning component from the procedure, and isolating the participant completely from any contact with the experimenter or any other participant. I would be extremely surprised if any so-called “psi effect” were reported under those conditions.

Theoretical and Philosophical Errors:
Aside from the methodological and statistical problems with this study, there are additional theoretical and philosophical problems. First, I must emphasize again that no psi phenomena were demonstrated by any of these experiments, as reported. But even if there were such a thing as psi phenomena, for example, ability to predict the future at a rate better than chance, what sense would it make?

There is no known mechanism, either biological, physical, or psychological, by which that would be possible. Human beings are simply not able to predict the future very well. Would that it were otherwise! Bem does some hand-waving around quantum indeterminacy and the earth’s magnetic field to suggest possible explanations of psi phenomena (if they existed), but that verbiage constitutes, most generously, only loose metaphor, nothing close to an explanation.

Could the explanation of psi effects, if there were any, just turn out to be something bizarre, something we have never thought of yet, not related to anything familiar, not like anything ever reported in the accepted scientific literature? Well, yes, that is possible in principle. I’m sure Socrates himself would not have been able to understand a butane lighter or a sheet of plastic food wrap, let alone some of our more complex technological marvels. So it is not a denial of the possibility of psi phenomena to assert that there is presently no conceivable explanation of them, as they have been described. But it is utterly idle to speculate on explanations until the phenomenon to be explained has been demonstrated, and I am not convinced it ever has been.

Conclusion:
In his forthcoming paper, Bem describes three additional experiments, similar to the first one, in some detail, and refers to five others not fully described. However, as is always the case when I take the trouble to read such experimental reports, after analyzing the first one (an analysis that was by no means exhaustive), I simply have no energy to go on to the rest. The quality of the first one is so poor that there is little promise the others will be much better. So I give up at this point and return to my default belief, unchallenged by Bem or anybody else, that no psi phenomenon has ever been scientifically demonstrated. Show me a proper demonstration and I’ll change my mind.