EXPLORING THE LOST WORLD OF THE HUMAN BRAIN
FIRST THING WE NEED TO NOTE is that Freud is dead. No, we don’t mean the famous 20th-century psychologist Sigmund Freud, who died in 1939 at the beginning of World War II after struggling for years with cancer (Freud didn’t listen to his doctors, and he really, really liked to smoke cigars). We mean Freud’s way of thinking about how the brain works with the world, popularly called Freudian psychoanalysis—although, yes, not every psychologist practicing today would agree with us that Freudian thinking is totally dead and buried.
The neuroscientist and Nobel Laureate Eric Kandel observed in an insightful overview published in 1999 that this remarkable man revolutionized our understanding of the human mind during the first half of the 20th century. Unfortunately, as Kandel goes on to say, during the second half of the last century Freudian psychoanalysis did not evolve scientifically. It did not develop objective methods for testing Freud’s excitingly original ideas. As a consequence, Kandel gloomily concluded in his benchmark essay, psychoanalysis entered the 21st century with its influence in decline.
With the passing of psychoanalysis as an instructive way of thinking about how your mind works, nothing comparable in its scope and helpfulness has taken its place, leaving most of us today without a workable framework for understanding ourselves and why we do what we do. As Kandel concluded in 1999: “This decline is regrettable, since psychoanalysis still represents the most coherent and intellectually satisfying view of the mind.”
Please note: this commentary, recovered on 3-Feb-2017, was originally published in Science Dialogues on 16-June-2014.
Abstract: According to some, the current debate in psychology about “direct replication” as a way of being vigilant against scientific fraud and sloppiness is devolving into a boxing match fostering snottiness, snark, and downright bullying. However, focusing on the downside of this call to arms may be sidetracking us from attending to a more fundamental question—when is research replication the right thing to do?
ONE OF THE THINGS I LEARNED while struggling to write a book about friendship, human nature, and evolution is that neuroscience and neurotic are not all that far apart. Before saying why I got this impression, however, I need to say something first about psychology today.
While reading journal articles garnered using Google Scholar I got the impression that different researchers working in different laboratories, aided perhaps by different sorts of machinery, are not only coming up with seemingly incompatible conclusions about how the human mind works (e.g., is there or isn’t there a lateral bias to creativity up there in the cranium?), but also that the left hand isn’t always aware of what the right hand is doing. Different research fiefdoms seem to be chugging along more or less unaware of how others are tackling the same issues. And I had the suspicion few are trying to replace the outdated unified wisdom of Sigmund Freud with anything approaching a holistic model of the mind. Why so, if this is true?
I am willing to admit my ignorance, but am I wrong to think experts in neuroscience nowadays are a lot like the famous blind men and the elephant? Each research team may have a firm grip on a piece of the puzzle, but does anyone really know how that beast called the brain actually works?
But wait a minute. What’s neurotic about the picture I am painting? A recent blog exchange between my friend Jim Coan at the University of Virginia and the anonymous science blogger Neuroskeptic has brought me some enlightenment on what sure seems like neuroticism to me.
According to Coan, there is currently a strong push within the field of neuroscience and psychology in general for something called “direct replication” (Klein et al. 2014)—a push that he finds both charming and naive. His real beef, however, is that some are taking this push to mean what might be called “replication failure” (my phrase, not Jim’s) is not just a worry confined (to succumb to a bit of word play) to the boudoir. Failure to replicate, real or perceived, evidently is giving rise to a rash of social nastiness he labels Negative Psychology that strikes me as being akin (or so it would seem) to the worst excesses of the post-modernist critique. “When we criticize each other using the tropes of Negative Psychology—that is, with moral outrage, hostile humor, and public shaming—we train the public to either disregard science altogether, or . . . to confuse outrage with rigor.”
While Coan points his finger as one case in point at Neuroskeptic anon., the latter in response has pleaded not guilty. In fact, Neuroskeptic anon. says he (or she) and Coan are on pretty much the same page and wavelength, and darnitall Jim would know this if he had bothered to read everything Neuroskeptic anon. has written in her (or his) blog over the years since ca. 2008.
I am not sure I should confess this, but I am not a great fan of blog sites. Until Jim’s entry into the fray (his first, by the way) I had paid scant attention to Neuroskeptic anon.’s corpus of writings on the web. Nor do I want to weigh in now as a qualified referee for minding the rules of the noble art of blog boxing. But I do agree with Coan on one thing.
He begins his own blog piece with this statement: “People on all sides of the recent push for direct replication—a push I find both charming and naive—are angry.” I think I know charm when I see it, and I don’t find much that is charming about what’s happening in the sciences of the psyche. But I do think the word naive is worth taking to heart.
According to some, skepticism is fashionable these days, and not just in psychology. One could argue, for example, that this is also a core tenet of climate-change deniers and the Tea Party in the U.S.A. Furthermore, who anywhere on earth could possibly deny that the replication of research results is the gold standard of scientific excellence?
Well maybe here and now and maybe me.
Perhaps more so than Jim Coan may be prepared to argue judging by his blog on Negative Psychology, I would at least like to cast a stone or two in that general direction. By focusing as he and Neuroskeptic anon. do on snark and snottiness at the core of modern skepticism in its many stripes, I think they may both be getting sidetracked from attending to a more central issue—namely why does anyone think research replication is such a good thing to do?
No doubt about it, failure to replicate research results may certainly be a flag on the field, but as Coan has said, anyone with a respectably nuanced view of why replications may fail knows they may do so for all kinds of reasons. What would be naive is to accept that not failing to replicate is proof of the pudding.
Why is this naive? Because doing the same thing over and over again in precisely the same way may amount to little more than making the same damn mistake over and over again—and thereby arriving at the same (erroneous) result over and over again. Said differently, direct replications that are just repetitions of the same-old same-old ought to be taken with a grain of salt and viewed with suspicion.
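A toy simulation (hypothetical numbers throughout, not anyone’s actual data) makes the worry concrete: a procedure carrying an unnoticed systematic error will replicate its own mistake with impressive consistency, and only a methodologically independent measurement exposes the shared bias.

```python
import random

random.seed(0)
TRUE_VALUE = 10.0

def biased_experiment():
    """The same apparatus every time, carrying the same unnoticed +2 systematic error."""
    return TRUE_VALUE + 2.0 + random.gauss(0, 0.1)

def independent_experiment():
    """A different method that happens not to share that particular flaw."""
    return TRUE_VALUE + random.gauss(0, 0.1)

# One hundred "direct replications" agree beautifully with one another...
mean_direct = sum(biased_experiment() for _ in range(100)) / 100

# ...and every one of them is wrong in exactly the same way. Only the
# independent method lands anywhere near the true value.
mean_independent = sum(independent_experiment() for _ in range(100)) / 100
```

The direct replications cluster tightly around 12, not 10: perfect mutual agreement, perfect shared error.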
Now am I suggesting that like the United States in 1933 scientists should go off this gold standard? Maybe.
In 1966 the biologist Richard Levins published a paper on model building in population biology that has become a classic in the practice and philosophy of science (Levins 1966, 1993). I have long felt that Levins was leaning a lot on what Henri Poincaré (1905) and Alfred North Whitehead (1938) had written about such matters, and should have said so. Nonetheless, I am not alone in thinking what Levins wrote was inspirational and wise. And one of his main conclusions has become famous: “truth is the intersection of independent lies” (1966: 423).
What he meant by this provocative statement has been richly discussed and debated (e.g., Levins 2006; Odenbaugh 2006; Orzack 2005; Orzack and Sober 1993; Weisberg 2006a, 2006b). One of the pragmatic lessons, however, taken home after reading his paper (as indeed after reading Poincaré and Whitehead) is that for all sorts of reasons there is no such thing as the definitive single approach, experiment, or scientific model capable of capturing reality in all its chameleon-like complexity.
Therefore, as Levins wrote retrospectively in 2006, we need different ways of converging on the truths we are looking for. Consider this:
In the dispute about climate change, a rising temperature in several cities is suggestive. Adding more cities to the list gives a diminishing return. But independent lines of evidence—ocean temperatures, cores from glaciers, decline of coral reefs, spread of species into places that had been too cold for them, accumulation of greenhouse gasses—each may have some separate idiosyncratic explanation or source of error but jointly converge on an unavoidable conclusion. We have to seek lines of evidence as independent as we can in order to support a large scale conclusion. (Levins 2006:753)
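Levins’s point can be restated in elementary Bayesian terms (our sketch, not his formalism): statistically independent lines of evidence combine by multiplying their likelihood ratios, so several individually modest lines can yield overwhelming joint support.

```python
from math import prod  # Python 3.8+

def posterior_odds(prior_odds, likelihood_ratios):
    """Combine independent lines of evidence:
    posterior odds = prior odds * product of the likelihood ratios."""
    return prior_odds * prod(likelihood_ratios)

def odds_to_probability(odds):
    return odds / (1 + odds)

# Five independent lines, each only modestly diagnostic (each favors the
# hypothesis by 3 to 1), jointly give odds of 3**5 = 243 to 1.
joint = posterior_odds(1.0, [3.0] * 5)
```

Five readings of the same line of evidence, by contrast, share that line’s idiosyncratic errors; their likelihood ratios are not independent, so multiplying them as above would overstate the case—which is the diminishing return Levins describes when we simply add more cities to the list.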
Where am I going with this? The strategy Levins is talking about (as did Poincaré and Whitehead before him) is not the one at the heart of the current drive in psychology and other sciences to replicate evidently successful experiments others have done. No, instead the take-home directive is this one: Can we do a different experiment to see if we get the same result? And if not, why?
If this strategy were routine, then there would be no doubt about it. Repeating earlier experiments that led to different results is, if nothing else, a way to become more confident—before making headlines with what we have just done—that we have given others due and proper benefit of the doubt. But this wouldn’t be something that might be called “knee-jerk direct replication.” This instead would be doing something called “just good science.”
Please note: this commentary, recovered on 9-Jan-2017, was originally published in Science Dialogues on 20-Feb-2015.
In his acclaimed novel The Oxford Murders, the Argentinean writer and mathematician Guillermo Martínez engagingly shows how easy it is to hide the truth from others by getting them to think that a series of similar events—in this instance, a series of murders—is happening because, when taken in sequence, they appear to add up to a coded message that we are being taunted to decipher.
Judging by appearances, each murder apparently symbolizes one of the logical steps in a predictable sequence, just as most of us would probably agree that the next logical number in the familiar series 2, 4, 8, and 16 must be 32. Perhaps, but as the philosopher Ludwig Wittgenstein famously observed, any finite sequence of numbers can be continued in a variety of different ways, not just in the one way that may seem reasonable (Biletzki and Matar 2006).
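Wittgenstein’s claim is not mere rhetoric. By Lagrange interpolation there is a polynomial rule that reproduces 2, 4, 8, 16 and then continues with any fifth term we care to name; the sketch below (our illustration, not Martínez’s) finds such a rule for 32, for 31, and even for 0.

```python
from fractions import Fraction

def interpolate(points, x):
    """Evaluate, at x, the unique polynomial passing through `points`
    (Lagrange interpolation, done in exact rational arithmetic)."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

series = [(1, 2), (2, 4), (3, 8), (4, 16)]

# 32 is one lawful continuation; so is 31, so is 0 -- each has its own
# polynomial rule that fits the first four terms perfectly.
for fifth in (32, 31, 0):
    rule = series + [(5, fifth)]
    assert [interpolate(rule, n) for n in (1, 2, 3, 4)] == [2, 4, 8, 16]
    assert interpolate(rule, 5) == fifth
```

Whatever continuation we prefer, some rule will vindicate it; the data alone cannot choose among the rules.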
For example, the narrator, whose name we are never told, is asked early in this novel if he can figure out the next symbol in the odd series reproduced here as Fig. 1a.
Although Martínez never shows us the solution he has in mind (the narrator merely tells us later on that the answer is the number series 1, 2, 3, 4), we suspect those who find riddles like this one appealing are likely to say the solution shown in Fig. 1b is the right one: an answer derived from the rules of symmetry (Fig. 1c). Yet in keeping with Martínez’s revealing observations about both logic and magic set here and there in this story, what if the proper solution is not so playful?
For example, what if the three symbols already revealed follow instead the alternative rule that one stroke equals 1? If this were so, then the missing fourth symbol in this cryptic series would not be an “M” with a bar drawn horizontally through it (in keeping with our different rule, this strange symbol could stand instead for the number 5), but disconcertingly could be drawn either as a single stroke (Fig. 1d), or possibly as an inscribed circle, the letter “O,” or a zero (Fig. 1e).
Doubt as to the proper solution to Martínez’s series of symbols illustrates Wittgenstein’s cryptic and oft-quoted remark: “This was our paradox: no course of action could be determined by a rule, because every course of action can be made out to accord with the rule. The answer was: if everything can be made out to accord with the rule, then it can also be made out to conflict with it. And so there would be neither accord nor conflict” (quoted in: Biletzki and Matar 2006).
I am not a philosopher, nor a novelist. It seems to me, however, that Martínez’s tale and Wittgenstein’s remark both tell us something about ourselves, about how we are given to looking for similarities among things and events as proof that what we are seeing makes sense not by chance but by necessity. It might even be argued that human beings are strongly predisposed to equate similarity with necessity.
This is why we need statisticians, however much statistics may sometimes seem only a cultivated way of lying for effect. They keep us from foolishly jumping to the conclusion that similarities in appearance or similarities in effect are necessarily similarities of cause.
And in this regard, we need to remember that when statisticians say that something should be attributed to “chance,” they do not mean “without cause.” Far from it: the point they are making is that the cause (or causes) is not necessarily the one we think it is.
Note: These observations were originally published as the introduction in my chapter "Return to the entangled bank: Deciphering the Lapita cultural series" in Sheppard, P. J., Thomas, T., and Summerhayes, G. R., eds., Lapita: Ancestors and Descendants, pages 255-269. Monograph 28. New Zealand Archaeological Association, Auckland, 2009.
Biletzki, Anat and Matar, Anat, “Ludwig Wittgenstein”, The Stanford Encyclopedia of Philosophy (Spring 2014 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/spr2014/entries/wittgenstein/>.
Please note: this commentary, recovered on 9-Jan-2017, was originally published in Science Dialogues on 16-April-2015.
“The first thing in a visit is to say ‘How d’ye do?’ and shake hands!” And here the two brothers gave each other a hug, and then they held out the two hands that were free, to shake hands with her.
Alice did not like shaking hands with either of them first, for fear of hurting the other one’s feelings; so, as the best way out of the difficulty, she took hold of both hands at once . . .
Through the Looking-Glass by Lewis Carroll, 1871
AT THE HANDS OF SOMEONE like William Shakespeare or Virginia Woolf, humans may come off sounding complex, cantankerous, and downright mean at times, but often also kind, noble, loving, and at least momentarily wise and intelligent. On the other hand, portrayals of our species in the reckonings of science are often far more one-sided and two-dimensional. Thus according to the zoologist Edward O. Wilson (2012) we are a tribal eusocial species committed to killing outsiders for the good of our home group. The evolutionary psychologist Steven Pinker (2011a) maintains that we all have in effect if not in fact violent demons lurking within us that must be tamed by reason, compassion, and good governance. The social scientists Samuel Bowles and Herbert Gintis (2011) have expressed a more favorable view of human nature, but again like Wilson and Pinker, they have described our willingness to cooperate with one another as an evolutionary mystery in need of resolution given that humans are selfish at heart and can be self-serving in their motivations.
The most parsimonious proximal explanation of cooperation, one that is supported by extensive experimental and other evidence, is that people gain pleasure from or feel morally obligated to cooperate with like-minded people. People also enjoy punishing those who exploit the cooperation of others, or feel morally obligated to do so. (Bowles and Gintis 2011: 3)
There are two major assumptions at the ground level of most current scientific analyses of human nature. The first is that selfishness is one of the prime movers of biological evolution. The second is the claim that human cooperation is based on reason, shame, and good gamesmanship. “The most important psychological contributor to the decline of violence over the long term may instead be reason: the cognitive faculties, honed by the exchange of ideas through language, that allow us to understand the world and negotiate social arrangements” (Pinker 2011b: 310). Both of these assumptions are questionable.
The fundamentals of evolutionary thinking as a way of explaining what we are seeing in the world of today and in the past have changed over time since Darwin’s day (Amundson 2014). Tom Clark has shown in his series of commentaries at SCIENCE DIALOGUES on Darwin’s use of “use and disuse” that during the latter half of the 20th century, the neo-Darwinian assumption that genes and environments were sufficient causes of observed behavior “turned natural selection from an animate doing into a physical happening. Attributing behavior to stable causes both inside (molecules) and outside (environment) turned animals into spectators, along for the ride.”
Clark underscores that how we tell our story of what it means to be human and how we have evolved to be the sort of animal we are directly leverages or constrains how well we handle our individual and collective impacts on the earth and our fellow human beings.
As Michael Ruse (2014) has observed, today natural selection is the mechanism seen by most experts on evolution as the chief reason for organic change. It is perplexing, however, that when it comes to our species, attempts to explain our general willingness to cooperate with one another often take it as self-evident that selfishness, infra-specific competition, and gamesmanship (Potter 1947; Rand et al. 2013) rule the day even when we seem to be acting in kind, considerate, and evidently caring ways towards others (Terrell 2015: 111–117).
Such scientific cynicism may make perfect sense given the ruling assumptions of neo-Darwinian theory today, but the picture looks quite different if it isn’t accepted from the get-go that selfishness has to be a part of every permissible Darwinian explanation for life’s diversity and history on earth.
Social baseline theory
The psychologist Lane Beckes and his colleague James Coan are studying empathy and cooperation based on a radically different view of what it means to be human, a research tactic they call social baseline theory (Beckes and Coan 2011). Their working assumption is one that many would accept with little disagreement: being a social animal gives any species a genuine and practical advantage in the Darwinian struggle for survival and reproduction. And for humans at least, having the capacity to live and work closely with others also gives us a social baseline of emotional support and security. So much so, they say, that our social ties with other people are in effect an extension of the way the human brain interacts with the world. As a consequence, when we are around others we know and trust, we can let down our guard and relax.
From this perspective, the experienced payoffs are more than emotional. When we thus feel safe and secure, we are literally able to devote less energy—and we would add, less time—to staying alert for possible threats and uncertainties. Indeed, they have argued that the human brain has evolved to assume the presence of other people. In their words: “In our view, the human brain is designed to assume that it is embedded within a relatively predictable social network characterized by familiarity, joint attention, shared goals, and interdependence.”
On the other side of the mirror
Beckes and Coan have said a major saving grace of human sociality is the energetic cost benefit of not having to be the only one looking out for number one (Beckes and Coan 2011; Coan and Maresh 2014; Coan and Sbarra 2015). While we would grant that there may be such a cost benefit, we are uncertain how decisive this savings has been in shaping human evolution. After all, the probability of survival is determined not only by how much effort you have to put into the struggle. It can be argued that we are such strongly social animals for other reasons, too. First, we critically depend on social learning to know how to survive in the first place. Second, many of us—but admittedly not all—are predisposed socially and emotionally to be caregivers because our offspring wouldn’t survive the first years of their lives if we weren’t (Terrell 2015: 190–191).
To survive and reproduce, organisms must take in more energy than they expend, a principle of behavioral ecology called economy of action. Social baseline theory (SBT), a framework based on this principle, organizes decades of observed links between social relationships, health, and well-being, in order to understand how humans utilize each other as resources to optimize individual energy expenditures. (Coan and Maresh 2014: 221)
Furthermore, there is the matter of time. It may be true that time is money, but we humans are pretty good at wasting time for apparently no good reason, energetic or otherwise. And certainly there is no denying that when we feel safe and secure, many of us are willing to invest both time and energy in seemingly unproductive ways.
Consider, for example, the metabolic cost of the continuing mental activity in what has been dubbed the brain’s default mode network (DMN) when we are not task-engaged. The reward of not having to attend closely to the practicalities of the world around us when we feel safely embedded in nurturing social networks may be the excitement Alice must have felt in Lewis Carroll’s story after she had slipped through the looking-glass to explore the hidden wonders to be found therein (although judging by his singular account, Alice evidently did not find doing so as addictive as some today find the similar cognitive experience of playing online computer games). Just as those incarcerated in our penal system may be given time off for good behavior, so too, sharing the demands and burdens of life with others gives us time off to play with whatever takes our fancy on that landscape between our ears.
Amundson, Ron (2014). Charles Darwin’s reputation: How it changed during the twentieth-century and how it may change again. Endeavour 38: 257–267.
Beckes, Lane and James A. Coan (2011). Social baseline theory: The role of social proximity in emotion and economy of action. Social and Personality Psychology Compass 5: 976–988.
Bowles, Samuel and Herbert Gintis (2011). A Cooperative Species: Human Reciprocity and Its Evolution. Princeton: Princeton University Press.
Coan, James A. and Erin L. Maresh (2014). Social baseline theory and the social regulation of emotion. In J. Gross, ed., The Handbook of Emotion Regulation, 2nd ed., pp. 221–236. New York: Guilford Press.
Coan, James A. and David A. Sbarra (2015). Social baseline theory: The social regulation of risk and effort. Current Opinion in Psychology 1: 87–91.
Pinker, Steven (2011a). The Better Angels of Our Nature: The Decline of Violence in History and its Causes. New York: Viking.
Pinker, Steven (2011b). Taming the devil within us. Nature 478: 309–311.
Potter, Stephen (1947). Theory and Practice of Gamesmanship. New York: Henry Holt & Company.
Rand, David G., Corina E. Tarnita, Hisashi Ohtsuki, and Martin A. Nowak (2013). Evolution of fairness in the one-shot anonymous Ultimatum Game. Proceedings of the National Academy of Sciences U.S.A. 110: 2581–2586.
Ruse, Michael (2014). Was there a Darwinian revolution? Yes, no, and maybe! Endeavour 38: 159–168.
Terrell, John Edward (2015). A Talent for Friendship: Rediscovery of a Remarkable Trait. Oxford and New York: Oxford University Press.
Wilson, Edward O. (2012). The Social Conquest of the Earth. New York: Liveright (a division of W. W. Norton).
Please note: this commentary, recovered on 9-Jan-2017, was originally published in Science Dialogues on 24-March-2015.
THE BEHAVIORIST B. F. SKINNER was famously opposed to “mentalistic explanations” for human behavior. By this he meant attributing to the world of the mind an active “top-down” role (Baumeister and Miller 2014) in determining what we think, say, and do. In his eyes, trying to explain our overt behavior by appealing to inner states of mind, feelings, and other elements of an “autonomous man” inside our skulls was utterly foolish, unscientific, and a waste of time. “The ease with which mentalistic explanations can be invented on the spot is perhaps the best gauge of how little attention we should pay to them” (Skinner 1971: 160).
Instead, according to Skinner, the “task of a scientific analysis is to explain how the behavior of a person as a physical system is related to the conditions under which the human species evolved and the conditions under which the individual lives” (1971: 14). As distasteful as some might find such a realization, “the fact remains that it is the environment which acts upon the perceiving person, not the perceiving person who acts upon the environment” (1971: 188).
Even Skinner was willing to concede the “indisputable fact of privacy.” Nonetheless he stuck to his staunch environmentalism. “It is always the environment which builds the behavior with which problems are solved, even when the problems are to be found in the private world inside the skin” (1971: 194).
In a scathing review of Skinner’s 1971 book Beyond Freedom and Dignity, the linguist Noam Chomsky thoroughly rejected Skinner’s scientific claims. “His speculations are devoid of scientific content and do not even hint at general outlines of a possible science of human behavior. Furthermore, Skinner imposes certain arbitrary limitations on scientific research which virtually guarantee continued failure” (Chomsky 1971).
Unfortunately Chomsky’s spirited defense of human freedom and dignity against Skinner’s denial of both offered few concrete hints on why we are not the automatons Skinner said we are. But how are we not controlled by the world around us and by all that life deals us, both painful and pleasurable? How and how much does Skinner’s nemesis “autonomous man” have any real say in what we think, feel, and do? Chomsky left these critical issues unexplored and undocumented.
The mind-body problem
The philosopher Jerry Fodor noted in 1980 that traditional philosophies of mind can be divided into two sorts: dualist theories and materialist theories. “In the dualist approach the mind is a nonphysical substance. In materialist theories the mental is not distinct from the physical; indeed, all mental states, properties, processes and operations are in principle identical with physical states, properties, processes and operations” (Fodor 1980: 114). Since then, cognitive psychologists and experts in neuroscience imaging have come down more or less firmly on the side of materialist theories, although exactly how the neurological hardware and software called the brain processes information and arrives at conclusions remains more an educated guess than a demonstrated reality.
Awkwardly, what has traditionally been called the “mind-body problem” has often been seen in both science and philosophy as a conundrum about the consciousness of our thoughts and decisions. Yet as Max Velmans (2008) has observed, “it is now clear that ‘mind’ is not quite the same thing as ‘consciousness,’ and that the aspect of body most closely involved with consciousness is the brain. It is also clear that there is not one consciousness–brain problem, but many.” In other words, reading “mind and body” to mean “consciousness and brain tissue” is far too restrictive, too limiting.
Recently Ralph Adolphs (2015) at the California Institute of Technology surveyed what we do and don’t know about consciousness as a mental phenomenon and finds that there is little agreement about what it is and how it works. He helpfully divides the unsolved problems in neuroscience into four basic categories ranging from those that are now solved or will soon be to those that may never be decided. Discouragingly, he puts three key issues in the latter category. (1) How does the human brain compute? (2) How can cognition be so flexible and generative? (3) How and why does conscious experience arise?
His final conclusion is equally sobering. “In a nutshell, then, the biggest unsolved problem is how the brain generates the mind, conceived of in a way that does not simultaneously require answering the problem of consciousness.” However, on a more promising note, he adopts the framework proposed by David Marr (1982) to suggest that memory at least can be understood as the “ability to predict the future by learning.”
This comment is worth emphasizing. Unlike old Father William in Lewis Carroll’s famous poem, who elected to stand on his head again and again once perfectly sure he had no brain, we see the design and decision-making that are both so fundamental to human niche construction as tangible proof that the human brain is capable of stimulus-independent, self-directed thought (Bonn 2013)—a roundabout way of saying that the cognitive manipulations and innovations happening in our minds can lead to top-down, not just bottom-up, causation (Foulkes and Domhoff 2014).
Evidence favoring this admittedly far from surprising conclusion can be seen readily enough in what happens on the landscape between our ears during that mysterious cognitive activity called dreaming.
Dreams and dreaming
It is an enduring folk belief that we live our lives on-again off-again in dichotomous ways. We are either happy or sad, awake or asleep, conscious or unconscious, rational or emotional, and so on.
Cognitive psychology today, however, is discovering that a great deal that is happening in the brain instrumental to our survival, success, and emotional well-being is (1) largely disengaged from our conscious awareness of what’s going on both inside and outside us (e.g., Mudrik et al. 2014; Soto and Silvanto 2014), and is (2) more dependent on our feelings and emotions than conventionally seen (e.g., Inzlicht et al. 2015).
Dreaming, like consciousness, is one of those arenas of mental life about which much has been written and yet much remains to be understood (Domhoff and Fox 2015). Here we offer two observations. First, dreaming is more a top-down brain activity than generally envisioned (Foulkes and Domhoff 2014). Second, nobody who has ever recalled a dream needs to be told by anyone else that our brains are capable of creating often credible but truly off-the-wall situations, scenarios, and storied experiences that may not only have lingering emotional impact long after awakening, but can also be a source of great inspiration and creative insight. In short, cognitive niche construction does not need to be either conscious or wakeful.
Saying you know for sure what free will is or isn’t has long been a reliable way of provoking debate (Monroe et al. 2014). Nonetheless, here are three claims based on what we have been discussing thus far in this SCIENCE DIALOGUES series. First, human beings can think about things and actions—past, present, or future—without being aware that they are doing so (Bonn 2013). Second, human beings can act in accord with the worlds they construct for themselves in their minds. Third, free will does not have to be rational if by rational we mean “makes sense” in terms of the external world and the laws of physics, etc. Cognitive niche construction may begin with our own experiences of the world, but it does not have to end there. And as we shall discuss in later commentaries in this series, therein lies a problem.
Adolphs, Ralph (2015). The unsolved problems of neuroscience. Trends in Cognitive Sciences, in press.
Buschman, Timothy J. and Earl K. Miller (2014). Goal-direction and top-down control. Philosophical Transactions of the Royal Society B: Biological Sciences 369: 20130471 http://dx.doi.org/10.1098/rstb.2013.0471.
Bonn, Gregory B. (2013). Re-conceptualizing free will for the 21st century: Acting independently with a limited role for consciousness. Frontiers in Psychology 4: 920. doi: 10.3389/fpsyg.2013.00920
Chomsky, N. (1971). The case against B. F. Skinner. The New York Review of Books 17: 18–24.
Domhoff, G. William and Kieran C. R. Fox (2015). Dreaming and the default network: A review, synthesis, and counterintuitive research proposal. Consciousness and Cognition 33: 342–353.
Fodor, Jerry (1981). The mind-body problem. Scientific American 244(1): 114–123.
Foulkes, David and G. William Domhoff (2014). Bottom-up or top-down in dream neuroscience? A top-down critique of two bottom-up studies. Consciousness and Cognition 27: 168–171.
Inzlicht, Michael, Bruce D. Bartholow, and Jacob B. Hirsh (2015). Emotional foundations of cognitive control. Trends in Cognitive Sciences 19: 126–132.
Marr, David (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco: W. H. Freeman.
Monroe, A. E., Dillon, K. D., and Malle, B. F. (2014). Bringing free will down to earth: People’s psychological concept of free will and its role in moral judgment. Consciousness and Cognition 27: 100–108.
Mudrik, Liad, Nathan Faivre, and Christof Koch (2014). Information integration without awareness. Trends in Cognitive Sciences 18: 488–496.
Skinner, B. F. (1971). Beyond Freedom and Dignity. New York: Alfred A. Knopf.
Soto, David and Juha Silvanto (2014). Reappraising the relationship between working memory and conscious awareness. Trends in Cognitive Sciences 18: 520–525.
Velmans, Max (2008). How to separate conceptual issues from empirical ones in the study of consciousness. In R. Banerjee and B. K. Chakrabarti (eds.), Models of Brain and Mind: Physical, Computational and Psychological Approaches (Progress in Brain Research 168), pp. 1–9.
We thank Tom Clark and Kevin Kelly for their comments and suggestions for improvement.
Please note: this commentary, recovered on 9-Jan-2017, was originally published in Science Dialogues on 5-March-2015.
“Oh, Kitty! how nice it would be if we could only get through into Looking-glass House! I’m sure it’s got, oh! such beautiful things in it! Let’s pretend there’s a way of getting through into it, somehow, Kitty. Let’s pretend the glass has got all soft like gauze, so that we can get through. Why, it’s turning into a sort of mist now, I declare! It’ll be easy enough to get through—” She was up on the chimney-piece while she said this, though she hardly knew how she had got there. And certainly the glass was beginning to melt away, just like a bright silvery mist.
Through the Looking-Glass by Lewis Carroll, 1871
ALICE’S ADVENTURES IN WONDERLAND (first published in 1865) is Lewis Carroll’s most beloved book thanks in part to Walt Disney Studios and its 1951 cartoon version that beautifully captured the logical nonsense of Carroll’s rich fantasy world of talking rabbits, smiling cats, and unlikely occurrences. The Disney cartoon, however, also incorporated a few of the characters and events from Carroll’s sequel Through the Looking-Glass and What Alice Found There (1871).
While both books exhibit his brilliance at cognitive niche construction, Carroll may have been inspired to frame his second story around the otherworldly semblance of reality seen in a looking-glass by his own reflections on the elusiveness of human thought. He was a mathematician first and foremost. He knew well that however much our thoughts may mirror the world around us and what we experience from the cradle to the grave, each of us lives in a cognitive world populated by our own private thoughts on the “other side of the looking-glass”—a world that, unlike Alice, others cannot enter and explore.
The human conundrum
There is nothing surprising about saying we can draw a line between our public lives and our private thoughts. There is also nothing remarkable about saying we are evidently not the only species capable of entertaining private thoughts and passions; look into the eyes of any dog. As one of us has explored more fully elsewhere (Terrell 2015), our evolved human capacity to engage in cognitive niche construction—however remarkable or shared with at least some other species—brings with it costs as well as benefits. Socially we have evolved as a species to both want and need human contact and engagement. Yet during the evolution of our huge human brain we achieved a level of private cognition that enables us to disengage from the world around us. Hence as a species we are confronted with a conundrum. We are social creatures with private thoughts “on the other side of the mirror” that can isolate us from others.
THE DYNAMIC INTERPLAY OF HUMAN EVOLUTION
Over the course of human evolution there has been a dynamic interplay between our mental and physical abilities, our brains, our social behavior, and our cleverness at niche construction that has also nurtured our skillfulness as a species at cognitive niche construction.
Lou, Laurence, and Leslie
Modeling how our minds work has taken many twists and turns over the course of human history. Some of the more extreme recent interpretations have insisted either that the brain is massively modular in its aptitudes (Steven Pinker and other evolutionary psychologists), or alternatively is passively shaped, or sculpted, by our interactions with the world around us (classical 20th century stimulus-response psychology).
At this stage in our investigations, we prefer to remain agnostic about how the brain’s circuitry gives us the capacity for thinking as seen in all its many dimensions, public and private (Adolphs 2015; Lamme 2006). In keeping with Daniel Kahneman’s (2011) advice to treat such labels as tools for thought rather than as literal descriptions of our cerebral hardware, we find it useful to characterize how we think about things and events in three different ways using the labels Lou, Laurence, and Leslie (for discussion, see: Terrell 2015: chapter 4):
Lou (also known as System 1 or Type 1)—thinking that is unconscious, automatic, quick, perhaps emotional, and easy to do; in short, information processing in the brain done mostly without conscious awareness; a type of thinking that may be evolutionarily old and is probably also within the mental capabilities of other animal species; the realm of our habitual selves.
Laurence (called System 2 or Type 2)—thinking that is conscious, slow, takes effort, and is purposeful; usually said to be involved in “higher-order” cognitive processes such as logical reasoning and decision-making; may or may not be unique to our species; the realm of intentional environmental niche construction.
Leslie—thinking that is contemplative, abstract, may be counterfactual, and is largely detached from an individual’s immediate realities; may or may not be unique to our species; the realm of cognitive niche construction.
As Jessica Andrews-Hanna and her colleagues observed recently, understanding the mechanisms underlying self-generated thought and its adaptive and maladaptive functional outcomes has been a key aim of cognitive science in recent years (Andrews-Hanna et al. 2014: 29). In their estimation, the default mode network (DMN) within our skulls, far from being a passive brain phenomenon, contributes to several active forms of internally driven cognition. As she and her colleagues have written:
Tasks that activate the network often require participants to retrieve episodic, autobiographical, or semantic information, think about or plan aspects of their personal future, imagine novel scenes, infer the mental states of other people, reason about moral dilemmas or other scenarios, comprehend narratives, self-reflect, reference information to one’s self, appraise or reappraise emotional information, and so on. (Andrews-Hanna et al. 2014: 32)
Although much remains to be learned about the costs and benefits of self-generated thought—which has also been dubbed stimulus-independent thought, spontaneous thought, internally-directed thought, and mind-wandering—it is becoming increasingly clear that the default and executive networks in the brain are not inherently working in opposition.
Kalina Christoff and her colleagues, as a case in point, have argued that both networks can work in parallel in ways that are reminiscent of the neural recruitment observed during creative thinking before solving problems with insight. Furthermore, “similar parallel recruitment of executive and default regions has also been observed during naturalistic film viewing, which is related to immersive simulative mental experience” (Christoff et al. 2009: 8723).
What we find both intriguing and frustrating is that many researchers studying self-generated thought, with notable exceptions (Killingsworth and Gilbert 2010), seem committed to the view that internally-directed thought must somehow be adaptive: even when mind-wandering appears to be getting us away from what we really ought to be doing to survive and make a living, it may nonetheless “enable the parallel operation of diverse brain areas in the service of distal goals that extend beyond the current task” (Christoff et al. 2009: 8723).
Perhaps, but not necessarily so, as we shall discuss in the next commentary in this series.
Andrews-Hanna, Jessica R., Jonathan Smallwood, and R. Nathan Spreng (2014). The default network and self-generated thought: Component processes, dynamic control, and clinical relevance. Annals of the New York Academy of Sciences 1316: 29–52.
Christoff, Kalina, Alan M. Gordon, Jonathan Smallwood, Rachelle Smith, and Jonathan W. Schooler (2009). Experience sampling during fMRI reveals default network and executive system contributions to mind wandering. Proc. Natl. Acad. Sci. U.S.A. 106: 8719–8724.
Kahneman, Daniel (2011). Thinking: Fast and Slow. New York: Farrar, Straus and Giroux.
Killingsworth, Matthew A., and Daniel T. Gilbert (2010). A wandering mind Is an unhappy mind. Science 330: 932.
Lamme, Victor A. F. (2006). Towards a true neural stance on consciousness. Trends in Cognitive Sciences 10: 494–501.
Terrell, John Edward (2015). A Talent for Friendship: Rediscovery of a Remarkable Trait. Oxford and New York: Oxford University Press.
Please note: this commentary, recovered on 9-Jan-2017, was originally published in Science Dialogues on 22-Jan-2015.
“Can we state more distinctly still the manner in which the mental life seems to intervene between impressions made from without upon the body, and reactions of the body upon the outer world again?”
William James, The Principles of Psychology, 1890: 6
THE NEUROLOGIST MARCUS RAICHLE HAS remarked that studies of brain function have traditionally focused on task-evoked responses (Raichle 2010, 2015). As Daniel Kahneman has explained, such research has contributed the useful convention that there are two modes of thinking—two systems in the mind, System 1 (or Type 1) and System 2 (or Type 2). In Kahneman’s words (2011: 20–21):
System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control.
System 2 allocates attention in the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.
Although such conventions are useful, Raichle argues that focusing on task-evoked responses “ignores the alternative possibility that brain functions are mainly intrinsic, involving information processing for interpreting, responding to and predicting environmental demands” (2010: 180).
As he says, it is not difficult to see why so much attention has been given to monitoring neural responses to carefully designed tasks that can be rigorously controlled: “evaluating the behavioral relevance of intrinsic activity (i.e. ongoing neural and metabolic activity which is not directly associated with subjects’ performance of a task) can be an elusive enterprise” (2010: 180).
While it could be argued that such intrinsic brain activity is part and parcel of System 2 thinking, I believe it may be more constructive to infer instead that there is a third mode of thinking—one that I have suggested may be called cognitive niche construction (Terrell 2015: 29–32, 168–172)—a way of thinking that may strongly engage the brain’s default-mode network.
As Raichle (2015) and Robert Spunt and his colleagues (in press) have underscored, there is considerable metabolic cost to running the human brain when it is engaged in ongoing internal activity. As the latter researchers observe: “most of the brain’s energy budget is consumed not by activity evoked by specific cognitive tasks (e.g., mental arithmetic) but by spontaneous ongoing activity that is most notable when the brain is at rest.”
Given the metabolic cost of this ongoing internal activity in what has been dubbed the brain’s default mode network (DMN) when we are not task-engaged, an obvious question arises. How can we afford such stimulus-independent activity?
Raichle, Spunt et al., and others stress the likelihood that such inner-directed brain activity must be somehow adaptive in a realistic Darwinian sense, i.e., this inner activity must be “functionally consequential for the execution of stimulus-dependent mental state inferences” (Spunt et al. in press). This inference is plausible, but arguably not sufficient.
How we are able to remake the world around us when we put our minds and backs to the effort has been called niche construction (Odling-Smee et al. 2003). In the biological sciences, the word “niche” means “way of life,” and every species is said to have its particular place, or niche, in the economy of life. We are just one of a number of species that excel at making and remaking their way of life, their place in the grand scheme of things, their ecological niche. Similarly, I have argued that even when it may look as if we are day-dreaming, our minds actually may be hard at work engaged in cognitive niche construction—a way of using our brains that is possibly but not necessarily unique to our species (Terrell 2015).
Others recently have also written about cognitive niche construction, but what they evidently have in mind may be more clearly activity under the heading of System 2 thinking. Steven Pinker, for instance, has defined cognitive niche construction as “a mode of survival characterized by manipulating the environment through causal reasoning and social cooperation” (Pinker 2010: 8993).
Such a description glosses over how difficult it can be to apply what we envision in our mind’s eye to the realities of life. More to the point, such a definition does not confront the obvious weakness of cognitive niche construction at least as I have described it. What goes on between our ears when we are engaged in such mental activity does not have to be rational at all, at least not if by “rational” we mean thinking that makes practical sense in the real world outside our bodies.
By detaching from the realities of the moment and turning our mind to our inner thoughts, we are able to ponder what I like to call the “coulds & shoulds” of life. We can devote our mind to a kind of imaginary niche construction that does not even have to be “of this world” at all. We can see seemingly impossible things in our mind’s eye. We can engage in “what if” fantasies of remarkable, perhaps sexually charged, and even quite unrealistic complexity. We can invent imaginary worlds, invent new things, rewrite the story of our life to our heart’s content. All in the mind rather than in the real world.
In short, it seems likely we engage in cognitive niche construction not just for interpreting, responding to, and predicting environmental demands—to paraphrase what Raichle has previously said. As Spunt et al. observe: “Given that the DMN activity is metabolically costly, widely distributed in the cortex, and highly sensitive to both the presence and type of task demand, it should be no surprise that this network would have functional consequences in multiple domains” (Spunt et al., in press).
They themselves hypothesize that natural selection has favored the evolution of such a costly DMN in humans (and possibly also in chimpanzees and monkeys) so that we can more skillfully “see the world in terms of other minds” and live together socially—thereby gaining far more socially than would be likely by living separately.
While this is a plausible hypothesis, it is not the only one possible, as Gabriel Terrell and I will discuss in the forthcoming commentaries.
Editor’s note: This is the first in a series of eight commentaries at SCIENCE DIALOGUES on cognitive niche construction and its implications for psychology, philosophy, and the social sciences generally.