by Scott Bakker
“… when you are actually challenged to think of pre-Darwinian answers to the questions ‘What is Man?’ ‘Is there a meaning to life?’ ‘What are we for?’, can you, as a matter of fact, think of any that are not now worthless except for their (considerable) historic interest? There is such a thing as being just plain wrong and that is what, before 1859, all answers to those questions were.” (Richard Dawkins, The Selfish Gene, p. 267)
Biocentrism is dead for the same reason geocentrism is dead, for the same reason all of our prescientific theories regarding nature are dead: our traditional assumptions simply could not withstand scientific scrutiny. All things being equal, we have no reason to think our nature will conform to our prescientific assumptions any more than any other nature has historically. Humans are prone to draw erroneous conclusions in the absence of information. In many cases, we find our stories more convincing the less information we possess [1]! So it should come as no surprise that the sciences, which turn on the accumulation of information, would consistently overthrow traditional views. All things being equal, we should expect that any scientific investigation of our nature will out-and-out contradict our traditional self-understanding.
Everything, of course, turns on all things being equal — and I mean everything. All of it, the kaleidoscopic sum of our traditional, discursive human self-understanding, rests on the human capacity to know the human absent science. As Jerry Fodor famously writes:
“if commonsense intentional psychology really were to collapse, that would be, beyond comparison, the greatest intellectual catastrophe in the history of our species; if we’re that wrong about the mind, then that’s the wrongest we’ve ever been about anything. The collapse of the supernatural, for example, didn’t compare; theism never came close to being as intimately involved in our thought and practice — especially our practice — as belief/desire explanation is.” [2]
You could say the scientific overthrow of our traditional theoretical understanding of ourselves amounts to a kind of doomsday, the extinction of the humanity we have historically taken ourselves to be. Billions of “selves,” if not people, would die — at least for the purposes of theoretical knowledge!
For years now I’ve been exploring this “worst-case scenario,” both in my novels and in my online screeds. After I realized the penury of the standard objections (and as a one-time Heideggerian and Wittgensteinian, I knew them all too well), I understood that such a semantic doomsday scenario was far from the self-refuting impossibility I had habitually assumed [3]. Fodor’s ‘greatest intellectual catastrophe’ was a live possibility — and a terrifying one at that. What had been a preposterous piece of scientistic nonsense suddenly became the most important problem I could imagine. Two general questions have hounded me ever since. The first was, What would a post-intentional future look like? What could it look like? The second was, Why the certainty? Why are we so convinced that we are the sole exception, the one domain that can be theoretically cognized absent the prostheses of science?
With reference to the first, I’ll say only that the field is quite lonely, a fact that regularly amazes me, but never surprises [4]. The second, however, has received quite a bit of attention, albeit yoked to concerns quite different from my own.
So given that humanity is just another facet of nature, why should we think science will do anything but demolish our traditional assumptions? Why are all things not equal when it comes to the domain of the human? The obvious answer is simply that we are that domain. As humans, we happen to be both the object and the subject of the domain at issue. We need not worry that cognitive science will overthrow our traditional self-understanding, because, as humans, we clearly possess a privileged epistemic relation to humans. We have an “inside track,” you could say.
The question I would like to explore here is simply, Do we? Do we possess a privileged epistemic relation to the human, or do we simply possess a distinct one? Being a human, after all, does not entail theoretical knowledge of the human. Our ancestors thrived in the absence of any explicit theoretical knowledge of themselves — luckily for us. Moreover, traditional theoretical knowledge of the human doesn’t really exhibit the virtues belonging to scientific theoretical knowledge. It doesn’t command consensus. It has no decisive practical consequences. Even where it seems to function practically, as in law, say, no one can agree on how it operates, let alone just what is doing the operating. Think of the astonishing epistemic difference between mathematics and the philosophy of mathematics!
If anything, traditional theoretical knowledge of the human looks an awful lot like prescientific knowledge in other domains. Like something that isn’t knowledge at all.
Here’s a thought experiment. Try to recall “what it was like” before you began to ponder, to reflect, and most importantly, before you were exposed to the theoretical reflections of others. I’m sure we all have some dim memory of those days, back when our metacognitive capacities were exclusively tasked to practical matters. For the purposes of argument, let’s take this as a crude approximation of our base metacognitive capacity, a ballpark of what our ancestors could metacognize of their own nature before the birth of philosophy.
Let’s refer to this age of theoretical metacognitive innocence as “Square One,” the point where we had no explicit, systematic understanding of what we were. In terms of metacognition, you could say we were stranded in the dark, both as children and as a pre-philosophical species. No Dasein. No qualia. No personality. No normativity. No agency. No intentionality. I’m not saying none of these things existed (at least not yet), only that we had yet to discern them via reflection. Certainly we used intentional terms, talked about desires and beliefs and so on, but this doesn’t entail any conscious, theoretical understanding of what desires and beliefs and so on were. Things were what they were. Scathing wit and sour looks silenced those who dared suggest otherwise.
So imagine this metacognitive dark, this place you once were, and where a good number of you, I am sure, believe your children, relatives, and students — especially your students — still dwell. I understand the reflex is to fill this cavity, to clutter it with a lifetime of insight and learning, to think of the above as a list of discoveries (depending on your intentional persuasion, of course), but resist: recall the darkness of the room you once dwelt in, the room of you, back when you were theoretically opaque to yourself.
But of course, it never seemed “dark” back then, did it? Ignorance never does, so long as we remain ignorant of it. If anything, ignorance makes what little you do see appear to be so much more than it is. If you were like me, anyway, you assumed that you saw pretty much everything there was to see, reflection-wise. Since your blinkered self-view was all the view there was, the idea that it comprised a mere peephole had to be preposterous. Why else would the folk regard philosophy as obvious bunk (and philosophers as unlicensed lawyers), if not for the wretched poverty of their perspectives?
The Nobel Laureate Daniel Kahneman calls this effect “what you see is all there is,” or WYSIATI. As he explains:
“You cannot help dealing with the limited information you have as if it were all there is to know. You build the best possible story from the information available to you, and if it is a good story, you believe it. Paradoxically, it is easier to construct a coherent story when you know little, when there are fewer pieces to fit into the puzzle. Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance.” [5]
The idea, basically, is that our cognitive systems often process information blind to the adequacy of that information. They run with what they get, presenting hopeless solutions as the only game in town. This is why our personal Square One, benighted as it seems now, seemed so bright back then, and why “darkness,” perhaps our most common metaphor for ignorance, needs to be qualified. Darkness actually provides information regarding the absence of information, and we had no such luxury as children or as a species. We lacked access to any information tracking the lack of information: the “darkness” we had to overcome, in other words, was the darkness of neglect. Small wonder our ignorance has felt so enlightened at every turn! Only now, with the wisdom of post-secondary education, countless colloquia, and geriatric hindsight, can we see how little of ourselves we could see back then.
But don’t be too quick to shake your head and chuckle at your youthful folly, because the problem of metacognitive neglect obtains as much in your dotage as in your prime. You agree that we suffered metacognitive neglect both as pre-theoretical individuals and as a pre-theoretical species, and that this was why we failed to see how little we could see. This means 1) that you acknowledge the extreme nature of our native metacognitive incapacity, the limited and — at least in the short term — intractable character of the information nature has rendered available for reflection; and 2) that this incapacity applies to itself as much as to any other component of cognition. You acknowledge, in other words, the bare possibility that you remain stranded at Square One.
Thanks to WYSIATI, the dark room of self-understanding cannot but seem perpetually bright. Certainly it feels “different this time,” but given the reflexive nature of this presumption, the worry is that you have simply fallen into a more sophisticated version of the same trap. Perhaps you simply occupy a more complicated version of Square One, a cavity “filled with sound and fury,” but ultimately signifying nothing.
Raising the question, Have we shed any real theoretical light on the dark room of the human soul? Or does it just seem that way?
The question of metacognitive neglect has to stand among the most important questions any philosopher can ask, given that theoretical reflection comprises their bread and butter. This is even more the case now that we are beginning to tease apart the neurobiology of metacognition. The more we learn about our basic metacognitive capacities, the more heuristic, error-prone, and fractionate they become [6]. The issue is also central to the question, posed above, of what the sciences will likely make of the human. If we haven’t shed any real theoretical light on the human room, then it seems fair to say our relation to the domain of the human, though epistemically distinct, is not epistemically privileged, at least not in any way that precludes the possibility of Fodor’s semantic doomsday.
So how are we to know? How might we decide whether we, despite our manifest metacognitive incapacity, have groped our way beyond Square One, whether the clouds of incompatible claims comprising our traditional theoretical knowledge of the human actually orbit something real? What discursive features should we look for?
The capacity to command consensus can’t be one of them. This is the one big respect in which traditional theoretical knowledge of the human fairly shouts Square One. Wherever you find intentional phenomena theorized, you find interminable controversy.
Practical efficacy has promise — this is where Fodor, for instance, plants his flag. But we need to be careful not to conflate (as he does) the efficacy of various cognitive modes with the theoretical tales we advance to explain them. No one needs an explicit theory of rule-following to speak of rules. Everyone agrees that rules are needed, but no one can agree on what rules are. If the efficacy of the phenomena requiring explanation — the efficacy of intentional terms — attested to the efficacy of the theoretical posits conflated with them, then each and every brand of intentionalism would be a kind of auto-evidencing discourse. In fact, the efficacy of Square One intentional talk evidences only the efficacy of Square One intentional talk, not any given theory of that efficacy, most of which, quite notoriously, seem to have no decisive practical problem-solving power whatsoever. Though intentional vocabulary is clearly part of the human floor-plan, it is simply not the case that we’re “born mentalists.” We seem to be born spiritualists, if anything [7]!
Certainly a good number of traditional concepts have been operationalized in a wide variety of scientific contexts — things like “rationality,” “representation,” “goal,” and so on — but they remain opaque, and continually worry the naturalistic credentials of the sciences relying on them. In the case of cognitive science, they have stymied all attempts to define the domain itself — cognition! And what’s more, given that no one is denying the functionality of intentional concepts (just our traditional accounts of them), the possibility of exaptation [8] should come as no surprise. Finding new ways to use old tools is what humans do. In fact, given Square One, we should expect to continually stumble across solutions we cannot decisively explain, much as we did as children.
Everything turns on understanding the heuristic nature of intentional cognition, how it has adapted to solve the behavior of astronomically complex systems (including itself) absent any detailed causal information. The apparent indispensability of its modes turns on the indispensability of heuristics more generally, the need to solve problems given limited access and resources. As heuristic, intentional cognition possesses what ecological rationalists call a “problem ecology,” a range of adaptive problems [9]. The indispensability of human intentional cognition (upon which Fodor also hangs his argument) turns on its ability to solve problems involving systems far too complex to be economically cognized in terms of cause and effect. It’s all we’ve got.
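To make the notion of a cause-neglecting heuristic concrete, consider a toy sketch, loosely modelled on the “recognition heuristic” studied by the ABC group (my own invented illustration; the city names and the recognition set are made up, not drawn from their data): asked which of two cities is larger, simply pick the one you recognize, consulting no causal story about why cities grow.

```python
# A toy "cause-neglecting" heuristic: decide which of two cities is larger
# using only whether each name is recognized. No causal facts about urban
# growth are consulted anywhere. The recognition set is invented for the
# sake of the example.

RECOGNIZED = {"Berlin", "Hamburg", "Munich"}

def larger_city(a: str, b: str) -> str:
    """Prefer the recognized name; if the cue doesn't discriminate, guess."""
    a_known, b_known = a in RECOGNIZED, b in RECOGNIZED
    if a_known and not b_known:
        return a
    if b_known and not a_known:
        return b
    return a  # recognition is silent here: fall back to an arbitrary guess

print(larger_city("Berlin", "Herne"))  # -> Berlin
```

Within the right problem ecology (environments where recognition happens to correlate with size) such a procedure performs remarkably well; outside it, it fails without ever signalling that it has failed, which is exactly the worry being raised here about intentional cognition.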
So we have to rely on cause-neglecting heuristics to navigate our world. Always. Everywhere. Surely these cause-neglecting heuristics are among the explananda of cognitive science. Since intentional concepts often figure in our use of these heuristics, they will be among the things cognitive science eventually explains. And then we will finally know what they are and how they function — we will know all the things that deliberative, theoretical metacognition neglects.
The question of whether some kind of explanation over and above this — famously, some explanation of intentional concepts in intentional terms — is required simply becomes a question of problem ecologies. Does intentional cognition itself lie within the problem ecology of intentional cognition? Can the nature of intentional concepts be cashed out in intentional terms?
The answer has to be no — obviously, one would think. Why? Because intentional cognition solves by neglecting what is actually going on! As the sciences show, it can be applied to various local problems in various technical problem ecologies, but only at the cost of a more global causal understanding. It helps us make some intuitive sense of cognition, allows us to push in certain directions along certain lines of research, but it can never tell us what cognition is, simply because solving that problem requires the very information intentional cognition has evolved to do without. Intentional cognition, in other words, possesses ecological limits. Lacking any metacognitive awareness of those limits, we have the tendency to apply it to problems it simply cannot solve. Indeed, our chronic misapplication of intentional cognition to problem ecologies that only causal cognition could genuinely solve is one of the biggest reasons why science has so reliably overthrown our traditional understanding of the world. The apocalyptic possibility raised here is that traditional philosophy turns on the serial misapplication of intentional cognition to itself, much as traditional religion, say, turns on the serial misapplication of intentional cognition to the world.
Of course intentional cognition is efficacious, but only given certain problem ecologies. This explains not only the local and limited nature of its posits in various scientific contexts, but why purely philosophical accounts of intentional cognition possess no decisive utility whatsoever. Despite its superficial appeal, then, practical efficacy exhibits discursive features entirely consistent with Square One (doomsday). So we need to look elsewhere for our redeeming discursive feature.
But where? Well, the most obvious place to look is to science. If our epistemic relation to ourselves is privileged as opposed to merely distinct, then you would think that cognitive science would be revealing as much, either vindicating our theoretical metacognitive acumen or, at the very least, trending in that direction. Unfortunately, precisely the opposite is the case. Memory is not veridical. The feeling of willing is inferential. Attention can be unconscious. The feeling of certainty has no reliable connection to rational warrant. We make informed guesses as to our motives. Innumerable biases afflict both automatic and deliberative cognitive processes. Perception is supervisory, and easily confounded in many surprising ways. And the list of counter-intuitive findings goes on and on. Cognitive science literally bellows Square One, and how could it not, when it’s tasked to discover everything we neglect, all those facts about ourselves that utterly escape metacognition? Stanislas Dehaene goes so far as to state it as a law: “We constantly overestimate our awareness — even when we are aware of glaring gaps in our awareness” [10]. The sum of what we’re learning is the sum of what we’ve always been, only without knowing as much. Slowly, the blinds on the dark room of our theoretical innocence are being drawn back, and so far at least, it looks nothing at all like the room described by traditional theoretical accounts.
As we should expect, given the scant and opportunistic nature of the information our forebears had to go on. To be human is to be perpetually perplexed by what is most intimate — the skeptics have been arguing as much since the birth of philosophy! But since they only had the idiom of philosophy to evidence their case, philosophers found it easy to be skeptical of their skepticism. Cognitive science, however, is building a far more perilous case.
So, to sum up: Traditional theoretical knowledge of the human simply does not command the kind of consensus we might expect from a genuinely privileged epistemic relationship. It seems to possess some practical efficacy, but no more than what we would expect from a distinct (i.e., heuristic) epistemic relationship. And so far, at least, the science continues to baffle and contradict our most profound metacognitive intuitions.
Is there anything else we can turn to, any feature of traditional theoretical knowledge of the human that doesn’t simply rub our noses in Square One? Some kind of gut feeling, perhaps? An experience at an old New England inn?
You tell me. I can remember what it was like listening to claims like those I’m advancing here. I remember the kind of intellectual incredulity they occasioned, the welling need to disabuse my interlocutor of what was so clearly an instance of “bad philosophy.” Alarmism! Scientism! Greedy reductionism! Incoherent blather! What about quus? I would cry. I often chuckle and shake my head now. Ah, Square One . . . What fools we were way back when. At least we were happy.
Scott Bakker has written eight novels translated into over a dozen languages, including Neuropath, a dystopic meditation on the cultural impact of cognitive science, and the nihilistic epic fantasy series The Prince of Nothing. He lives in London, Ontario, with his wife and his daughter.
[1] A finding arising out of the heuristics and biases research program spearheaded by Amos Tversky and Daniel Kahneman. Kahneman’s recent Thinking, Fast and Slow provides a brilliant and engaging overview of that program. I return to Kahneman below.
[2] Psychosemantics, p. xii.
[3] Using intentional concepts does not entail commitment to intentionalism, any more than using capital entails a commitment to capitalism. Tu quoque arguments simply beg the question, assuming the truth of the very intentional assumptions in question in order to argue the incoherence of questioning them. If you define your explanation into the phenomena we’re attempting to explain, then alternative explanations will appear to beg your explanation to the extent that the phenomena play some functional role in the process of explanation more generally. Despite the obvious circularity of this tactic, it remains the weapon of choice for a great number of intentional philosophers.
[4] Another lonely traveller on this road is Stephen Turner, who also dares to ponder the possibility of a post-intentional future, albeit in a very different manner.
[5] Thinking, Fast and Slow, p. 201.
[6] See Stephen M. Fleming and Raymond J. Dolan, “The neural basis of metacognitive ability.”
[7] See Natalie A. Emmons and Deborah Kelemen, “The Development of Children’s Prelife Reasoning.”
[8] Exaptation: the evolutionary co-option of an existing trait for a function other than the one it originally served.
[9] I urge anyone not familiar with the Adaptive Behaviour and Cognition Research Group to investigate their growing body of work on heuristics.
[10] Consciousness and the Brain, p. 79. For an extended consideration of the implications of the Global Neuronal Workspace Theory of Consciousness regarding this issue, see R. Scott Bakker, The Missing Half of the Global Neuronal Workspace.