Scott Adams and Philosophy
Experience Is Full of Theories
Not that you can get ahead of your judgment—even your experience is laden with theory. That’s one thing that Adams gets right: that two groups of people can be “watching two different movies on the same screen.” Your mind is constantly trying to get a good fit between its sensory input and its “movie.” You can see this at work in the visual illusion of the sketch that seems to change from a sketch of a rabbit to one of a duck and back again. The sketch fits both interpretations, so your visual system vacillates between the two. Make a small change to the sketch, and the illusion evaporates.
The same is true of your beliefs. The “movies” may flip, as with the abandonment of the delusion that Trump was the next Hitler. Adams says as much himself! So, what’s the point in calling people irrational? None.
Baby Scientist and Her Dog Defeat Moist Robot
We’re wired from birth both to jump to conclusions and to revise those conclusions. Babies actively make exploratory guesses about how the world works and then revise those guesses in the light of experience. Alison Gopnik’s research demonstrates that babies learn like scientists. Babies start with questions or problems, produce guesses, have those guesses refuted, and replace them with different and better ones.
The human infant is thrust into the world already armed with rudimentary expectations or theories about the world and actively tries to impose them. But the world kicks back. Recalcitrant experiences then modify the expectations, and better expectations or theories replace them. Gopnik elaborates this view in her book The Scientist in the Crib.
Adams has adopted the popular theory of human irrationality that became prominent at the end of the nineteenth century. Gustave Le Bon popularized the theory that the masses think in logically disconnected series of images and are moved by sheer affirmation, not logic.
Affirmation pure and simple, kept free of all reasoning and all proof, is one of the surest means of making an idea enter the mind of the crowds. (The Crowd, p. 77)
Le Bon was opposed to democracy. A few discerning scholars and intellectuals might be capable of rational deliberation, but not the masses, controlled by their emotions and biases. Adolf Hitler’s ideas on how to persuade the masses, expressed in Mein Kampf, are indebted to Le Bon.
Building on Le Bon, and trying to explain the popularity of fascism, Sergei Chakhotin later added Pavlov’s theory that we are just a complex set of reflexes. The socialist Chakhotin believed that the experience of fascism had shown how intellectuals could scientifically control the minds of the unintellectual and emotional masses. The visual propaganda of flags, symbols, monuments, and emotionally stirring rallies dominated the masses, and could be used scientifically by the left.
Here we see the background of Scott Adams’s concept of irrationality. In Pavlov’s theory, because a stimulus such as a bell has reliably preceded the receipt of food by a dog, eventually the dog starts to salivate before it receives food, on merely hearing the bell. The dog’s salivation has been “conditioned” to the bell. If the bell no longer precedes the food, the conditioned salivation reflex is extinguished. This is the pedigree of the so-called “Moist Robot Hypothesis.”
Pundits of the Moist Robot Hypothesis seem unaware that not only Pavlov’s theory but also the whole idea of reducing mental life to associations—the philosophy of associationism—is now refuted and defunct psychology. An extreme version is “the blank slate,” the theory that we enter the world with an empty mind. Less extreme views, such as Pavlov’s, allow us some innate tendencies specific to our species, but stimuli are supposed to determine the expression of these instincts. Not to worry, though! Adams allows you at least sex, love, and romance.
There are alternative approaches. One of them goes back to early-twentieth-century psychologists such as Otto Selz of the Würzburg school, who discovered that we don’t think solely in terms of images or associations, but in networks of “directed thoughts” and attempts to solve problems. Independently, Karl Popper came to a similar conclusion, that expectations precede experience, and he put forward the general schema: Problem 1 → Tentative Theory → Error Elimination → Problem 2. Experience enters the picture, not by forming associations or conditioned reflexes, but by modifying pre-existing expectations through surprise and disappointment.
Pavlov’s dogs and Gopnik’s babies are actively searching for regularities in their world, important for their goals and values, and they create trial expectations or theories. These are modified, and the baby and dog settle, for the time being, on an expectation of food and a new toy in the presence of the bell sound (or mother’s voice), enabling them to prepare for food (or play). When the bell (or voice) no longer presages food (or new toy), the expectation of food (or new toy) is refuted.
But the baby and the dog, busy exploring the world and attempting to answer their questions, often interrupt the attempt to “condition” them. If something novel happens, the baby stops smiling, and the dog no longer salivates, while they explore the novel event. Pavlov noticed this behavior in dogs but insisted on calling it a “reflex” anyway. The marketing guru Robert Cialdini, author of Pre-Suasion, whom Adams has admiringly nicknamed the “Godzilla” of influence, follows Pavlov in this when referring to the so-called “orienting reflex.” We’ve all experienced this “orienting reflex” in TV ads with a barrage of quick cuts, each shouting, “Look here!”
Thus, due to the innate bias of the human mind to understand the world, you end up producing a network of biases, intuitions, and prejudices that frames your view of the world. These biases are your indispensable guide to life, though they are not infallible and you may occasionally modify them.
Surviving Criticism
Science makes this innate tendency more formal, systematic, and powerful. Prompted by a problem, the scientist formulates competing theoretical conjectures about the world and then performs carefully controlled experiments to undermine them by confronting them with refuting observations. These conjectures, such as Einstein’s idea that the speed of light is a constant, are leaps in the dark that solve the problem at hand. This is where I leave Gigerenzer’s approach behind on the beach, finding a prettier shell than ordinary. His ecological rationality, brilliant though it is, is confined to Newton’s beach; any new knowledge, including new heuristics, is to be found in deep water. In that case there is nothing else for it: you have to take a plunge into the unknown.
Outside of science, where observational criticism may be hard to come by, the critic uses other standards against which to test the proposed idea, such as self-coherence and consistency with less problematic theories that do have observational tests. You can arrive at a true position, but only tentatively and without justification. The key is the invention of systematic methods to test our theories and even our very methods. This is the reputable method proposed by Karl Popper, critical rationalism. Of course, it’s a controversial philosophical view of how we gain knowledge, and I’m only dropping a hint about it. But it’s healthy to have some competition in ideas.
On the old conception of rationality, even criticism was defined as showing that a view lacks justification. But if justification is not feasible, then criticism has to be understood as confronting a position with a mismatch between it and some objective standard—one that can be publicly tested systematically by some method. Instead of proving that our theory is true, we settle for showing that it has survived attempts to prove it wrong. It looks true, it approximates the truth better than anything else we know, and it might even be true.
Battle of the Biases
Critical rationalism provides the tools crucial to rooting out our unconscious biases: competing theories expressed in language plus public testability. When we compare and test different theories, views, doctrines, and ideologies, our unconscious presuppositions and biases become conscious, placed on the slab for dissection. But we do need the liberty to express our biases without hindrance from such things as PC-speak or ministries of fake news. We need a battle of the biases. I rather suspect that Adams would agree with this.
From this perspective, it is irrelevant where and how you came by your bias or prejudice. Whether you got it by consulting tea leaves, taking a marijuana trip, bumping your head, hearing the speech of a charismatic leader at an emotionally charged rally while gazing at a beloved flag, having it repeated to you in a simple advertisement, or having your brain spontaneously produce it, the question is: can you correct it by critical argument? You can also see that simple assertion (“affirmation”) is completely legitimate. It is a conjecture.
My view puts biases in a radically different light. Take the anchor bias. When someone enters a negotiation with a figure (an anchor), that figure, even without a certificate of justification, is still something you can work with, correcting it according to your own guesses and counterarguments. You may even have to invent new standards or techniques for testing the hypotheses, to see if you can find a mismatch.
Emotion, the Badass of Biases
According to Adams, “When our feelings turn on, our sense of reason shuts off” (p. 45), and “People don’t change opinions about emotional topics just because some information proved their opinion to be nonsense. Humans aren’t wired that way” (Win Bigly, p. 61).
It’s common to divide our mental life into a reasoning or rational self, governed by logic and inference, and an emotional self, ruled by instinctual or habitual emotion. When I’m admiring my national flag or getting excited watching my local football team, I’m driven by passion. When I’m striving to get a good score on my SATs, I’m using reason. However, the Stoics, the ancient Greek philosophers who founded logic, refuted this view more than two thousand years ago.
The problem with the division between an emotional self and a rational self is that all emotion is thoughtful and all thought is emotional. Even the “unemotional” intellectual thoughts of the scientist are filled with feelings, perhaps feelings of curiosity and wonder. A woman walking down the street sees a man grab hold of a woman and violently throw her to one side. She is angry with the man and runs over to protest, only to find that the man was protecting the woman from tiles falling from a roof in the wind. (She interprets her experience through a theory, which is promptly refuted.) Her anger rapidly changes to admiration and relief. (Her old theory is replaced by a new one, creating new emotions.) And she can’t help changing her emotions in this way.
According to Adams’s “filter” theory, this is not supposed to happen; she ought to continue berating him, or beating him senseless, for his ungentlemanly behavior. But this is typical of emotions: what you feel partly depends on what you think, on what you believe to be fact. And belief is involuntary; you can’t change your beliefs by an act of will. And because you don’t choose your beliefs, you don’t decide how you feel. You can’t help re-checking your beliefs, moment by moment, even if your more fundamental values or goals remain more stable.
Oddly, Adams, I surmise, would agree with most of what I’ve said here. He even points out, in places, that biases may be useful as, for example, shortcuts in finding solutions. I can imagine him saying (though he hasn’t actually said it), “It’s better to have something to work with rather than nothing, better to have a biased mind than an empty mind.”
Standing on the seashore of your infinite ignorance, you may adopt a critical but kinder attitude to your follies and flaws, which, to a god, are but foibles of the finite. Not only are you free to find prettier shells than ordinary; given some courage you may even find some bizarre creatures out in the deep.
Breathing in the fresh sea breeze, you may also feel free to embrace the exciting thought that, while you may be moist, you’re no robot.
16
Why Scott Adams Is Stupid
DANIEL MIORI
In case the title didn’t give it away, the purpose of this chapter is to take a swipe at Scott Adams. If you’re a serious fan don’t worry, he doesn’t really care what’s written here. As this book goes to press, Wikipedia says his net worth is $70 million. This little chapter certainly won’t be putting a dent in that anytime soon. Additionally, by the Adams standard, that pot of money is proof that he’s much smarter than everyone else—experts in their fields, contributors to this book, readers of this book, everyone.
The statement “Scott Adams is a smart man” is probably correct, but other than the fact that he is sitting on a small mountain of cash, what proof do we have? He was valedictorian of his high school graduating class, but with only thirty-eight others in that class, it isn’t as interesting as you might think. What we know is that he got better grades than thirty-eight other people his age. The same could be said for someone who graduated 462nd in a class of five hundred.
We know he has a master’s degree from the University of California at Berkeley, which sounds good till you realize it’s a Master’s in Business Administration (MBA), often considered the rhinestone of graduate degrees: shiny but pretty much worthless. An informal survey of university faculty conducted for this chapter suggests that while there may be a few genuinely brilliant candidates in MBA programs, the majority simply have money in their pockets and a desperate craving for official recognition. Universities, even schools with good reputations like Berkeley, don’t mind taking that money. It annoys the crap out of the faculty forced to teach the MBA classes, but it’s way better than bake sales as a fundraiser.
Having initiated the requisite sarcasm and poking of fun, I can turn to the genuine purpose of this chapter: to look at Adams’s opinion of science and the scientific method, how he states those views, and to put that into a philosophical perspective. To keep this discussion manageable, it will be based on the following charitable and open-minded summary, which, in his own words, might sound like this:
• I’m smarter than the experts.
• I’m a cartoonist, not a moralist.
• If you disagree with me it’s because you aren’t as smart as I am.
• If you build a better factual case, you are hysterical and that isn’t what I said in the first place.
• If you argue that factual case better, then you misunderstand me entirely. I’m just a cartoonist and you lack a sense of humor; you don’t get that I’m just kidding around.
• And finally, I’m smarter than the experts.
As to how he states his views, basically he was the guy on the playground who would whisper, “Tommy’s mother might not be a whore, but all those guys who have sex with her leave money on her nightstand . . . I’m just sayin’.”
We will be looking at Adams’s—let’s call it—opinion on a few topics, but one which comes up often is the science of climate change. Note, however, that this chapter will not seek to refute any position he may or may not take; that can easily be done with well-referenced replies available on the internet, like Keith Pickering’s “A Detailed Reply to Scott Adams on Climate Science,” as well as easily accessible and very solid science, like the 2017 EPA report on climate change.
If you would like to break away quickly and read an example of his “writing,” the post on his website from September 11th, 2017, entitled “When to Trust the Experts (Climate and Otherwise)” should do. Spoiler alert: it’s not about when to trust experts; it’s about when not to trust experts who advance theories he doesn’t like. If you don’t want to read the post, that’s just as well; we will tumble along blissfully unencumbered by the facts, consistent with Adams’s example.
To substantiate all these admittedly juvenile opinions, we’ll review a sub-field of philosophy called epistemology; the philosophical concepts of empiricism, rationalism, and epistemic responsibility; and look at epistemic responsibility in research. Finally, as a bonus, we will unearth one of the great works of pseudo-science, The Basic Laws of Human Stupidity.
Epistemology
When we think of philosophers, most of us picture a crazy old coot with poor hygiene who gives good life advice. The reality couldn’t be farther from the truth. Philosophers are a conventional-looking and diverse bunch who wash regularly and rarely, if ever, give good advice. Philosophy may seem like an arbitrary body of abstract ideas, but it is a genuine attempt to better understand common human errors of belief and judgment. William James (1842–1910), a philosopher who is also regarded as the father of American psychology, called philosophy “a peculiarly stubborn effort to think clearly.”
Epistemology is a major sub-field of philosophy and is concerned with the nature of knowledge. Not the kind of casual knowledge we all form every day, but the understanding of what is actual and provable. For our purpose, it will be the difference between merely knowing and really knowing. To know something requires that you have a good reason, or warrant, for that belief. Having a warrant means both that you actually believe a thing to be true—you aren’t just saying it to stir up trouble—and that you have proof that it is true. It’s as though the philosophy cops were pounding on your door shouting, “We believe you are engaging in untruths!” and you get to say, “Yes, but do you have a warrant?” Unless they have proof you’re engaging in untruths, the philosophy cops have to stay outside. But, for the record, you might just as well open the door anyway, because philosophy cops are likely to break it down and kick your ass regardless.
Rationalism and Empiricism
Science and philosophy have more in common than you might think. Most science, like physics, math, and the logic that gave us computers, began as philosophical disciplines. Science has also had an impact on philosophy, affecting the way philosophers speak to each other. Two concepts in philosophy which describe how science works and how we gain knowledge are rationalism and empiricism.
Empiricism demands that we be certain of the accuracy of what we’re saying. Just because we think something and believe it doesn’t mean that it’s true. A strict empiricist would say that the warrant for a belief is more important than whether that belief was useful in explaining what has happened or in predicting things that will happen. Without that precise understanding of a situation, you can guess the right answer for the wrong reasons.