  Proponents of the classical view of work might reply that even this view requires too much of employers and employees. The costs are too high for employers to make jobs meaningful. Also, as Adams points out, we don’t want our physician or pilot to get creative to spice up her job. Finally, if an employee is willing to accept greater pay for lower safety standards or worse treatment, shouldn’t she be allowed to make that trade-off? In an open and free economy, employees are free to quit one job and take another.

  The Kantian can reply to each of these concerns. First, the Kantian model is a moderate one, and hence the costs are not as high as they would be under the human fulfillment model. Following the moral minimum might even have the side effect of being profitable for companies. Adams has an economic theory that says happiness creates money. He believes that employees should not focus on making money directly; they should focus on their own happiness, and money will follow. Happy people work harder and better. Employers should focus on this too. The more casual dress and work environment standard in much of Silicon Valley is part of this approach, as are features such as sleep pods that allow for a rejuvenating nap.

  Second, doctors and pilots generally should not be creative, but we want them to be when unique bad scenarios occur. Adams maintains that there is good and bad creativity; squashing it all would get rid of the good as well as the bad. Third, it’s often difficult for employees to find another job. Economic conditions and family situations are two of the reasons that employees often cannot simply jump to another, more meaningful job. Finally, employees do need to be protected from bad decisions. The greater psychological salience of short-term benefits might cloud their ability to consider longer-term harms, and they are often simply unaware of potentially negative long-term consequences. Unsafe work conditions can take years to cause their resultant harms, and the first generation will suffer the consequences before the word gets out. The Kantian view stresses that paying someone a good salary doesn’t mean that the person can be treated as a mere means.

  Respecting a person involves recognizing their free will and their ability to create a meaningful life at work and beyond. These are courtesies generally missing in the world of Dilbert. The Pointy-Haired Boss and Wally treat each other terribly, and as Adams points out, this is probably because they don’t respect themselves. Treating others with respect seems to be a necessary condition for respecting yourself and living a meaningful life. When the Boss says, “Alice, you should act as if you’re your own boss,” Alice replies, “Okay. My hair is pointy and I am confused. Suddenly, I have no respect for myself. Must golf now.” The Boss says, “That’s so not funny.”

  Good Work

  So, what is good work? Good work respects people’s freedom and intelligence. Good work protects people’s capabilities. And good work pays a sufficient wage to allow for meaning outside of work. Work is essential for living a meaningful life. Adams reminds us in his books and in Dilbert how important it is to treat people with respect in the workplace. Through characters such as robots, humanlike animals, various bosses, vulnerable interns, Wally, Alice, and Dilbert himself, we’re reminded that all people deserve recognition and respect. Work must get done and orders must often be followed, but a moral minimum must be in place for trust and meaning to exist.

  Employees, then, should search for work that they love. They should seek work that inspires, and environments that promote their creativity and their worth as intelligent human beings. They should do the opposite of Wally, while keeping his insightful eye on corporate bullshit. Gini says that, ideally, we need to find our labor of love: the work we do not because we are paid for it but because of the satisfaction it provides.

  Pay is necessary to live, and meaningful work is necessary for really living.

  V

  Golden Age, Ready or Not

  15

  Bias Is Rational!

  RAY SCOTT PERCIVAL

  What a piece of work is man! How noble in reason! How infinite in faculty! In form and moving how express and admirable! In action how like an angel! In apprehension how like a god!

  —Hamlet, Act II, scene 2

  Scott Adams feels that he must convince us that people are irrational at least ninety percent of the time. People are under the influence of unconscious biases and prejudices. These biases are not reached by a process of intellectual justification, but are either installed by evolution, conditioned, or “spontaneously hallucinated.” He tells us that

  If we could accept that humans are fundamentally irrational, we could program ourselves for higher levels of happiness and productivity than we currently enjoy. (Posted on Adams’s Blog, June 10th 2010, in #General Nonsense)

  What, according to Adams, are the main biases?

  A good general rule is that people are more influenced by visual persuasion, emotion, repetition, and simplicity than they are by details and facts. (Win Bigly, p. 25)

  Adams’s view of biases is part of his Moist Robot Hypothesis: humans are living creatures without free will, determined by certain stimuli and a set of biases, not by truth or logic. Simplicity, repetitive messages, flags, monuments, and emotionally stirring rallies dominate the masses.

  But as we’ll see, the Moist Robot Hypothesis is wrong. The biases of visual propaganda, emotion, repetition, and simplicity are perfectly rational. You’ll also be relieved to know that you’re not a robot.

  The Science of Bias

  Adams is heavily influenced by the modern science of biases. The theory that people are fundamentally irrational has dominated Western thinking for over one hundred years, and it is continually restated afresh by popular writers. A recent major advocate of this popular theory is Daniel Kahneman.

  Kahneman is currently the king of bias research. His best-selling book Thinking, Fast and Slow portrays people as unwittingly under the sway of biases, challenging the assumption often made by economists that people make rational choices. Kahneman’s experimental results are fascinating. His interpretation of these results is that we have two cognitive systems: an inaccurate, fast one (System 1), and a slow, effortful, more accurate, reflective one (System 2). Kahneman, most of the time, is careful in his presentation and is reserved about calling people “irrational,” but those who refer to his work are often less cautious.

  Typically, popularizers of bias research will set up an unrealistic, godlike idea of what it is to be rational, such as acting in the light of all relevant information, or the known optimal amount of data, or being perfectly logically coherent, or ignoring irrelevant information—and then celebrate how stupid we all are by contrast.

  For a flavor of this approach, take a look at some of the titles of the myriad books published in this vein: Predictably Irrational: The Hidden Forces that Shape Our Decisions (Ariely, 2009); Sway: The Irresistible Pull of Irrational Behavior (Brafman and Brafman, 2008); Kluge: The Haphazard Construction of the Human Mind (Marcus, 2008). Evidently people just love being told how idiotic they are by psychologists, philosophers, and journalists.

  Research into bias ought to be applauded. However, most bias researchers, in their conclusions, like to stress that we don’t arrive at our conclusions by reasoned argument from “relevant information” (System 2), but instead that we are under the unconscious influence of our biases and prejudices (System 1).

  One type of supposedly irrational bias is the so-called “anchor” phenomenon, most famously utilized by Donald Trump. Someone enters a negotiation with a figure (any figure). The subsequent bargaining will typically gravitate toward that figure, even though no one has propounded a logically reasoned justification for it. The anchor is merely adduced. It may even be influenced by what the researchers call “irrelevant information.” For example, if a potential buyer of an expensive yacht is exposed to an earlier conversation, on a completely different (irrelevant) topic, involving very high prices, that buyer will be disposed to accept a higher price for the yacht than they would otherwise. Typically, this is seen as showing that people are irrational and closed to, or at least disproportionately insensitive to, argument. Adams shares this irrationalist view:

  An anchor is a thought that influences people toward a persuader’s preferred outcome. For example, a big opening demand in a negotiation will form a mental anchor that will bias negotiations toward that high offer. (Win Bigly, p. 27)

  Adams explains that this is due to a more general inclination of the mind to accept the first position it encounters. He also says that this then becomes almost impervious to argument:

  The human brain forms a bias for the things it hears first. If we accept the thing we hear first, it tends to harden into an irrational belief. And then it is difficult to dislodge. If your friends are reinforcing the idea too, it becomes hard as steel. (Win Bigly, p. 111)

  Biases Are Heuristics

  Not all psychologists are believers in the irrationality of biases. Gerd Gigerenzer, director of the Center for Adaptive Behavior and Cognition (ABC) at the Max Planck Institute for Human Development, is one of Kahneman’s main critics. Gigerenzer argues that biases are useful rules of thumb—heuristics. When you are confronted by an avalanche of information, a heuristic is a simple tool of thought that gets you to a solution faster and more efficiently than a more reflective, fuller calculation might. Sometimes less is more. A heuristic can be either conscious or unconscious.

  Contrary to Kahneman, Gigerenzer emphasizes that often your intuitive thoughts (System 1) are both faster and more accurate. Think of the problem faced by a baseball player trying to catch a ball. The full calculation of the ball’s trajectory from hit to landing point would outstrip the capacity of contemporary supercomputers. But baseball players don’t do it that way. To catch a ball that’s already in flight, they follow this rule of thumb: fix your gaze on the ball, start running, and adjust your speed so that the angle of gaze remains constant. You can see that players aren’t calculating the full trajectory of the ball, which would include where the ball will land, because players often run into walls in pursuit of the ball. Apparently, many players apply this heuristic unconsciously, while others are aware of it and can formulate it.
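
  To see how little machinery the rule needs, here is a toy one-dimensional simulation of it in Python. Everything in the sketch (the function names, the numbers, the drag-free physics) is my own simplified assumption, not anything from Gigerenzer; the point is only that the simulated fielder never computes a trajectory, it just nudges its running speed so that its gaze angle to the ball stays constant.

    import math

    DT = 0.05    # simulation time step, seconds
    G = 9.81     # gravitational acceleration, m/s^2

    def simulate_catch(ball_vx=18.0, ball_vy=16.0, fielder_x=50.0, gain=6.0):
        """Run one fly ball; return how far the fielder is from the landing spot."""
        ball_x, ball_h = 0.0, 1.0       # ball's horizontal position and height
        fielder_speed = 0.0             # positive means retreating from the batter
        gaze = math.atan2(ball_h, fielder_x - ball_x)
        while ball_h > 0.0:
            # Ballistic flight of the ball (no wind or drag in this toy model).
            ball_x += ball_vx * DT
            ball_vy -= G * DT
            ball_h += ball_vy * DT
            # The heuristic itself: cancel any change in the gaze angle by
            # adjusting running speed. No trajectory is ever computed.
            new_gaze = math.atan2(ball_h, fielder_x - ball_x)
            fielder_speed += gain * (new_gaze - gaze)
            fielder_x += fielder_speed * DT
            gaze = new_gaze
        return abs(ball_x - fielder_x)

    print(f"fielder finishes {simulate_catch():.1f} m from where the ball lands")

  The control loop is three lines; the physics it replaces is everything it doesn’t have to know.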

  Gigerenzer has discovered a plethora of these rules of thumb that we tailor to the right circumstances. Another bias is the so-called “recognition heuristic.” Suppose you’re asked which city has the larger population: Detroit or Milwaukee. Gigerenzer found that ignorance is an advantage here. When he asked a class of American college students this question, forty percent got the answer right. But when he asked an equivalent class of German students, who knew next to nothing about these cities, all of them got the right answer: Detroit. The German students, undistracted by lots of details about the two cities, used the intuition that if you recognize the name of only one of two cities, that one is likely to have the larger population.
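
  The rule is simple enough to state in a few lines of code. This sketch is my own illustration, with a made-up set of “recognized” names standing in for a German student’s partial knowledge:

    recognized = {"Detroit", "New York", "Chicago"}   # names the guesser happens to know

    def guess_larger(city_a, city_b):
        """Apply the recognition heuristic to a which-city-is-bigger question."""
        a_known = city_a in recognized
        b_known = city_b in recognized
        if a_known and not b_known:
            return city_a                 # recognize exactly one name: pick it
        if b_known and not a_known:
            return city_b
        return None                       # rule is silent; fall back to other cues

    print(guess_larger("Detroit", "Milwaukee"))   # -> Detroit

  Notice that the rule fires only when exactly one name is recognized. An American student, recognizing both cities, cannot use it at all, which is precisely why more knowledge produced worse answers in Gigerenzer’s classes.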

  Biases can be adaptive to our circumstances. Gigerenzer calls this “ecological rationality.” Gigerenzer’s work strongly suggests that biases are things we can work with, despite their using much less than the theoretical maximum amount of information. It was accepted for a long time that there is always a trade-off between speed and accuracy of solution. That’s why Kahneman’s System 1 is often thought to be fast but inaccurate. However, as Gigerenzer has shown, you can get both speed and accuracy from a simple heuristic, an unconscious bias. In addition, when faced by a complex problem, often a simple approach is better: a complex problem does not always require a complex solution.

  Adams does, now and then, hint that we have to work with our biases, but he also, confusingly, asserts that they are irrational. Are we to surmise that perhaps this is Adams’s anchor maneuver on rationality: come in with the outrageous claim that we are all ninety-percent irrational, wait for the attention, then soften up on the claim?

  Rationality is not a matter of having no biases, nor is it a matter of arriving at your position by a process of justification. You are rational because your experience modifies your biases, or at least how you manage them. Far from being a hindrance, your biases and prejudices are vital to the process of improving your knowledge and adapting to your circumstances. Because biases are just a starting point, it doesn’t matter where or how you got them. Therefore, even if you start a negotiation for the price of a yacht with a higher price than you otherwise would have done because you happened to hear a conversation involving very high prices, this does not stop you adapting your bargaining as you learn more.
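
  As a toy illustration of that last point (my own construction, not an argument from the chapter), imagine an anchored estimate that is repeatedly revised toward observed evidence. After enough feedback, it hardly matters where the anchor started:

    def adapt(anchor, observations, learning_rate=0.3):
        """Revise an anchored estimate toward each observation in turn."""
        estimate = anchor
        for observed in observations:
            estimate += learning_rate * (observed - estimate)
        return estimate

    # Twenty hypothetical quotes for comparable yachts, hovering around 100,000.
    quotes = [100_000, 105_000, 98_000, 102_000] * 5

    print(round(adapt(500_000, quotes)))   # start from an absurdly high anchor
    print(round(adapt(10_000, quotes)))    # start from an absurdly low anchor
    # Both runs converge to about the same figure near the market level.

  The anchor fixes where the estimate begins; the evidence decides where it ends up.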

  Being Rational Encompasses Error

  But what, in general, is it to be rational? What is it to be irrational? Surprisingly, for someone who uses these terms so frequently, Adams gives us no general account of rationality or irrationality. I’m not going to monopolize these words. But just so that you know where I’m coming from, I’d suggest, as a start, that we use “rational” as short-hand for having a propensity to adapt what you more or less guess are your means to your goals, abandoning what you conjecture to be unfeasible or uneconomic. These guesses can be conscious or unconscious. Being irrational would be failing to adapt your means or ends in response to any amount of counterevidence.

  I think that people are rational in this sense. Excepting brain damage, genetic abnormalities, illness, and extreme physical obstacles, it’s simply a myth that there are people who are absolutely impervious to adaptation under all circumstances. (The issue is more fully explored in my book The Myth of the Closed Mind.)

  My sense of “rational” is closer to Gigerenzer’s than it is to Scott Adams’s or Daniel Kahneman’s conception. It is rationality for mortals, down to Earth. It does not require perfect coherence or stability of preferences (although these may be goals for some people) or selfishness (a person’s goals may be altruistic) or the absence of ignorance (perfect knowledge), or the absence of bias. However, unlike Gigerenzer, I lay fundamental stress on the conjectural nature of our grappling, stumbling attempts to master and understand the world, including the internal world of our own minds.

  Rationality can encompass ignorance, error, and logical incoherence because rationality is a propensity to improve. Therefore, focusing on how ignorant and incoherent people are at any given point may cause us to overlook their ability to develop and grow.

  In contrast, Adams implies that most of us are impervious to the facts, as his definition of anchoring suggests. There is a tradition of thought, of which Adams is part, which lays down impossible standards for rationality, and by these standards, it’s easy to come up with examples of irrationality. Bias research has in many instances been swept along by this tradition of thought.

  Knowing Newton’s Ocean

  My counterclaim is that people, despite appearances, are prone to be rational in possibly the most important sense. We have a propensity to produce ideas and beliefs spontaneously and abandon our beliefs in the light of contrary evidence. This idea of rationality can encompass ignorance, error, and bias. How?

  Gigerenzer has shown that biases are a way of adapting to the demands of your specific environment, in some cases providing accurate solutions to complex problems with simple heuristics while remaining ignorant of much of the relevant information. In the example of the recognition heuristic applied to city populations, the rule works only because you are mostly ignorant of the cities. Gigerenzer’s work shows how we cope in domains whose boundaries we can at least set: a baseball park, guesses about city populations, and so forth.

  I go a little further than Gigerenzer: rationality can encompass bias, ignorance, and error in the domain of absolute uncertainty, where we can’t even define the likely boundary of the realm of the unknown. This is right up Scott Adams’s street: beyond the edge of the parochial “known” to the unfathomable reality that Adams claims we are not equipped to even hint at. However, in contrast to Adams, I’m suggesting that we can make fruitful forays into the unknown, even if fully exploring the unknown is an infinite task.

  We’re all governed by unjustified biases, we’re infinitely ignorant, and we’re always prone to systematic errors. However, we can correct our mistakes and even arrange circumstances to adjust for our biases. That’s why we can then make indefinite progress in our knowledge, technology, and civilization. That’s the real and defensible meaning of the Shakespeare quotation.

  Adams, along with much bias research, underestimates the problem of knowing the world, an economic problem that, like all economic problems, takes time. Adams himself continually reminds us how much training and time it takes to become a Master Persuader or a trained hypnotist. Nevertheless, elsewhere, Adams dramatizes error by saying that we frequently “hallucinate” things. But making mistakes, even systematically, is not irrational. Everyone confronts a world that is not just mostly unknown to them but infinitely beyond their grasp. Anyone’s grasp! Forever!

  As Isaac Newton put it:

  I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the seashore and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me. (Brewster, Memoirs, Volume II, Chapter 27)

  Being rational in such a world, then, has to be a more modest, but powerful, propensity: trial and error, being open to changing our beliefs in response to the discovery of error.

  Rationality for humans cannot consist of acting without bias on all the relevant or even the known optimal amount of information. Only a god could do that. Adams is right there: we’re not gods. However, Newton wasn’t suggesting that we’re confined to the beach. I think he’s better understood as suggesting that the business of knowing the world is an infinite task. You can sail out into Newton’s unknown ocean by trial and error, eliminating your errors as you go, replacing them by better trials, better ideas.