Frequently Asked Questions about CFAR

  1. What is rationality?
  2. What is applied rationality?
  3. Who’s to say that one person is more rational than another?
  4. Does rationality guarantee success at everything?
  5. How can rationality improve the world?
  6. Isn’t intuition really valuable, though?
  7. But you can’t rationally analyze every decision you make, you’d go crazy!
  8. Doesn’t acting rationally mean acting selfishly?
  9. Doesn’t being rational mean being emotionless?
  10. Yeah, but, sometimes being irrational is a good thing, right?
  11. But isn’t reality subjective? How can you claim there’s one version of truth?
  12. So do you guys think you’re perfectly rational?
  13. What has science learned about improving rationality?
  14. What are the hurdles to improving rationality?
  15. How can you measure rationality?
  16. What was the inspiration for CFAR?
  17. Who comes to your workshops?
  18. What kinds of things do you teach?
  19. How are you funded?
  20. Are you going to run workshops in my city soon?
  21. Where will the upcoming CFAR workshop in Europe be held?
  22. Where will the upcoming CFAR workshop in Boston be held?
  23. Are you trying to make rationality part of primary and secondary school curricula?

What is rationality?

In cognitive science, “rationality” means (1) forming accurate beliefs about the world and (2) taking effective actions to achieve your goals, given what information you have.

“Perfect rationality” is a theoretical ideal rather than a realistic goal, because our brains have limited computational resources and other constraints arising from our evolutionary history.

So what improvements can we make? Where to start? Well, we can train to overcome cognitive biases (the common mistakes that human brains make), like the tendency to underestimate how long things will take us, or to ignore evidence that doesn’t tell us what we want to hear. We can also improve our general tendency to take effective actions toward our goals by understanding how human motivation and attention work and then training to use them more effectively.

What is applied rationality?

The process of learning about how your brain actually thinks, and how to make your thinking more rational where it really counts, is what we at CFAR call “applied rationality.” Applied rationality means doing the best we can with the brains we’ve got, partly by learning from the science of how our brains actually work. It means training to change our minds in response to good arguments, by actively combating obstructions within ourselves like confirmation bias and feelings of defensiveness. It means reflecting on our behaviors and noticing patterns we can improve on so that we stop making the same mistakes again and again. It means taking time to regularly think about what we most care about, and how we can change our strategies in response to new evidence. And, critically, it means managing our attention, motivation, and stress levels so that we have the mental resources to actually make the changes we need.

Who’s to say that one person is more rational than another?

We try not to think in those terms, for a few reasons. One is that, like intelligence, rationality has many components. It’s more productive to focus on the specific aspects of your rationality you want to improve than to think “I’m not rational enough” or “that person is more rational than me.”

Does rationality guarantee success at everything?

Probably not, since there are some major factors influencing your success that aren’t under your control. Your genes, the evidence and resources available to you, and sheer luck will always play a big role in your success (or lack thereof). Rationality is a set of tools, not an ironclad guarantee. On the plus side, a rational person may have a slightly better idea what, exactly, they’re trying to succeed at.

How can rationality improve the world?

Rationality training doesn’t only benefit those who learn it – it’s a significant public good as well. If applied rationality training were more widespread, we’d demand that politicians be able to back up their claims with solid evidence, and we’d notice when they were misdirecting us by playing on our emotions. We wouldn’t overestimate risks of events like kidnapping or terrorism, simply because those events are vividly reported by the news and thus feel disproportionately salient. We’d actively seek out arguments against our current views, curious about how we might be wrong, and give those opposing arguments a fair shake. And we’d be able to think about the consequences of policies (like nuclear power, or GMOs, or immigration) separately from our strong feelings around those issues.

Many of our students are also interested in how to take proactive steps now to change the world for the better. Rationality is central to that process, giving you the tools to use in thinking about questions such as: In what career can I make the greatest marginal contribution? How can we improve the accuracy of scientific research? What research project could I tackle that would have the highest expected benefit for humanity?

Isn’t intuition really valuable, though?

Yes! Rationality means learning how to use both intuition and analysis. Both kinds of thinking are indispensable, but cognitive scientists have found that it’s crucial to know when to use which kind, because they each have their strengths and weaknesses.

Your intuition tends to be reliable in cases where you have a lot of practice (like a nurse developing intuition about when an infection is going to get worse, or a firefighter developing intuitions about when a building is going to collapse). It also tends to be reliable in cases that the human brain evolved to be good at — like picking up on social cues. You may not be able to pinpoint exactly why you think someone doesn’t like you, but that doesn’t mean your intuition isn’t picking up on very real evidence [see Kahneman & Klein (2009), “Conditions for Intuitive Expertise”]. Applied rationality helps you be aware of your intuitions and weight them appropriately.

In some cases, though, your intuition doesn’t work as well: situations where you don’t have a lot of experience, where you need to estimate probabilities, or where you’re emotionally fraught. Applied rationality helps you figure out when to make use of intuition and when to lean more heavily on analysis.

But you can’t rationally analyze every decision you make, you’d go crazy!

That’s true! Most decisions don’t matter very much (e.g., “Should I order the raspberry pie or the blueberry pie?”). And other things are easily handled by your brain’s autopilot or intuition — like deciding how to walk, or picking up on social cues.

So the important thing is noticing which decisions deliberation can really help with. These are usually higher-stakes issues, taking up a lot of time or money or involving some non-negligible risks. And as it turns out, these are often the very same decisions on which we’re most vulnerable to cognitive biases. We’re bad at intuitively grasping the difference between, say, one million and 1.5 million, so analytic reasoning can help us keep the facts in mind.

Doesn’t acting rationally mean acting selfishly?

No. Acting rationally means making choices that get you as much as possible of what you want – but “what you want” could be completely altruistic.

And in practice, it often is. Most people don’t care only about themselves – they also care about their families, friends, community, people they’ve never met before, people who aren’t even alive yet, or animals. And most people consider it worth spending at least some time, effort, or money helping those other beings.

However, some ways of helping people work much better than others. Some programs have been convincingly shown not to work, like this after-school program for underprivileged children. And even among programs that do work, some yield tens or hundreds of times more benefit than others. So if you want to help other people, rationality training means learning how to do many times more good per dollar, or per hour, than you did before.

Doesn’t being rational mean being emotionless?

No. This is a popular misconception (in fact, it’s got a name: the Straw Vulcan). Viewing the world accurately in no way prevents you from feeling joy, excitement, aesthetic appreciation, love, or any other positive emotion. Just ask Carl Sagan, or Neil deGrasse Tyson, or other advocates of a rational worldview who certainly don’t lack joy and excitement.

In fact, rationality can be a powerful tool to improve your emotional stability. Depression, anxiety, anger, envy, and other unpleasant and self-destructive emotions tend to be fueled by what some psychologists call “cognitive distortions.” Those are irrationalities in your thinking, such as jumping to conclusions based on limited evidence, focusing selectively on negatives, all-or-nothing thinking (“I can’t do anything right”), and blaming yourself or someone else without reason.

Cognitive therapists have a very well validated record[1] of making people happier and less anxious by getting them into the habit of noticing those distortions and asking themselves questions like, “What evidence do I have for this belief?” Outside of clinical practice, many people boost their happiness by keeping a gratitude journal. That’s a great example of applied rationality in action.

Yeah, but, sometimes being irrational is a good thing, right?

Occasionally. For example, you might feel happier if you irrationally believe that the world is better than it really is. The down side to that kind of irrationality, of course, is that it prevents you from taking action to improve things. Believing “Everyone likes me!” is certainly more pleasant than believing “Actually, I get on a lot of people’s nerves,” but if the latter is the truth, you’re cutting yourself off from plenty of valuable friendships, relationships and professional opportunities if you let yourself believe the former rather than learning to improve your style of interaction.

You can also reap social benefits from being irrationally overconfident in your own traits – like your intelligence, or attractiveness, or skills — because your resultant confidence can persuade other people to view you more positively.

Of course, you can also suffer consequences from that overconfidence. In several studies, subjects with especially high self-esteem chose to take on more difficult tasks, failed more often, and ended up with less money than others.[9] In another study, traders who believed that randomly-generated successes were actually due to their own skill did more poorly in their actual trades.[10] (See Robert Kurzban’s Why Everyone (Else) Is a Hypocrite[11] for a discussion of these and related studies.)

In other words, as Kurzban says, “…holding aside the social effects of decision making, if you have an overly high opinion of your skills, then as long as the judge is cold, hard reality, you’re worse off than if you simply had the correct opinion of your skills, even if they are modest.”[12]

But isn’t reality subjective? How can you claim there’s one version of truth?

Some questions are arguably subjective. For example, questions about whether a painting is beautiful or ugly, whether a certain person is “charmingly effervescent” or “annoyingly hyper,” or what an ideal life would entail. Those questions are subjective in the sense that, if you and I disagree about them, it’s not because we have mutually exclusive beliefs about how the world is, it’s because we have different reactions to the world. And there’s no “correct” reaction, so there’s no contradiction if our reactions differ.

People can also have different beliefs about the world, without being irrational, if they’ve each encountered different evidence. You might estimate that the average level of niceness in the world is higher than I estimate it to be, if you’ve grown up around nicer people than I have. In this case our estimates are mutually exclusive, so we can’t both be right — but each of us might still have arrived at our respective estimates given the evidence available to us.

But there’s still only one right answer on matters of fact, even if no one knows what it is. As Philip K. Dick said: “Reality is that which, when you stop believing in it, doesn’t go away.”

So do you guys think you’re perfectly rational?

Goodness, no. We became interested in rationality in large part because we wanted to become more rational, not because we thought we already were perfectly rational. We’re constantly looking for our own irrationalities, and trying out new approaches for making improvements.

For example, during the June workshop, Julia was talking with Cat about a presentation Cat had prepared to give. As it turned out, one of the students was an expert in that very topic, and had offered to give the presentation instead, if we wanted.

Cat: “She’ll almost certainly do a better job than I would, because she’s got more background in the subject.”
Julia: “That’s great, but you already put in a lot of work on your presentation, so you should give it.”
Cat: “Isn’t that a sunk cost, though? My preparation time’s already spent no matter what we do, so we should just decide which presentation’s going to give the best result.”
Julia: “Oh… you’re right!”

Even though we’re not perfectly rational and never will be, we have gotten better at spotting and acknowledging some of our own irrationalities, and more practiced at fixing them. So we have a lot of hard-won tips to pass on to other people who would like to improve their rationality too.

What has science learned about improving rationality?

While research on cognitive biases has been booming for decades, researchers have spent more time identifying biases than developing ways to counteract them.

There are a handful of simple techniques that have been repeatedly shown to help people make better decisions. “Consider the opposite” is a name for the habit of asking oneself, “Is there any reason why my initial view might be wrong?” That simple, general habit has been shown to be useful in combating a wide variety of biases, including overconfidence, hindsight bias, confirmation bias, and anchoring effects [see Arkes, 1991; Arkes, Faust, Guilmette, & Hart, 1988; Koehler, 1994; Koriat, Lichtenstein, & Fischhoff, 1980; Larrick, 2004; Mussweiler, Strack, & Pfeiffer, 2000].

Most of us sometimes fall prey to the planning fallacy, where we underestimate the amount of time it’s going to take us to complete a project. But one strategy that’s been shown to work, and which we teach in our workshops, is “reference class forecasting,” which entails asking yourself how long it’s taken you, or people you know, to complete similar tasks [see Buehler, Griffin, & Ross (2002)].
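
To make the idea concrete, here’s a minimal sketch in Python (our own illustration, not CFAR course material; the task list and numbers are invented). Instead of imagining how the new project “should” go, you forecast from how long similar past projects actually took:

```python
# A minimal, illustrative sketch of reference class forecasting.
# The data and function below are hypothetical examples.
from statistics import median, quantiles

def reference_class_forecast(past_durations_days):
    """Estimate a new task's duration from the durations of similar past tasks."""
    cut_points = quantiles(past_durations_days, n=10)   # decile cut points
    # Return a typical value plus a rough 80% range (10th to 90th percentile).
    return median(past_durations_days), (cut_points[0], cut_points[-1])

# How long my last few "one-week" projects really took, in days (made-up numbers).
past = [9, 12, 7, 15, 10, 8, 21]
typical, (low, high) = reference_class_forecast(past)
print(f"Typical: ~{typical} days; plausible range: roughly {low:.0f}-{high:.0f} days")
```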

A third technique with strong empirical backing (though it is not typically classified as “de-biasing” research) is cognitive therapy, which has successfully reduced participants’ depression and anxiety through rational thinking habits like asking oneself, “What evidence do I have for that assumption?” Cognitive therapy in particular is an encouraging demonstration that simple rational thinking techniques can become automatic and regularly used, to great effect.

What are the hurdles to improving rationality?

There are a few things scientists have discovered to be sticking points in their attempts to improve rationality.

One of the main reasons debiasing interventions fail to improve people’s rationality is what’s called the bias blind spot: simply put, people don’t believe they’re biased, even when they’re taught about scientific evidence of the bias in the general population. Sure, they think, that might apply to other people, but not to me!

One useful way to combat the bias blind spot is with a quick and striking demonstration to convince people that they, themselves, are biased. We regularly do this with the overconfidence bias, for example, by asking people a series of trivia questions and having them give an interval for each question within which they’re 90% confident the true answer falls. Nearly without fail, only 60% or fewer of the intervals turn out to contain the true answer – and that’s because we’re all overconfident in what we think we know.
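
As a concrete illustration of the scoring (not our actual quiz; the questions and numbers below are made up for the example), here’s a small Python sketch that checks how often a participant’s 90% intervals actually contain the truth:

```python
# Illustrative calibration scoring for the 90%-confidence-interval exercise.
# Each entry: (participant's low guess, high guess, true value).
answers = [
    (5000, 6000, 6650),   # length of the Nile, in km
    (1900, 1950, 1912),   # year the Titanic sank
    (300, 500, 384),      # distance to the Moon, in thousands of km
]

hits = sum(low <= truth <= high for low, high, truth in answers)
hit_rate = hits / len(answers)
print(f"Intervals containing the truth: {hit_rate:.0%} "
      "(well-calibrated 90% intervals should contain it about 90% of the time)")
```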

We’re also starting out with a leg up over previous studies, because we’re starting with people who are proactively trying to improve their rationality. If they were very resistant to the idea that they might be biased in some way, they wouldn’t have signed up for our workshops. (In the long run, we’d love to improve everyone’s rationality, but we’re starting with the low-hanging fruit!)

Another obstacle to debiasing is the problem of domain transfer: people successfully learn a principle in the classroom, but don’t think to apply it, or don’t see how to apply it, in other contexts [Willingham, 2007]. That’s why, for example, even statistically trained PhD students will fall for a basic statistical fallacy, like the conjunction fallacy, over one-third of the time [Tversky & Kahneman, 1983].

That’s why we have our students practice applying rational thinking habits on the same domain in which they want to be able to use those habits: real issues from their own lives. (By contrast, most debiasing interventions illustrated the principle with artificial examples that did not directly relate to participants’ lives.)

Finally, the third big obstacle to debiasing is the problem of getting the principles to stick, long-term. A general rule of education is that learning something once isn’t enough to retain it indefinitely – you need intermittent practice sessions or reminders. So, for example, Fong and Nisbett (1991) showed that students surveyed right after taking a statistics class were successful at employing statistical reasoning in a new domain, but significantly less so two weeks later. This is why cognitive therapy requires you to do regular “homework” until the rational thinking habits are, well, habitual.

One way we’re addressing the problem of sustained improvement is with the development of rationality apps that encourage participants to practice regularly. For example, to overcome overconfidence, we’re working on a “Prediction Game” that invites players to make predictions about events in their own near future, and state a corresponding confidence level. Over time, their success rate in various domains can be compared with the confidence they felt when making the prediction.
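
Here’s a rough sketch (our invented data and structure, not the actual app) of the kind of calibration check the Prediction Game is built around: log each prediction with a stated confidence, record whether it came true, and then compare stated confidence to the actual hit rate at each level:

```python
# Illustrative calibration tracking; the log entries are hypothetical.
from collections import defaultdict

# (stated confidence, did the prediction come true?)
log = [(0.9, True), (0.9, True), (0.9, False), (0.7, True),
       (0.7, False), (0.7, True), (0.6, False), (0.6, True)]

by_confidence = defaultdict(list)
for confidence, came_true in log:
    by_confidence[confidence].append(came_true)

for confidence in sorted(by_confidence, reverse=True):
    outcomes = by_confidence[confidence]
    actual = sum(outcomes) / len(outcomes)
    print(f"Stated {confidence:.0%} confidence -> right {actual:.0%} of the time "
          f"({len(outcomes)} predictions)")
```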

Even more important, however, is the creation of a community of aspiring rationalists. Being around other people who are discussing and practicing a skill is one of the best ways to maintain your motivation to practice it.

How can you measure rationality?

No one’s ever devised a “Rationality Quotient” akin to the “Intelligence Quotient” (IQ). But there’s no reason one couldn’t! Keith Stanovich has written, “There is now enough knowledge available so that we could, in theory, begin to assess rationality as systematically as we do IQ.”[15]

We already have many potential components of a rationality test, in the form of all the methods cognitive psychologists have devised to test for people’s susceptibility to various cognitive biases, like the anchoring effect, or halo effect, or overconfidence.

Scientists can also test for more general rational-thinking habits – like your tendency to use deliberative reasoning in situations where you haven’t been explicitly instructed to do so. Consider the question, “A bat and a ball cost $1.10 in total. The bat costs $1 more than the ball. How much does the ball cost?”[16]

Most people – including most students at top universities like MIT, Princeton and Harvard – get this simple question wrong.[17] Stanovich explains, “Many people offer the response that first comes to mind—10¢—without thinking further and realizing that this cannot be right. The bat would then have to cost $1.10 and the total cost would be $1.20 rather than the required $1.10.”[18] It’s a result that’s been replicated by multiple researchers, using other questions where relying on your first intuitive response leads to the wrong answer.
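
For readers who want the worked arithmetic behind the correct answer (our addition, not part of the quoted passage): if the ball costs x, the bat costs x + $1.00, so x + (x + 1.00) = 1.10, which gives x = $0.05. A quick check in Python:

```python
# Verifying the bat-and-ball arithmetic: a 5-cent ball works, a 10-cent ball doesn't.
ball = 0.05
bat = ball + 1.00
print(f"Ball ${ball:.2f} + bat ${bat:.2f} = ${ball + bat:.2f}")             # $1.10
print(f"A 10-cent ball would give a total of ${0.10 + (0.10 + 1.00):.2f}")  # $1.20, too much
```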

Scientists can also test your tendency to evaluate evidence objectively, despite whatever personal biases you might have. For example: when you’re asked to find flaws in an experiment, do you find more flaws if its conclusion contradicts your prior opinions? Klaczynski and others have run this kind of study and found very little correlation between this tendency, which they call “myside bias,” and intelligence [Klaczynski, 1997; Klaczynski & Lavallee, 2005; Klaczynski & Robinson, 2000].

What was the inspiration for CFAR?

Anna Salamon, Julia Galef, and Michael (“Valentine”) Smith founded CFAR in early 2012. We shared backgrounds in math, social science research, and philosophy, and a passion for improving future outcomes in the most high-impact way possible. We had all been following the surge of research on heuristics and biases over the last 40 years, and the surge of public interest in books about those topics over the last 10 years, and we had wondered why no one was taking that research and helping the interested public apply it to their own decisionmaking. Our backgrounds seemed ideally suited to taking on that task: Anna and Julia had spent years doing public communication about rationality, and Val was finishing a PhD in math education.

Who comes to your workshops?

It’s a diverse group from all over the world, ranging from high schoolers to people in their 50s and 60s. Entrepreneurs, students, software engineers, and scientists are the most common, but we’ve had plenty of representation from other fields too: linguists, teachers, artists, therapists, hedge fund managers, and doctors. Our admissions process primarily looks for people who are intellectually curious, friendly, and driven to create great new things not just in their own lives but for broader society.

What kinds of things do you teach?

A whole array of skills, habits and techniques. For example:

  • Noticing when the thing you’re doing isn’t helping you. For example, when you’re having the same argument over and over, or when you’re doing something out of habit.
  • How to make plans (for getting work done, learning a skill, etc.) that are more likely to succeed. For example, by simulating your plan in concrete detail, so you can notice points where it might fail (and work around them).
  • How to notice rationalizations, so you can actually fix problems rather than denying they exist.
  • How to have disagreements in which you’re working together to figure out the answer to a question, rather than fighting to “win.”
  • How to recognize feelings of defensiveness or anger that might be coloring your judgment.
  • How to do mental thought experiments to help you figure out what you want.

…and a lot more.

Here’s a sample schedule from one of our recent workshops.

How are you funded?

We’re committed to being as self-sustaining as possible, so we charge rates for our workshops commensurate with what it costs us to run them.

We are a 501(c)(3) nonprofit, however, so we’re able to receive tax-exempt donations. Donations are particularly important for enabling us to give scholarships to students, teachers, activists, and other people who are talented and motivated to change the world with rationality but don’t have the capital to pay their own way at a workshop. They’re also important for enabling us to do the empirical research to test out our curriculum and iteratively improve it, contributing to the scientific body of knowledge on rationality in the process.

Are you going to run workshops in my city soon?

Possibly! In 2013, we ran four-day workshops in Berkeley and New York, and we ran a beta test of a mobile, two-day workshop in Salt Lake City. In 2014, we ran four-day workshops in Berkeley, New York and Melbourne, and we’re investigating the feasibility of doing satellite workshops elsewhere.

Where will the upcoming CFAR workshop in Europe be held?

We don’t know yet, but we’re working on it!

Where will the upcoming CFAR workshop in Boston be held?

We don’t know yet, but we’re working on it!

Are you trying to make rationality part of primary and secondary school curricula?

We’d love to include decisionmaking training in early school curricula. It would be higher-impact than most other core pieces of the curriculum, both in helping students’ own futures and in making them responsible citizens of the USA and the world. Basic statistical thinking would be a crucial part of that curriculum change.

In 2012–2013, we partnered with UC Berkeley and Nobel Laureate Saul Perlmutter to develop a class titled “Sense & Sensibility & Science,” which gives Berkeley students cognitive science tools and habits to use for the rest of their lives.

At the moment, we don’t have the resources or political capital to change public school curricula, so it’s not a part of our near-term plans; instead, we’re running private workshops for high school students (SPARC) each summer. But if you or someone you know is interested in working with school systems and has the means to get started, get in touch with us!

[1] Dobson, K. S. (1989). A meta-analysis of the efficacy of cognitive therapy for depression. Journal of Consulting and Clinical Psychology, 57(3), 414-419.
[2] Arkes, H., & Blumer, C. (1985). The psychology of sunk cost. Organizational Behavior and Human Decision Processes, 35, 124–140.
[3] Harmon-Jones, E., & Harmon-Jones, C. (2007). Cognitive dissonance theory after 50 years of development. Zeitschrift für Sozialpsychologie, 38, 7-16.
[4] Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90(4), 293-315.
[5] Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux, p. 402.
[6] Tversky, A. & Kahneman, D. (1973). Availability: A Heuristic for Judging Frequency and Probability. Cognitive Psychology, 5 (2), 677-695.
[7] http://psy2.ucsd.edu/~mckenzie/nickersonConfirmationBias.pdf
[8] Latane, B., & Darley, J. (1969). Bystander “Apathy.” American Scientist, 57, 244-268.
[9] Baumeister, R.F., Heatherton, T.F., & Tice, D.M. (1993). When ego threats lead to self-regulation failure: Negative consequences of high self-esteem. Journal of Personality and Social Psychology, 64,141-156.
[10] Fenton-O’Creevy, M., Nicholson, N., Soane, E., & Willman, P. (2003). Trading on illusions: Unrealistic perceptions of control and trading performance. Journal of Occupational and Organisational Psychology, 76, 53-68.
[11] Kurzban, R. (2011). Why everyone (else) is a hypocrite: Evolution and the modular mind. Princeton, NJ: Princeton University Press.
[12] Kurzban, R. (2011), page 114
[13] Crandall, B., & Getchell-Reiter, K. (1993). Critical decision method: A technique for eliciting concrete assessment indicators from the “intuition” of NICU nurses. Advances in Nursing Sciences, 16(1), 42–51.
[14] Klein, G. A., Calderwood, R., & Clinton-Cirocco, A. (1986). Rapid decision making on the fireground. In Proceedings of the Human Factors and Ergonomics Society 30th Annual Meeting (Vol. 1, pp. 576 –580). Norwood, NJ: Ablex
[15] Stanovich, K. (2009). What Intelligence Tests Miss: The Psychology of Rational Thought. New Haven, Connecticut: Yale University Press. Page 4.
[16] Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive judgment. In T. Gilovich, D. Griffin & D. Kahneman (Eds.), Heuristics and Biases (pp. 49–81). New York: Cambridge University Press.
[17] Frederick, Shane. 2005. “Cognitive Reflection and Decision Making.” Journal of Economic Perspectives, 19(4): 25–42.
[18] Stanovich, K. E., & Stanovich, P. J. (2010). A framework for critical thinking, rational thinking, and intelligence. In D. Preiss & R. J. Sternberg (Eds.), Innovations in educational psychology: Perspectives on learning, teaching and human development (p. 219). New York: Springer.