Mission

Mission Q&A

Q: I love rationality, but I don’t care about existential risk. Should I still come to a CFAR training?

A: Yes. Or at least, we’d love it if you apply. Our mission may be to reduce existential risk via rationality, but our method involves cultivating a robust rationality community that applies rationality to varied problems, nerds out about how the rationality techniques did or didn’t work in particular contexts, develops variants of the techniques, etc. (Many of our trainings are aimed at rationality hobbyists, with no mention of existential risk.)

Q: I care about existential risk, but I don’t care about rationality. Should I come to a CFAR training?

A: We think so. We think it is far easier to reduce existential risk if you train in rationality, and that CFAR workshops are relatively easy and pleasant ways to get started here.

Q: I’m totally uninterested in existential risk and in rationality training. Should I come to a CFAR training?

A: No.

“Actually trying to figure things out”

Some brief orientation:

  • We were founded in 2012.
  • We exist because:
    1. We care about the long-term future, and think the coming century may be pivotal, Nick Bostrom style; and
    2. We suspect thinking skills, dialogue skills, and the social fabric to support them may be key to making that pivot a good one.
  • We’re a 501(c)(3) nonprofit.
  • We’re about nine full-time people plus a bunch of other collaborators.
  • Our strategy is to focus on depth over quantity; that is, we provide high-quality training to a small number of people we think are in an unusually good position to help the world, rather than providing lower-quality training to a large number of people.
  • Mostly, we do this via immersive, small-group rationality workshops.
  • By “rationality”, we mean something like: “Actually trying to figure out what the world is made of, and how to successfully act on it, in collaboration with other people doing the same.”

We’re not what people expect. And it’s important to be clear and unhidden[1] if we’re going to get anywhere. Three basic things about us:

First: We see “original seeing” and “original making” as the key bottlenecks on reducing existential risk. We see “agreeing with us” as having zero value.

  • We don’t want to help people to say “correct” words about existential risk, if they can’t rederive where those words came from. We want people who can bring their own mental machinery to bear on the problem, so that their agreeing or disagreeing with us will have informational content.
  • We would rather build one thinker/doer who has enough context to act well and independently than fifty who know the right passwords (even if they try to act on their passwords!).
  • We suspect such context is best transferred via organic, one-on-one, intrinsically motivated conversation, and that methodologies such as double crux can help.
  • We suspect it’s also important for such thinkers/doers to:
    • Persistently seek to develop first-principles models (rather than socially imitated, local heuristics);
    • Always seek to trade evidence and reasoning methods, rather than to persuade;
    • Work to create a social fabric that is adequate to support joint truth-seeking and other cooperative endeavors.
  • We suspect that in the absence of such principles, outreach efforts can easily be worse than useless. We’re not trying to raise awareness; we’re trying to improve discernment.

Second: We believe that in working to reduce existential risk, there is no honest way to avoid the inside view. So we try to act fearlessly on ours.[2]

  • Physics is established: one can defer to existing authorities and get right answers about physics. Starting a well-run laundromat is also established, so ditto. Both fields have well-tested feedback loops that have validated their basic processes in ways third parties can see are valid.
  • Because these fields are established, you can in fact get the right answer just by copying existing knowledge.
  • AI safety and existential risk reduction are not established in this way. There are people trying research trajectories, outreach efforts, and so on; they are collecting and sharing data on what happens. But there is no well-validated overall framework for interpreting the data that come in.
  • One is therefore stuck making sense of the data (and of others’ opinions and arguments) from within a not-yet-validated inside view, in a context where others will disagree about which best guesses the data do or don’t warrant.
  • This is one reason why creating independent seers and makers seems so essential to us. The key work that will need to be done is work of seeing, making, and dialogue. We aim to create people who can help with this bottleneck; and we believe this requires original, organic, non-propaganda-produced orientation and thinking in our independent thinkers and doers.
  • Inside view is terrifying because it’s often wrong. But pretending to avoid inside view never, ever helps. Data can help, as can attempting to puzzle out the causes of disagreements. But these can’t replace our own judgment; they can only be inputs into it. And in a field like “let’s cause Earth to be more likely to make it through the century”, where we can’t exactly validate strategies as successfully impacting the final goal across 200 randomized earths… the question of how to interpret those inputs is actually hard.
  • It is of course also important to keep in mind that our inside view is almost certainly at least partially wrong, and to keep our eyes open for anomalies (or, better yet, seek tests). But the correct response to uncertainty is not half-speed; our goal is something like “clear opinions, weakly held, but fully acted on and tested”.[2]

See also: the virtue of humility; Zero to One; The Structure of Scientific Revolutions.

Third: We are focused specifically on existential win, and on the people, social fabric, and thinking skills that might most help with that. We see AI safety as especially key here.

  • Narrow focus is super useful.
  • Once we pick out AI safety, we have a nice narrow target from which to backchain. We can, for instance, ask whether there are people who’ve done work that seems likely to be useful for AI safety, who also say they wouldn’t’ve been able to do it without CFAR. We can then use such people as a feedback signal and try to get more of them.
  • We also run some trainings aimed at developing our rationality curriculum (rather than deploying it). And we sometimes run these trainings with people chosen for having different skills from the current AI safety crowd, so that we can learn those skills. We take seriously the claim that innovation comes partly via play or exploration.
  • However, all our workshops aim narrowly at: (a) AI safety directly; or (b) developing our rationality curriculum.

  1. To unpack the word “unhidden”: there are usually local incentive gradients toward hiding relevant information, so as not to be visibly wrong, not to be visibly against anything someone else is for, etc. It seems important to ignore those incentive gradients and refuse to hide if one wants to get anywhere, particularly if one wishes to do so in a high-integrity, sustainable way, in cooperation with others.

  2. Actually, this is an approximation. More exactly: with CFAR’s own resources, we try to act fearlessly on our inside view (“clear opinions, weakly held, but fully acted on and tested”). With the “commons” (e.g., the public image of AI-related existential risk), we try to defer to or cooperate with the consensus of others in this space. We like Bostrom’s article on the unilateralist’s curse.