Nearly every major “victory” that humanity has experienced over the past ten thousand years—from agriculture to the eradication of smallpox—has come from human intelligence.
Nearly every major problem that humanity will face over the next century—from bioengineered weapons to AI to climate change—will also be the result of human intelligence.
As our species builds up a greater foundation of knowledge and develops ever-more-powerful tools, the stakes are only going up. Compare the best and the worst that a single human could accomplish with the resources available in the year 1018 to the best and the worst that a single human could accomplish in 1918, or to the best and the worst that could be accomplished in 2008. Over the coming decades, our ability to make the world a better place is going to rise meteorically—along with our ability to make disastrous mistakes.
And yet, human intelligence itself remains demonstrably imperfect and largely mysterious. We suffer from biases that still influence us even after we know they’re there. We make mistakes that we’ve made a dozen times before. We jump to conclusions, make overconfident predictions, develop giant blindspots around ego and identity and social pressure, fail to follow through on our goals, turn opportunities for collaboration into antagonistic zero-sum games—and those are just the mistakes we notice.
Sometimes we manage to catch these mistakes before they happen—how? Some people manage to reliably avoid some of these failure modes—how? Where does good thinking come from? Good research? Good debate? Innovation? Attention to detail? Motivation? How does one draw the appropriate balance between skepticism and credulity, or deliberation and execution, or self-discipline and self-sympathy? How does one balance happiness against productivity, or the exploitation of known good strategies against the need to explore and find the next big breakthrough? What are the blindspots that cause humans—even unusually moral and capable ones—to overlook giant, glaring problems, or to respond inappropriately or ineffectively to those problems, once recognized?
CFAR aims to create an arena where rationality geeks can come trade notes about how our minds work, and about ways we're intentionally messing with how our minds work in a manner that seems to be making things better. We think minds, like most things, can sometimes be made better by thinking about them, trading ideas, playing around, etc., in a context of fun, freedom, and accurate observation of which effects seem to follow from which experiments.
Nuts & Bolts
CFAR is a 501(c)(3) non-profit operating mostly remotely, originally founded in 2012 by Anna Salamon, Julia Galef, Valentine Smith, and Andrew Critch. As of September 2025, we have eight very-part-time curriculum developers / instructors, and some part-time people who do accounting and venue maintenance for us. We at one point had roughly a dozen full-time staff, but we are currently operating under the theory that very-part-time rationality development work is healthy for all involved, while full-time work is more prone to going off the rails. We are low-key looking for additional very-part-time rationality developers, but are being extremely picky. Most of our curriculum development work consists of trying things out on volunteers and on each other (partly at weekly online test sessions), and of running workshops.