Workshop on AI Safety Strategy

 
Program purpose
WAISS will allow 12 selected participants, people who take AI risk seriously but who may not yet be translating their weeks into substantial forward progress on these issues, to:

  • (1) Choose what personal path they wish to pursue toward impacting AI risk; and
  • (2) Acquire the skills, motivation, orientation, network, etc. that they need to sprint effectively on their chosen path.

 
Program format
WAISS participants will begin by attending the May 18-22 CFAR workshop. (You may wish to read a sample schedule, testimonials, or general workshop information.)
 

After a few days’ break, WAISS participants will go on to the WAISS program proper, which runs from May 29 to June 5. Days will be packed with a combination of formal units (taught sometimes in the group of 12, and sometimes in small breakout sessions) and structured small-group conversations and brainstorming.
 

The program is free to participants, including room, board, and occasional flight scholarships.

 
Curriculum
WAISS participants will acquire:

  • The skills taught in our mainline workshops;
  • A deeper fluency with those same applied rationality skills, including the ability to improvise new rationality techniques when needed;
  • Forecasting skills, and Fermi skills, along the lines of those described in Superforecasting and How to Measure Anything (including the practical application of Bayes’ theorem; calibration and Fermi practice; practice breaking big questions into smaller subquestions; practice with the “Bayesian entanglement game”, in which hard-to-observe unknowns are linked up with easier-to-observe unknowns that players can use to revise their bets; etc.);
  • Background knowledge of what efforts are currently being attempted around AI risk, and the views and contact information of the people involved; and
  • Practice modeling the skills needed for difficult, unfamiliar tasks, noticing what gaps are present in their own skillsets vis-à-vis these tasks, and charting a path to close these gaps.
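As a small illustration of the flavor of these exercises (the scenario and all numbers below are invented for illustration, not taken from the program), a single Bayes update and a Fermi decomposition might be sketched as:

```python
# Hypothetical illustration of two curriculum exercises named above.
# All quantities are made up for the sake of the example.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) via Bayes' theorem."""
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1 - prior)
    return numerator / denominator

def fermi_estimate(factors):
    """Multiply independent sub-estimates together, mirroring the
    practice of breaking a big question into smaller subquestions."""
    result = 1.0
    for f in factors:
        result *= f
    return result

# Bayes: start at a 10% prior; observe evidence that is four times
# as likely under the hypothesis as under its negation.
posterior = bayes_update(0.10, 0.8, 0.2)

# Fermi: a classic "how many X in a city" decomposition, e.g.
# population * households-with-a-piano share * 1 / people-per-household.
pianos = fermi_estimate([1_000_000, 0.02, 1 / 2.5])
```

The point of the exercise is not the arithmetic but the habit of decomposing a hard question into estimable pieces and updating numerically on evidence.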

 

Most activities will be open-ended, such that participants with differing levels of initial background can adapt the activity to further their own learning.

 

In addition, participants will have structured time to think through which paths they may wish to pursue; what knowledge, contacts, “cheap experiments”, and so on might help with this decision; and what next actions can further their chosen path.
 

Background reading
Before attending the program (but not necessarily before applying), participants will familiarize themselves with the contents of Nick Bostrom’s book Superintelligence, and with Rationality: AI to Zombies (aka the old Less Wrong Sequences).
 

Applications are now closed.