MIRI Summer Fellows program 2016
The fellows program is designed for people interested in gaining the skills needed for research in the field of AI alignment – the problem of how to ensure, as artificial intelligence becomes more advanced, that its behavior is aligned with human interests – and other open problems that bear on the long-term impacts of artificial intelligence.
Applications are now closed.
The program will take place in the San Francisco Bay Area from June 19 to July 3, 2016, and will be offered free of charge for all selected participants. Some travel assistance may also be available.
An overview of the content:
- MIRI-led training in technical subjects relevant to AI alignment research.
- Discussions about the long-term effects of artificial intelligence on society: what should we expect, how much confidence should we have in our predictions, what should we do now, and so on.
- CFAR’s popular workshop on applied rationality. This is an immersive, four-day experience at a retreat center. You’ll work in small groups with instructors to learn how to tackle hard problems, make plans you’ll reliably follow through on, weigh high-level priorities, change ingrained habits, and more. (More information here.) Participants normally pay $3,900 to attend, but it’ll be free for fellows. You’ll also learn additional skills that go beyond what we typically teach at our workshops and are closer to how we train our mentors (talented alumni who return to help us teach classes at future workshops).
- A unique CFAR curriculum on advanced reasoning skills. We’ll cover epistemic skills that are less relevant to everyday decision-making and more relevant to tackling unprecedented global problems. For example:
- “How confident can we be in future predictions?”
- “How do you decide how much to update based on expert opinions (and which experts are most trustworthy)?”
- “What biases affect our ability to plan for the future?”
Ideally, applicants will have a math or computer science background and some pre-existing interest in the AI alignment problem. (The classes and discussion will be pitched at a sophisticated level, but we do expect to be able to bring you up to speed pretty quickly if you’re a talented beginner.)
Background reading on the AI alignment problem:
- The Future of Life Institute’s open letter on the importance of the alignment problem, signed by leaders in tech and computer science
- Artificial intelligence expert Prof. Stuart Russell’s short lecture on the alignment problem, delivered at this year’s World Economic Forum
- Prof. Stuart Russell’s collection of relevant links
- Oxford philosopher Nick Bostrom’s book Superintelligence