Internship: Future of Humanity Institute at the University of Oxford
The Future of Humanity Institute at the University of Oxford is seeking interns for a paid internship to contribute to its work in technical AI safety. Internships last 2.5 months or longer and will begin in or after January 2020, on a rolling basis. Examples of research areas to which you may be able to contribute include Learning the Preferences of Ignorant, Inconsistent Agents; Safe Reinforcement Learning via Human Intervention; Deep RL from Human Preferences; and the Building Blocks of Interpretability. Past interns have collaborated with FHI researchers on a range of publications.
The Future of Humanity Institute conducts multidisciplinary research, bringing together individuals from academia who use mathematics, philosophy, and social sciences to tackle questions about the future of humanity, under the leadership of Founding Director Professor Nick Bostrom.
Successful applicants will be expected to have research experience in machine learning, computer science, or a related field (statistics, mathematics, physics, cognitive science). A successful applicant would typically be enrolled in a CS graduate program, hold a technical PhD, or have published work related to AI safety.
To apply, submit a CV and a short statement of interest (including relevant experience in machine learning, computer science, and programming) via this form. You will also be asked to indicate when you would be available to start your internship and whether you give permission to share your application materials with partner organisations. Please direct questions about the application process to Ryan Carey.