Job Description
The AI Security Institute is seeking a Research Manager to lead its Alignment team in London. This team focuses on preventing AI models from autonomously causing harm. The role involves managing a team of experts, including theorists and ML research scientists, and fostering a strong learning culture. The Research Manager will also contribute to sourcing talent, providing mentorship, and guiding research directions.
Role involves:
- People management of a team of researchers.
- Growing the team by sourcing and recruiting top talent.
- Providing management, feedback, and coaching to team members.
- Conducting foundational research on AI safety.
- Breaking down the alignment problem through safety case sketches.
- Making funding recommendations for external research projects.
Requirements:
- Experience managing research or engineering teams.
- Strong understanding of large language models.
- Broad knowledge of alignment approaches.
- Strong communication skills.
- Track record of helping teams achieve exceptional results.
- Experience working with multi-disciplinary teams.
- Strong academic record.
Role offers:
- Mentorship and coaching from research directors.
- Autonomy to pursue exciting research directions.
- Opportunity to work with world-leading experts.
- Pension options, with details available on the Civil Service website.