Job Description
The AI Security Institute is seeking a Cognitive Scientist to join its alignment team in London. The team focuses on preventing AI models from autonomously causing harm. The Cognitive Scientist will contribute to the team's research agenda, design experiments and human studies, and supervise external research, working within a multi-disciplinary team to address urgent risks associated with AI.
Role involves:
- Developing and publishing a research agenda for cognitive science in alignment.
- Designing experiments and human studies.
- Supervising external research.
Requirements:
- Relevant cognitive science research experience.
- Broad knowledge of alignment approaches.
- Strong writing ability.
- Ability to work autonomously.
- Understanding of large language models.
- Experience working with multi-disciplinary teams.
Role offers:
- Opportunity to work on AI safety and alignment.
- Involvement in research and experimentation.
- Supervision of external research projects.
- Work in a multi-disciplinary team.