Job Description
The AI Security Institute is seeking a Cyber Security Researcher to join its Cyber Evaluations Team in London. The team is dedicated to understanding AI capabilities and risks, with a focus on security implications. The researcher will contribute to developing infrastructure for benchmarking AI capabilities in cybersecurity. The AI Security Institute combines the agility of a tech start-up with the expertise and mission-driven focus of government.
Role involves:
- Designing CTF-style challenges for evaluating AI systems.
- Advising ML research scientists on analyzing cyber capability evaluations.
- Writing reports and research papers.
- Evaluating general-purpose models with red-teaming automation tools.
- Staying updated with related research.
Requirements:
- Experience in cybersecurity red-teaming (e.g. penetration testing, cyber range design, CTFs, automated security testing tools, bug bounties).
- Ability to communicate cybersecurity research to both technical and non-technical audiences.
- Familiarity with cybersecurity tools such as Wireshark, Metasploit, or Ghidra.
- Software development skills in network engineering, secure application development, or binary analysis.
Role offers:
- Opportunity to improve the safety of AI systems.
- Engagement with the cybersecurity community.
- Flexible remote working arrangements.