Job Description
Box is seeking a highly skilled and visionary Staff Security Engineer to lead the security strategy and implementation for Generative AI and Agentic AI technologies within Box's platform. This engineer will be instrumental in designing, developing, and operationalizing security controls that address the novel risks introduced by autonomous AI agents and generative models, and will drive strategic initiatives to leverage LLMs to enhance Box's secure development lifecycle. The role ensures that Box remains a trusted leader in AI-powered content management by embedding security-by-design principles into all AI features and tooling. The role involves:
  • Leading the design and implementation of security architectures specifically tailored for Generative AI and Agentic AI systems.
  • Developing threat modeling approaches adapted for dynamic, non-deterministic AI agent behaviors.
  • Building and integrating advanced security tooling and automation to detect, prevent, and respond to AI-specific vulnerabilities.
  • Spearheading the strategy for integrating LLMs into the secure development lifecycle.
  • Designing and implementing AI-powered security tools that can analyze code, identify potential vulnerabilities, and recommend secure coding patterns at scale.
  • Collaborating closely with product, engineering, and compliance teams to embed secure-by-default configurations and user consent checkpoints.
  • Driving continuous improvement of AI security posture by researching emerging attack vectors.
  • Mentoring and guiding other engineers on secure AI development practices.
Requirements:
  • Experienced security engineer with 5+ years in application security, DevSecOps, or security tooling, ideally with exposure to AI/ML security challenges.
  • Deep understanding of AI agent architectures, generative AI models, and associated security risks.
  • Proven track record implementing security tools and automation integrated into CI/CD pipelines at scale.
  • Experience with or strong interest in applying LLMs to security use cases.
  • Demonstrated ability to translate security requirements into practical AI applications that enhance the secure development lifecycle.
  • Skilled in threat modeling methodologies and able to adapt traditional frameworks to dynamic AI systems.
  • Proficient in at least one scripting language (e.g., Python) and familiar with multiple programming languages, cloud-native environments, and container security.
  • Strong communicator capable of articulating complex AI security concepts to both technical and non-technical stakeholders.
  • Passionate about cybersecurity innovation.
  • Growth mindset with a proactive approach to learning and problem-solving in fast-evolving technology landscapes.
Box offers:
  • Opportunity to lead the security strategy and implementation for Generative AI and Agentic AI technologies.
  • Chance to drive strategic initiatives to leverage LLMs to enhance the secure development lifecycle.
  • Collaboration with product, engineering, and compliance teams.
  • Mentoring and guidance opportunities for other engineers.

Box

Box is a leading provider of intelligent content management solutions. Its platform empowers organizations to collaborate effectively, manage content lifecycles, secure critical assets, and transform business workflows using enterprise AI. Founded in 2005, Box simplifies work for global organizations across various industries. The company's mission is to bring intelligence to content management, enabling customers to transform workflows across their organizations. Box is headquartered in Redwood City, CA, with offices across the United States, Europe, and Asia.
