AI Safety Analyst
Job Description
Ensuring AI systems are safe, reliable, and beneficial is paramount to their widespread adoption. As an AI Safety Analyst, you will play a critical role in rigorously testing and evaluating AI models, identifying potential risks, and guiding their development toward ethical and robust outcomes.
Key Responsibilities
Design and execute comprehensive safety tests to identify harmful outputs, biases, and unintended behaviors in AI models.
Develop adversarial prompts and scenarios to probe AI systems for vulnerabilities related to misinformation, privacy, and security.
Categorize and document observed safety failures, including hallucination, toxicity, and discriminatory content generation.
Provide detailed, actionable feedback to AI developers on improving model robustness, alignment, and ethical performance.
Stay abreast of the latest research and methodologies in AI safety, interpretability, and responsible AI development.
Collaborate with red teamers and ethics reviewers to create a holistic safety assessment framework.
Ideal Qualifications
Strong analytical and critical thinking skills with a keen eye for detail.
Experience with prompt engineering and interacting with large language models (LLMs) or generative AI systems.
Familiarity with concepts in AI ethics, bias detection, and fairness metrics.
Ability to think creatively and anticipate unexpected failure modes in complex systems.
Excellent written communication skills for documenting findings and recommendations.
Background in computer science, philosophy, cognitive science, or a related field is a plus.
Project Timeline
Start Date: Immediate
Duration: Ongoing (Flexible, project-based)
Commitment: Part-time, 15-30 hours/week
Be the guardian of safe AI – join our dedicated team!