AI Red-Teamer (English & Arabic)
Job Description
The safety and reliability of AI systems across diverse linguistic contexts are paramount. Your expertise in Arabic and English will directly contribute to identifying critical vulnerabilities and ensuring AI models are robust, fair, and culturally appropriate for millions of users. This role is crucial for building trust in next-generation AI.
Key Responsibilities
Proactively identify and exploit potential failure modes, biases, and harmful outputs in AI models across Arabic and English.
Design and execute adversarial prompts and test cases to uncover vulnerabilities related to cultural nuances, political sensitivities, and linguistic ambiguities in both languages.
Document detailed findings, including reproduction steps, impact assessment, and suggested remediation for identified safety issues.
Collaborate with AI researchers to refine testing methodologies for Arabic-English bilingual contexts.
Analyze AI responses for factual accuracy, coherence, and adherence to ethical guidelines in both languages.
Ideal Qualifications
Native or near-native fluency in both Modern Standard Arabic and English, with strong cultural understanding.
Demonstrated experience in red-teaming, penetration testing, or adversarial testing of software systems (AI experience a plus).
Familiarity with common AI safety concerns (e.g., hallucination, bias, toxicity, privacy violations).
Excellent analytical skills and ability to articulate complex issues clearly in written reports.
Experience with prompt engineering techniques and understanding of large language model (LLM) behavior.
Background in linguistics, cybersecurity, or ethical hacking is highly valued.
Project Timeline
Start Date: Immediate
Duration: Ongoing (minimum 3-month commitment)
Commitment: Flexible, 15-25 hours per week
Join us in shaping the future of safe and responsible AI for the Arabic-speaking world!