AI Red-Teamer (English & Hindi)
Job Description
The responsible deployment of AI in linguistically diverse regions like India requires meticulous safety validation. Your proficiency in Hindi and English will be crucial in identifying and mitigating biases, cultural misinterpretations, and harmful outputs in AI systems, ensuring they are safe and effective for millions of Hindi speakers. This role directly contributes to equitable AI development.
Key Responsibilities
Proactively identify and exploit potential failure modes, biases, and harmful outputs in AI models across Hindi and English.
Design and execute adversarial prompts and test cases to uncover vulnerabilities related to cultural nuances, regional sensitivities, and linguistic ambiguities in both languages.
Document detailed findings, including reproduction steps, impact assessment, and suggested remediation for identified safety issues.
Collaborate with AI researchers to refine testing methodologies for Hindi-English bilingual contexts.
Analyze AI responses for factual accuracy, coherence, and adherence to ethical guidelines, especially concerning Indian cultural contexts.
Ideal Qualifications
Native or near-native fluency in Hindi (Devanagari script) and English, with strong cultural understanding of India.
Demonstrated experience in linguistic quality assurance, content moderation, or adversarial testing of software systems (AI experience a plus).
Familiarity with common AI safety concerns (e.g., hallucination, bias, toxicity, privacy violations).
Excellent analytical skills and ability to articulate complex issues clearly in written reports.
Experience with prompt engineering techniques and understanding of large language model (LLM) behavior.
Background in linguistics, Indian studies, or ethical hacking is highly valued.
Project Timeline
Start Date: Immediate
Duration: Ongoing (minimum 3 months commitment)
Commitment: Flexible, 15-25 hours per week
Join us in shaping the future of safe and responsible AI for the Hindi-speaking world!