AI Red-Teamer (English & Korean)
Job Description
As AI models become increasingly sophisticated, their safe and ethical deployment in culturally rich linguistic contexts such as Korean is paramount. Your fluency in Korean and English will be vital for stress-testing AI systems, uncovering subtle biases, and preventing harmful outputs that could erode user trust and harm societal well-being. This is a high-impact role at the cutting edge of AI safety.
Key Responsibilities
Design and execute adversarial test cases to identify and probe vulnerabilities in AI models, with a specific focus on Korean-English bilingual interactions.
Uncover instances of cultural insensitivity, political bias, or inappropriate content generation within Korean linguistic contexts.
Provide detailed, actionable reports on identified safety issues, including linguistic analysis and cultural context.
Contribute to the development of robust red-teaming methodologies tailored for the unique complexities of the Korean language and culture.
Analyze AI responses for accuracy, coherence, and appropriate use of honorifics and social registers in Korean.
Ideal Qualifications
Native or near-native fluency in both Korean and English, with full Hangul literacy and a deep understanding of Korean culture and societal norms.
Experience in linguistic validation, content moderation, or security testing, ideally with exposure to AI/ML systems.
Exceptional analytical skills to deconstruct AI outputs and pinpoint subtle biases or potential safety risks.
Familiarity with common AI safety challenges (e.g., misinformation, toxicity, privacy, fairness).
Ability to meticulously document findings and communicate complex issues clearly.
Academic background in Korean studies, linguistics, or cybersecurity is highly regarded.
Project Timeline
Start Date: Within 1 week
Duration: Ongoing (minimum 3 months)
Commitment: Flexible, 15-25 hours per week
Compensation and location details to be discussed during the hiring process.
Join us in ensuring AI is safe, fair, and culturally aware for Korean speakers worldwide!