Overview
- 1–2 years of experience in GenAI evaluation testing.
- 2+ years of hands-on Playwright experience (JavaScript/TypeScript).
- 6+ years of total experience in software quality engineering/testing.
- Leverage AI tools (e.g., Cursor, AI copilots) to accelerate test creation, generate rules/checks dynamically, and improve coverage.
- Evaluate AI-generated content for clarity, factual accuracy, inclusivity, and bias minimization; build reusable eval test cases for generative features.
- Advocate for privacy, security, and transparency in AI involvement, in line with ethical AI best practices.
- Experience with AI evaluation metrics and pipelines (semantic similarity, bias/toxicity detection, hallucination checks).
- Comfort with AI productivity tools (e.g., Cursor, Playwright MCP servers, AI copilots) to accelerate testing workflows.
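To illustrate the kind of reusable eval test case mentioned above, here is a minimal TypeScript sketch of a semantic-similarity check. It assumes embeddings for the generated and reference answers are already available as number arrays (e.g., from an embedding model); the function names and the 0.8 threshold are illustrative assumptions, not part of the role description.

```typescript
// Hypothetical semantic-similarity eval check for AI-generated content.
// Assumes embeddings are precomputed number arrays of equal length.

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  // Cosine of the angle between the two embedding vectors, in [-1, 1].
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// A reusable eval case: pass only if the generated answer's embedding
// stays close to the reference answer's embedding (illustrative threshold).
function passesSimilarityCheck(
  generated: number[],
  reference: number[],
  threshold: number = 0.8
): boolean {
  return cosineSimilarity(generated, reference) >= threshold;
}
```

Such a check could be wrapped in a Playwright test that exercises a generative feature end to end and asserts on the similarity score rather than on exact text, which is more robust against non-deterministic model output.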
Desired Candidate Profile