Overview
If you've spent years testing AI/ML models and understand how they fail, drift, and get exploited, keep reading. If your experience is limited to manual testing or conventional automation without AI/ML exposure, this one's not for you.
We're looking for an AI/ML Test Engineer who thinks like an adversary. Someone who can spot bias, catch drift, and figure out how systems can be gamed before bad actors do.
What You'll Do
Validate AI/ML models for accuracy, performance, and fairness
Test for bias, explainability, and transparency issues
Design edge-case, adversarial, and drift-detection tests (see the sketch after this list)
Identify abuse scenarios (gaming recommendations, exploiting scoring gaps, biasing fraud models)
Automate AI/ML testing and integrate it into CI/CD pipelines
Communicate results to data scientists, engineers, and business teams
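
For a flavor of the work, here's a minimal sketch of the kind of drift check you'd design and automate, written as a pytest test over a single numeric feature. The feature values, sample size, and significance threshold are illustrative assumptions, not a description of our stack; real checks compare a training baseline against recent production data.

```python
# Illustrative sketch only: a data-drift check on one numeric feature.
# The synthetic data, sample size, and alpha threshold are hypothetical.
import numpy as np
from scipy.stats import ks_2samp


def feature_has_drifted(baseline, current, alpha=0.01):
    """Flag drift when a two-sample KS test rejects 'same distribution'."""
    result = ks_2samp(baseline, current)
    return result.pvalue < alpha


def test_drift_check_flags_shifted_feature():
    rng = np.random.default_rng(42)
    baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)
    shifted = baseline + 0.5  # simulated shift in production traffic

    assert feature_has_drifted(baseline, shifted)          # drift caught
    assert not feature_has_drifted(baseline, baseline)     # identical data passes
```

Checks like this run in CI/CD alongside conventional test suites, so drift gets flagged before it reaches users.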
What You'll Need
5+ years in Software Testing/QA
2+ years of hands-on experience testing AI/ML models
Experience with NLP systems, recommendation engines, fraud detection, or scoring models
Strong skills in validating model accuracy, detecting bias, and assessing explainability
Proficiency with PyTest, TensorFlow Model Analysis, MLflow, and Jupyter Notebooks
Experience with CI/CD integration for AI/ML tests
Clear communication skills for technical and non-technical audiences
Nice to have: ISTQB AI Testing, TensorFlow, or Cloud AI certifications
Why Codelynks?
You'll work on AI systems where quality actually matters. No checkbox QA here. Your work protects the integrity of intelligent solutions that people rely on.