Overview
About the job:
We're looking for a DevOps Engineer who lives and breathes systems, automation, and efficiency. You'll bridge data science, engineering, and deployment, ensuring our AI-driven solutions move seamlessly from code to production. This is a hands-on role for someone who thrives in fast-paced environments, enjoys solving complex problems, and wants to build things that work and make a real impact.
Key Responsibilities:
1. Build, manage, and optimize CI/CD pipelines for AI-driven products.
2. Collaborate with data scientists to deploy ML models into production.
3. Automate infrastructure and streamline deployment using scripting tools.
4. Monitor and improve system performance with observability tools.
5. Design scalable, secure, and efficient DevOps/MLOps systems that grow with our products.
What You'll Gain:
1. Hands-on experience building real-world AI and automation systems for India's MSME sector.
2. Opportunities to experiment with AI, IoT, and VR on live projects.
3. A culture that values curiosity, learning, and ethical innovation.
4. The chance to shape how small businesses in India adopt AI and see your work make a tangible impact.
We're not looking for someone to just maintain systems; we're looking for someone who wants to move them forward. If that sounds like you, you're our kind of engineer. Let's build the tech that builds India.
Who can apply:
Only those candidates can apply who:
- have a minimum of 1 year of experience
- are Computer Science Engineering students
Salary:
₹ 3,60,000 - 7,20,000 /year
Experience:
1 year(s)
Deadline:
2026-01-05 23:59:59
Skills required:
Python, Docker, Microsoft Azure, Kubernetes, Google Cloud Platform (GCP), CI/CD, and Terraform
Other Requirements:
1. 1-5 years of experience in DevOps, MLOps, or systems engineering.
2. At least 1 year of experience with Google Cloud Platform (GCP).
3. Strong knowledge of Microsoft Azure (especially Azure ML, Azure DevOps) & Google Cloud Platform (GCP).
4. Proficiency in Python or scripting languages (Shell, Bash, PowerShell, or Perl).
5. Solid grasp of CI/CD, containerization (Docker, Kubernetes), and tools like Jenkins, GitHub Actions, or ArgoCD.
6. Familiarity with monitoring tools (Prometheus, Grafana, Datadog, or New Relic).
7. Understanding of data pipelines (SQL, NoSQL, ETL).
8. Exposure to MLOps tools such as Kubeflow, MLflow, or Weights & Biases.
9. Experience with AWS or other cloud environments.
10. Experience working with ML model lifecycles or AIOps tools (PagerDuty, BigPanda, etc.).
11. Knowledge of model serving frameworks (TensorFlow Serving, TorchServe).
12. Awareness of security and governance in ML systems.
About Company:
We are a boutique software engineering services company with expertise in mobile and web app development, and we provide technology solutions in IoT and deep learning for clients worldwide. We engage with startups throughout their product development lifecycle and deliver software products on time using cutting-edge technologies. Startups trust us because we specialise in rapid development and use strong, tested frameworks that work across scales.