About the Role
Zime AI is building an AI-native behaviour and revenue intelligence platform. Our systems power real-time AI sales playbooks, conversation analysis, and workflow automation used by revenue teams at scale. We're looking for an AI Engineer to join our AI Pod team in Bangalore. You'll own GenAI features end-to-end—from prompt engineering to production deployment—working on systems that directly impact how sales teams close deals. This is a builder role where your LLM expertise will shape product capabilities.
Key Responsibilities
- Design, build, and own GenAI features for sales intelligence products (AMA, AI Playbook Automation)
- Develop and optimize prompt templates and chains for LLM applications, ensuring high-quality, reliable outputs
- Implement RAG (Retrieval-Augmented Generation) pipelines with vector search and semantic retrieval (see the sketch after this list)
- Build and maintain integrations with LLM APIs (OpenAI, Anthropic, etc.) and orchestration frameworks
- Own AI system performance monitoring using observability tools (Langfuse, etc.)
- Collaborate on system architecture decisions and technical tradeoffs for AI features
- Deploy and maintain AI services in production (Kubernetes, FastAPI backends)
- Evaluate and improve LLM outputs through systematic testing and iteration
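To give a concrete sense of the RAG work mentioned above, here is a minimal sketch of the kind of pipeline this role covers. It assumes Chroma as the vector store and the OpenAI Python SDK; the collection name, documents, model, and the `answer_question` helper are illustrative only, not our actual stack.

```python
# Minimal RAG sketch: retrieve relevant call snippets, then ground the LLM answer in them.
# Assumes the `chromadb` and `openai` packages; all names and data below are illustrative.
import chromadb
from openai import OpenAI

chroma = chromadb.Client()
llm = OpenAI()  # reads OPENAI_API_KEY from the environment

calls = chroma.get_or_create_collection("sales_call_snippets")
calls.add(
    ids=["c1", "c2"],
    documents=[
        "Prospect asked about SOC 2 compliance and data residency.",
        "Champion confirmed budget approval is expected next quarter.",
    ],
)

def answer_question(question: str) -> str:
    # 1. Semantic retrieval over the vector store.
    hits = calls.query(query_texts=[question], n_results=2)
    context = "\n".join(hits["documents"][0])

    # 2. Prompt the LLM with the retrieved context.
    response = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided call context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer_question("What security concerns did the prospect raise?"))
```

In practice this is where prompt templates, retrieval tuning, and evaluation loops come together, which is the core of the role.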
What You Will Do Day-to-Day
- Write and refine prompts for sales conversation analysis and insights generation
- Debug LLM behavior issues and optimize for latency, cost, and quality
- Build FastAPI endpoints that integrate AI capabilities with the product (a sketch follows this list)
- Review pull requests and pair with team members on AI features
- Monitor production AI systems and respond to quality or performance issues
- Experiment with new LLM techniques, models, and frameworks
- Participate in daily standups and collaborate with the AI Pod team
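As an illustration of the FastAPI work referenced above, the sketch below shows one way an endpoint might wrap an LLM call for conversation analysis. The route, request/response schema, prompt, and model are hypothetical, not Zime's actual API.

```python
# Illustrative FastAPI endpoint wrapping an LLM call for conversation analysis.
# Route, schema, prompt, and model are hypothetical examples.
from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

app = FastAPI()
llm = OpenAI()

class TranscriptRequest(BaseModel):
    transcript: str

class InsightResponse(BaseModel):
    summary: str

@app.post("/v1/calls/summarize", response_model=InsightResponse)
def summarize_call(req: TranscriptRequest) -> InsightResponse:
    # Single LLM call; production code would add retries, timeouts, and tracing (e.g. Langfuse).
    completion = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Summarize this sales call in three bullet points."},
            {"role": "user", "content": req.transcript},
        ],
    )
    return InsightResponse(summary=completion.choices[0].message.content)
```

Locally, a service like this could be run with `uvicorn main:app --reload` (assuming the file is saved as main.py); in production, similar services are deployed on Kubernetes as noted in the responsibilities above.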
Requirements (Non-Negotiable)
- 5-6 years of experience in software engineering with a strong focus on GenAI/LLM applications
- Bachelor's or Master's degree in Computer Science from a top-tier university
- Hands-on experience with prompt engineering and building production LLM applications
- Proficiency in Python with experience in FastAPI, Django, or similar frameworks
- Working knowledge of vector databases (Pinecone, Weaviate, Qdrant, Chroma) and embedding models
- Experience with LLM orchestration frameworks (LangChain, LangGraph, or similar)
- Understanding of RAG architectures and semantic search
- Familiarity with Docker, Kubernetes, and cloud platforms (AWS/GCP)
- Strong problem-solving skills and ability to work independently with ownership
Good to Have
- Experience fine-tuning LLMs or working with open-source models (Llama, Mistral, etc.)
- Knowledge of LLM evaluation frameworks and systematic prompt testing
- Experience with MongoDB, PostgreSQL, or Elasticsearch
- Familiarity with observability and monitoring tools (Langfuse, Weights & Biases, LangSmith)
- Understanding of sales processes, CRM systems, or B2B SaaS products
- Prior startup experience with hands-on ownership of features
- Public GitHub contributions or technical writing on AI/ML topics
What Success Looks Like
- High-quality, reliable AI features shipped to production that users love
- Shorter prompt iteration cycles through systematic evaluation and optimization
- Improved LLM output quality metrics (accuracy, relevance, consistency)
- Strong collaboration with engineering and product teams on AI roadmap
- Clear ownership and fast resolution of production AI issues
- Technical mentorship and knowledge sharing within the team
Why Join Us
- Work closely with the AI Pod lead on core product and technical decisions
- Build real-world GenAI systems serving enterprise customers at scale
- High ownership with minimal bureaucracy—ship features fast
- Direct impact on product capabilities and company growth
- Competitive compensation and equity
- Bangalore office with a focused, collaborative team
Location
Bangalore (On-site)
Experience Level
5-6 years
To Apply: Share your resume along with examples of GenAI projects you've built (GitHub repos, deployed applications, or detailed descriptions of your LLM work).