Overview
Bengaluru, Karnataka (Hybrid/Onsite)
2-5 years of experience

About Lyzr
Lyzr is an enterprise AI agent platform that helps companies deploy production-grade AI faster than anyone else. We're growing fast across product, engineering, and go-to-market, and we're looking for talented engineers to grow with us.
This is a rare chance to build at a company that's past the experimental stage but still early enough that your decisions will shape the product and culture for years.
Role Overview
We are seeking a talented AI Engineer to join our Applied AI team in Bengaluru. This role focuses on designing, developing, and deploying production-ready AI agents and machine learning systems that power enterprise workflows. You'll work at the intersection of research and engineering, turning cutting-edge AI capabilities into reliable, scalable solutions that enterprises trust with their critical operations.
As an AI Engineer at Lyzr, you'll build agentic AI systems using LLMs, design agent orchestration patterns, implement RAG architectures, and optimize AI models for production environments. You'll collaborate closely with solutions architects, product managers, and customer engineering teams to ensure AI agents meet enterprise-grade requirements for accuracy, reliability, security, and observability.
Key Responsibilities
- AI Agent Development: Design, develop, and implement autonomous AI agents using LLMs, Lyzr agent frameworks, and custom orchestration logic to solve complex enterprise workflows.
- Data Pipeline Design: Build robust data pipelines for data ingestion, preprocessing, feature engineering, embedding generation, and vector database management to support AI agent operations.
- Agent Orchestration & Workflow Automation: Design multi-agent systems with complex workflows, conditional logic, human-in-the-loop controls, and integration with enterprise systems (CRM, ERP, databases, APIs).
- Production Deployment: Deploy AI models and agents to production environments (cloud, on-premise) using containerization (Docker), orchestration (Kubernetes), and CI/CD pipelines. Ensure scalability and reliability.
- Research & Innovation: Stay current with latest AI research, LLM advancements, agent frameworks, and emerging techniques. Prototype and evaluate new approaches to improve agent capabilities.
- Cross-Functional Collaboration: Work with solutions architects, product managers, and engineers to define requirements, align on technical approaches, and deliver integrated solutions.
- Code Quality & Documentation: Write clean, maintainable, and well-documented code. Conduct code reviews, create technical specifications, and maintain comprehensive documentation for AI systems.
- Testing & Validation: Implement rigorous testing frameworks for AI models and agents including unit tests, integration tests, agent simulation, adversarial testing, and performance benchmarking.
- Security & Compliance: Implement security best practices including data encryption, PII redaction, access controls, and compliance with enterprise security requirements (SOC2, GDPR, HIPAA).
Qualifications
- Experience building and deploying production agentic AI systems or multi-agent applications.
- Advanced knowledge of RAG architectures, hybrid search, reranking, and context optimization techniques.
- Experience with MLOps tools and practices including MLflow, Kubeflow, model versioning, A/B testing, and continuous training pipelines.
- Proficiency in additional programming languages such as JavaScript/TypeScript, Java, or Go.
- Experience with NLP techniques, transformer architectures, and fine-tuning LLMs for specific domains or tasks.
- Understanding of reinforcement learning, particularly RLHF (Reinforcement Learning from Human Feedback) for LLM alignment.
- Experience with containerization (Docker) and orchestration (Kubernetes) for ML model deployment.
- Knowledge of enterprise integration patterns, APIs, microservices, and event-driven architectures.
- Familiarity with BFSI domain, banking workflows, insurance processes, or other enterprise verticals.
- Experience implementing AI governance, bias detection, explainability (XAI), and responsible AI practices.
- Background in distributed systems, parallel processing, or high-performance computing for ML workloads.
Core Skills
- Machine learning and deep learning
- LLM application development and prompt engineering
- Agentic AI and multi-agent systems
- Python programming and software engineering
- Data engineering and preprocessing
- RAG architecture and vector search
- NLP and conversational AI
- Cloud infrastructure and MLOps
- API development and microservices
- Problem-solving and algorithmic thinking
- Collaborative development and code review
- Technical documentation and communication
Technical Stack
- Languages: Python (primary), SQL, JavaScript/TypeScript
- ML Frameworks: PyTorch, TensorFlow, scikit-learn, Hugging Face Transformers
- Agent Frameworks: Lyzr, LangChain, LangGraph, LlamaIndex, custom orchestration
- LLMs: OpenAI (GPT-5), Anthropic (Claude), Google (Gemini), open-source models (Llama, Mistral)
- Vector Databases: Pinecone, Weaviate, Chroma, FAISS, Qdrant
- Cloud Platforms: AWS (SageMaker, Bedrock, Lambda, EC2), Azure (Azure AI, ML Studio), GCP (Vertex AI)
- Data Tools: pandas, NumPy, Apache Spark, data pipeline frameworks
- MLOps: Docker, Kubernetes, MLflow, CI/CD (GitHub Actions, Jenkins), monitoring tools
- Databases: PostgreSQL, MongoDB, Redis, vector databases
- APIs: REST, GraphQL, FastAPI, Flask, message queues (Kafka, RabbitMQ)
What Success Looks Like
- Successfully developing and deploying production AI agents that handle enterprise workflows with high accuracy and reliability
- Reducing agent response time and improving task automation rates for customer deployments
- Contributing to reusable agent components and frameworks that accelerate future development
- Maintaining high code quality with comprehensive testing and documentation
- Collaborating effectively across teams to deliver integrated AI solutions
- Staying current with AI research and bringing innovative approaches to product development
Why Join Lyzr
- Cutting-Edge AI: Work with the latest LLMs, agent frameworks, and generative AI technologies in production environments.
- Real-World Impact: Build AI agents that automate critical enterprise workflows and deliver measurable business value.
- Enterprise Scale: Deploy AI systems that serve Fortune 500 companies and handle production workloads at scale.
- Learning & Growth: Work alongside experienced AI researchers and engineers. Access to latest research, training, and conferences.
- Ownership: Take end-to-end ownership of AI agent features from design through deployment and optimization.
- Innovation Culture: Encouraged to prototype new approaches, experiment with emerging techniques, and contribute to research.
- Responsible AI: Build AI systems with built-in governance, observability, and ethical considerations – not as an afterthought.
- Technical Excellence: Emphasis on code quality, testing, documentation, and engineering best practices.