Bangalore, Karnataka, India
Information Technology
Other
Optum India
Overview
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities
- Design, develop, and support scalable data engineering and AI-enabled application solutions using Apache Spark, Scala, and Python
- Build, optimize, and maintain high-performance batch and near real-time data pipelines for ingestion, transformation, and analytics across large-scale datasets (a minimal pipeline sketch follows this list)
- Develop and manage cloud-based data workflows on AWS, with hands-on use of services such as Amazon EMR, AWS Step Functions, AWS Lambda, and Amazon OpenSearch
- Contribute to the design and deployment of AI agents and Generative AI solutions using platforms such as Google Gemini and Amazon Bedrock
- Implement Retrieval-Augmented Generation (RAG) architectures by integrating large language models with enterprise data sources, search capabilities, and knowledge repositories
- Work with modern AI integration standards and technologies, including MCP (Model Context Protocol) and related agentic AI frameworks
- Collaborate closely with cross-functional teams including product management, data science, platform engineering, and business stakeholders to deliver scalable and reliable solutions
- Ensure application performance, reliability, maintainability, and operational excellence across data and AI systems
- Participate in technical design discussions, architecture reviews, code reviews, and engineering best practice initiatives
- Troubleshoot production issues, perform root cause analysis, and implement corrective and preventive measures
- Support automation, monitoring, deployment, and continuous improvement initiatives across the engineering lifecycle
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
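By way of illustration, here is a minimal sketch of the kind of Spark/Scala batch pipeline described in the bullets above. The object name, S3 paths, and column names are hypothetical placeholders, not details from this posting:

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

// Minimal batch ingestion-and-transform job. All paths and columns
// below are hypothetical examples.
object ClaimsDailyBatch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("claims-daily-batch")
      .getOrCreate()

    // Ingest raw CSV records landed by an upstream process.
    val raw = spark.read
      .option("header", "true")
      .csv("s3://example-bucket/raw/claims/")

    // Transform: cast types, derive a partition column, drop invalid rows.
    val cleaned = raw
      .withColumn("amount", F.col("amount").cast("double"))
      .withColumn("event_date", F.to_date(F.col("event_ts")))
      .filter(F.col("amount").isNotNull)

    // Write analytics-ready Parquet, partitioned for downstream queries.
    cleaned.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/claims/")

    spark.stop()
  }
}
```

On Amazon EMR, a job like this would typically be packaged as an assembly JAR and launched via spark-submit as an EMR step, often orchestrated by AWS Step Functions.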
Required Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related technical discipline
- 3+ years of professional software engineering experience, with a solid focus on data engineering, big data systems, AI platforms, or backend application development
- Solid hands-on experience with Scala and Apache Spark in developing distributed data processing applications
- Hands-on experience working with AWS cloud services, particularly:
  - Amazon EMR
  - AWS Step Functions
  - Amazon OpenSearch
- Experience with, or working knowledge of, building and deploying AI agents or Generative AI applications using Google Gemini, Amazon Bedrock, or similar LLM platforms
- Good understanding of RAG concepts, vector-based retrieval, prompt orchestration, and enterprise knowledge integration (a retrieval sketch follows this list)
- Familiarity with MCP (Model Context Protocol) and awareness of emerging AI and agentic application technologies
- Solid understanding of distributed systems, data pipeline architecture, ETL/ELT design, and performance optimization techniques
- Good knowledge of SQL, data modeling, and processing of structured and unstructured data
- Familiarity with software engineering best practices including version control, automated testing, code reviews, documentation, and secure development principles
- Proficiency in Python for data engineering, workflow automation, and AI/ML-related development
- Proven analytical, problem-solving, and debugging capabilities
- Effective communication and collaboration skills, with the ability to work successfully in a team-oriented environment
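To make the RAG bullet concrete, the following is a self-contained sketch of the retrieval step: rank pre-embedded document chunks by cosine similarity to a query embedding, then splice the top hits into a grounded prompt. In practice the embeddings would come from an LLM platform and the index from a vector store such as Amazon OpenSearch; both are stubbed here with toy values, and all names and texts are hypothetical:

```scala
object RagRetrievalSketch {
  final case class Chunk(id: String, text: String, embedding: Vector[Double])

  // Cosine similarity between two embedding vectors.
  def cosine(a: Vector[Double], b: Vector[Double]): Double = {
    val dot  = a.zip(b).map { case (x, y) => x * y }.sum
    val norm = math.sqrt(a.map(x => x * x).sum) * math.sqrt(b.map(x => x * x).sum)
    if (norm == 0.0) 0.0 else dot / norm
  }

  // Rank indexed chunks by similarity to the query embedding.
  def topK(query: Vector[Double], index: Seq[Chunk], k: Int): Seq[Chunk] =
    index.sortBy(c => -cosine(query, c.embedding)).take(k)

  // Assemble a prompt grounded in the retrieved context.
  def buildPrompt(question: String, context: Seq[Chunk]): String =
    s"""Answer using only the context below.
       |Context:
       |${context.map(_.text).mkString("\n---\n")}
       |Question: $question""".stripMargin

  def main(args: Array[String]): Unit = {
    // Toy 3-dimensional embeddings; real ones come from an embedding model.
    val index = Seq(
      Chunk("a", "Prior authorization rules for imaging.", Vector(0.9, 0.1, 0.0)),
      Chunk("b", "Pharmacy benefit tier definitions.", Vector(0.1, 0.9, 0.0))
    )
    val query = Vector(0.8, 0.2, 0.0)
    println(buildPrompt("What are the prior authorization rules?", topK(query, index, 1)))
  }
}
```

The same shape carries over to a production setup: the in-memory ranking is replaced by a k-NN query against the vector store, and the assembled prompt is sent to the LLM platform.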
Preferred Qualifications
- Experience with Apache Kafka for streaming, event-driven, or asynchronous data architectures (a streaming sketch follows this list)
- Experience with Docker, Kubernetes, or similar containerization and orchestration technologies
- Exposure to DevOps practices and related tools for CI/CD, deployment automation, infrastructure management, and monitoring
- Good working knowledge of shell scripting for operational automation and workflow support
- Solid Unix/Linux command-line knowledge and system troubleshooting skills
- Familiarity with workflow orchestration and scheduling platforms used in modern data ecosystems
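For the Kafka bullet above, here is a minimal sketch of a Kafka-fed ingestion job using Spark Structured Streaming (it assumes the spark-sql-kafka connector is on the classpath). The broker address, topic name, and output paths are hypothetical placeholders:

```scala
import org.apache.spark.sql.SparkSession

object EventsStreamSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("events-stream")
      .getOrCreate()

    // Subscribe to a topic; Kafka records arrive as binary key/value pairs.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()
      .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value")

    // Land micro-batches as Parquet; the checkpoint directory lets the
    // query recover its Kafka offsets after a restart.
    val query = events.writeStream
      .format("parquet")
      .option("path", "s3://example-bucket/stream/events/")
      .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
      .start()

    query.awaitTermination()
  }
}
```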