Information Technology
Full-Time
Primary Care Plus
Overview
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.
As a Senior Data Engineer at Optum, you’ll help streamline the flow of information and deliver insights that power our Data Analytics web applications, which serve internal and external customers. The team is currently working on features such as OpenAI API integrations, integrating disparate customer data sources into usable datasets, and configuring databases for our web applications. Your work will contribute to lowering the overall cost of healthcare for our consumers and helping people live healthier lives.
Primary Responsibilities
- Data Pipeline Development: Develop and maintain data pipelines that extract, transform, and load (ETL) data from various sources into a centralized data store, such as a data warehouse or data lake. Ensure the smooth flow of data from source systems to destination systems while adhering to data quality and integrity standards (a minimal sketch of such a pipeline follows this list)
- Data Integration: Integrate data from multiple sources and systems, including databases, APIs, log files, streaming platforms, and external data providers. Handle data ingestion, transformation, and consolidation to create a unified and reliable data foundation for analysis and reporting
- Data Transformation and Processing: Develop data transformation routines to clean, normalize, and aggregate data. Apply data processing techniques to work with complex data structures, resolve missing or inconsistent data, and prepare the data for analysis, reporting, or machine learning tasks
- Maintain and enhance the application databases that support our many Data Analytics web applications, and work with our web developers on new requirements and applications
- Contribute to common frameworks and best practices in code development, deployment, and automation/orchestration of data pipelines
- Implement data governance in line with company standards
- Partner with Data Analytics and Product leaders to design best practices and standards for developing production analytic pipelines
- Partner with Infrastructure leaders on architecture approaches to advance the data and analytics platform, including exploring new tools and techniques that leverage the cloud environment (Azure, Snowflake, others)
- Monitoring and Support: Monitor data pipelines and data systems to detect and resolve issues promptly. Develop monitoring tools, alerts, and automated error handling mechanisms to ensure data integrity and system reliability
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
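To give candidates a concrete flavor of this work, here is a minimal Python sketch of such an extract-transform-load routine. The CSV source, column names, and SQLite destination are hypothetical stand-ins, not the team's actual sources or warehouse.

```python
# Minimal ETL sketch: extract from a CSV, clean/normalize/aggregate, and load
# into a database. File, column, and table names are illustrative placeholders.
import logging
import sqlite3

import pandas as pd

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

def extract(path: str) -> pd.DataFrame:
    """Extract: read raw records from a source file."""
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Transform: clean, normalize, and aggregate the raw data."""
    df = df.dropna(subset=["member_id"])  # drop rows missing the key
    df["visit_date"] = pd.to_datetime(df["visit_date"], errors="coerce")
    df["cost"] = df["cost"].fillna(0.0)   # treat missing cost as zero
    # Aggregate to one row per member per day.
    return df.groupby(["member_id", "visit_date"], as_index=False)["cost"].sum()

def load(df: pd.DataFrame, conn: sqlite3.Connection) -> None:
    """Load: write the transformed data to the destination table."""
    df.to_sql("daily_member_cost", conn, if_exists="replace", index=False)

def run_pipeline(path: str, db: str) -> None:
    try:
        df = transform(extract(path))
        with sqlite3.connect(db) as conn:
            load(df, conn)
        log.info("Loaded %d rows", len(df))
    except Exception:
        # In production this would raise an alert, not just log (see the
        # Monitoring and Support item above).
        log.exception("Pipeline failed")
        raise

if __name__ == "__main__":
    run_pipeline("visits.csv", "warehouse.db")
```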
Required Qualifications
- Extensive hands-on experience in developing data pipelines that demonstrate a solid understanding of software engineering principles
- Proficiency in Python across a range of general-purpose use cases, including but not limited to developing data APIs and pipelines
- Solid understanding of software engineering principles (microservices applications and ecosystems)
- Fluent in SQL (Snowflake/SQL Server), with experience using window functions and other advanced features (see the window-function sketch after this list)
- Understanding of DevOps tools, Git workflows, and building CI/CD pipelines
- Solid understanding of Airflow (see the DAG sketch after this list)
- Proficiency in design and implementation of pipelines and stored procedures in SQL Server and Snowflake
- Demonstrated ability to work with business and technical audiences on business requirement meetings, technical white boarding exercises, and SQL coding or debugging sessions
- Bachelor’s Degree or higher in Database Management, Information Technology, Computer Science or similar
- Experience with Azure Data Factory or Apache Airflow
- Experience with Azure Databricks or Snowflake
- Experience working in projects with agile/scrum methodologies
- Experience with shell scripting languages
- Experience working with Apache Kafka, including building producer and consumer applications (see the sketch after this list)
- Experience with production quality ML and/or AI model development and deployment
- Experience working with Kubernetes and Docker, and knowledgeable about cloud infrastructure automation and management (e.g., Terraform)
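As a self-contained illustration of the window-function fluency listed above, the sketch below runs its SQL through Python's built-in sqlite3 module (SQLite 3.25+ supports window functions) purely so it executes anywhere; the table and column names are hypothetical, and the same OVER (PARTITION BY ... ORDER BY ...) pattern carries over to Snowflake and SQL Server.

```python
# Illustrative window-function query; table/column names are placeholders.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE claims (member_id TEXT, claim_date TEXT, amount REAL);
    INSERT INTO claims VALUES
        ('A', '2024-01-05', 120.0),
        ('A', '2024-02-10',  80.0),
        ('B', '2024-01-20', 300.0);
""")

# Rank each member's claims by recency and keep a per-member running total.
rows = conn.execute("""
    SELECT member_id,
           claim_date,
           amount,
           ROW_NUMBER() OVER (PARTITION BY member_id
                              ORDER BY claim_date DESC) AS recency_rank,
           SUM(amount) OVER (PARTITION BY member_id
                             ORDER BY claim_date)       AS running_total
    FROM claims
    ORDER BY member_id, claim_date;
""").fetchall()

for row in rows:
    print(row)
```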
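For the Airflow item, here is a minimal DAG sketch, assuming Airflow 2.4+ (for the schedule argument); the DAG id, task ids, and callables are hypothetical placeholders, not this team's actual pipeline.

```python
# Minimal Airflow DAG sketch: a daily two-step extract-then-load pipeline.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_fn():
    print("pull data from source systems")  # placeholder extract step

def load_fn():
    print("write transformed data to the warehouse")  # placeholder load step

with DAG(
    dag_id="daily_member_cost",  # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_fn)
    load = PythonOperator(task_id="load", python_callable=load_fn)
    extract >> load  # run extract before load
```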
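And for the Kafka item, a minimal producer/consumer sketch using the kafka-python client; the broker address and topic name are placeholders.

```python
# Minimal Kafka producer/consumer sketch (kafka-python client).
import json

from kafka import KafkaConsumer, KafkaProducer

TOPIC = "member-events"  # placeholder topic

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"member_id": "A", "event": "visit"})
producer.flush()

consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.value)  # process each event as it arrives
    break                 # stop after one message in this sketch
```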
External Candidate Application
Internal Employee Application