
Overview
Role description
Job Title: Senior Data Engineer / Technical Lead (AWS & ETL)
Job Summary:
We are looking for a skilled and proactive Data Engineer/Technical Lead to design, build, and manage cloud-based data pipelines. You'll work with AWS services, Python, and ETL tools to ensure smooth data processing, while also guiding your team and supporting project execution.
Key Responsibilities:
1. Develop & Maintain Applications:
Write clean, efficient, and well-tested code.
Build and maintain ETL pipelines using Python and AWS tools (see the sketch after this list).
Reuse and improve existing components when possible.
2. Cloud Data Work:
Set up and manage data replication from DB2 to AWS Aurora using Qlik Replicate and AWS DMS.
Process and store data in the cloud using AWS Lambda, Glue, and Apache Hudi.
Use Terraform to manage AWS infrastructure as code (IaC).
3. Monitoring & Support:
Monitor data pipelines using CloudWatch, Splunk, and Dynatrace.
Troubleshoot issues and ensure data accuracy and system reliability.
4. Testing & Quality:
Create and review unit tests (see the example tests after this list).
Work with the testing team to ensure quality standards are met.
Reduce bugs and improve performance.
5. Team & Project Support:
Help manage project tasks and timelines.
Support and mentor team members.
Set goals, track progress, and support team development and retention.
6. Work with Customers:
Understand customer needs and clarify requirements.
Present solutions and demos to clients.
Coordinate with customer architects to finalize technical designs.
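For illustration only, here is a minimal sketch of the kind of Python/Lambda ETL step item 2 describes: it reads raw change records landed in S3, applies a small transform, and writes the curated output back to S3 with boto3. The bucket names, event shape, and transform fields are hypothetical; a real pipeline would also involve Glue jobs, Apache Hudi tables, and Terraform-managed configuration.

    import json
    import os
    from datetime import datetime, timezone

    import boto3

    # Hypothetical names; in practice these would come from Terraform-managed
    # environment variables on the Lambda function.
    TARGET_BUCKET = os.environ.get("TARGET_BUCKET", "example-curated-bucket")
    TARGET_PREFIX = os.environ.get("TARGET_PREFIX", "curated/orders/")

    s3 = boto3.client("s3")

    def transform(record: dict) -> dict:
        """Illustrative cleanup of one change record."""
        return {
            "order_id": record.get("order_id"),
            "amount": float(record.get("amount", 0)),
            "processed_at": datetime.now(timezone.utc).isoformat(),
        }

    def handler(event, context):
        """Read raw records from S3, transform them, and write the result back."""
        body = s3.get_object(Bucket=event["bucket"], Key=event["key"])["Body"].read()
        records = [json.loads(line) for line in body.splitlines() if line.strip()]

        curated = [transform(r) for r in records]
        out_key = TARGET_PREFIX + os.path.basename(event["key"])

        s3.put_object(
            Bucket=TARGET_BUCKET,
            Key=out_key,
            Body="\n".join(json.dumps(r) for r in curated).encode("utf-8"),
        )
        return {"records_processed": len(curated), "output_key": out_key}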
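The unit-testing expectation in item 4 could be met with short pytest tests around such transform logic; the module and field names here are again hypothetical.

    # test_etl_handler.py
    from etl_handler import transform  # hypothetical module holding the ETL sketch

    def test_transform_maps_fields():
        result = transform({"order_id": "A-100", "amount": "19.90"})
        assert result["order_id"] == "A-100"
        assert result["amount"] == 19.90
        assert "processed_at" in result

    def test_transform_defaults_missing_amount_to_zero():
        result = transform({"order_id": "A-101"})
        assert result["amount"] == 0.0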
Mandatory Skills:
Python (strong experience)
AWS Services: Lambda, S3, EC2, Glue, DMS, Aurora, CloudWatch, MWAA, EMR Serverless
ETL Pipelines (designing and managing)
Qlik Replicate (data replication)
Terraform (intermediate level for infrastructure automation)
Nice-to-Have Skills:
Apache Hudi for data lake storage
Experience with Dynatrace and Splunk for monitoring
Knowledge of security best practices in AWS
Experience in performance tuning and design patterns
Skills
Python, AWS Services, ETL Pipelines