Hyderabad, Telangana, India
Information Technology
Full-Time
LeewayHertz
Overview
Job Description
We are seeking a highly experienced and innovative Senior Data Engineer with a strong background in hybrid cloud data integration, pipeline orchestration, and AI-driven data modeling. This role is responsible for designing, building, and optimizing robust, scalable, production-ready data pipelines across both AWS and Azure, supporting modern data architectures such as CEDM and Data Vault 2.0.
Responsibilities
- Design and develop hybrid ETL/ELT pipelines using AWS Glue and Azure Data Factory (ADF).
- Process files from AWS S3 and Azure Data Lake Gen2, including schema validation and data profiling.
- Implement event-based orchestration using AWS Step Functions and Apache Airflow (Astronomer).
- Develop and maintain bronze → silver → gold data layers using DBT or Coalesce.
- Create scalable ingestion workflows using Airbyte, AWS Transfer Family, and Rivery.
- Integrate with metadata and lineage tools like Unity Catalog and OpenMetadata.
- Build reusable components for schema enforcement, EDA, and alerting (e.g., MS Teams).
- Work closely with QA teams to integrate test automation and ensure data quality.
- Collaborate with cross-functional teams including data scientists and business stakeholders to align solutions with AI/ML use cases.
- Document architectures, pipelines, and workflows for internal stakeholders.
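For illustration, the bronze → silver → gold layering referenced above can be sketched in plain Python. In practice these layers would be built as DBT/Coalesce models or PySpark jobs; the record fields here are hypothetical:

```python
from datetime import date

# Bronze: raw records as landed from S3 / ADLS Gen2 (untyped, possibly dirty).
bronze = [
    {"order_id": "1", "amount": "19.99", "order_date": "2024-05-01"},
    {"order_id": "2", "amount": "bad",   "order_date": "2024-05-01"},
    {"order_id": "3", "amount": "5.00",  "order_date": "2024-05-02"},
]

def to_silver(rows):
    """Silver: validated and typed records; rows failing validation are dropped."""
    silver = []
    for row in rows:
        try:
            silver.append({
                "order_id": int(row["order_id"]),
                "amount": float(row["amount"]),
                "order_date": date.fromisoformat(row["order_date"]),
            })
        except (KeyError, ValueError):
            continue  # in production, route to a quarantine table and alert
    return silver

def to_gold(rows):
    """Gold: business-level aggregate (revenue per day)."""
    revenue = {}
    for row in rows:
        revenue[row["order_date"]] = revenue.get(row["order_date"], 0.0) + row["amount"]
    return revenue

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # one total per order_date; the malformed bronze row is excluded
```

The same shape scales up directly: each layer reads only from the layer below, so validation and aggregation logic stay independently testable.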
Essential Skills:
- Experience with cloud platforms: AWS (Glue, Step Functions, Lambda, S3, CloudWatch, SNS, Transfer Family) and Azure (ADF, ADLS Gen2, Azure Functions, Event Grid).
- Skilled in transformation and ELT tools: Databricks (PySpark), DBT, Coalesce, and Python.
- Proficient in data ingestion using Airbyte, Rivery, SFTP/Excel files, and SQL Server extracts.
- Strong understanding of data modeling techniques including CEDM, Data Vault 2.0, and Dimensional Modeling.
- Hands-on experience with orchestration tools such as AWS Step Functions, Airflow (Astronomer), and ADF Triggers.
- Expertise in monitoring and logging with CloudWatch, AWS Glue Metrics, MS Teams Alerts, and Azure Data Explorer (ADX).
- Familiar with data governance and lineage tools: Unity Catalog, OpenMetadata, and schema drift detection.
- Proficient in version control and CI/CD using GitHub, Azure DevOps, CloudFormation, Terraform, and ARM templates.
- Experienced in data validation and exploratory data analysis with pandas profiling, AWS Glue Data Quality, and Great Expectations.
- Excellent communication and interpersonal skills, with the ability to engage with teams.
- Strong problem-solving, decision-making, and conflict-resolution abilities.
- Proven ability to work independently and lead cross-functional teams.
- Ability to work in a fast-paced, dynamic environment and handle sensitive issues with discretion and professionalism.
- Ability to maintain confidentiality and handle sensitive information with discretion and attention to detail.
- Must have a strong work ethic and be trustworthy.
- Must be highly collaborative and team-oriented, with a commitment to excellence.
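As a concrete illustration of the schema-drift detection listed above, a minimal check against a hard-coded contract is sketched below. The column names are hypothetical; a real pipeline would source the expected schema from Unity Catalog or OpenMetadata rather than a literal:

```python
# Hypothetical schema contract: column name -> expected Python type.
EXPECTED_SCHEMA = {"order_id": int, "amount": float, "order_date": str}

def detect_schema_drift(record, expected=EXPECTED_SCHEMA):
    """Compare one incoming record against the expected schema.

    Returns a dict of drift findings; an empty dict means no drift.
    """
    drift = {}
    missing = set(expected) - set(record)
    unexpected = set(record) - set(expected)
    if missing:
        drift["missing_columns"] = sorted(missing)
    if unexpected:
        drift["unexpected_columns"] = sorted(unexpected)
    type_mismatches = {
        col: type(record[col]).__name__
        for col, expected_type in expected.items()
        if col in record and not isinstance(record[col], expected_type)
    }
    if type_mismatches:
        drift["type_mismatches"] = type_mismatches
    return drift

print(detect_schema_drift({"order_id": 1, "amount": 9.5, "order_date": "2024-05-01"}))  # {}
print(detect_schema_drift({"order_id": "1", "currency": "USD", "order_date": "2024-05-01"}))
```

A non-empty result would typically feed the alerting path (e.g., an MS Teams webhook) rather than fail the pipeline outright.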
- Proficiency in SQL and at least one programming language (e.g., Python, Scala).
- Experience with cloud data platforms (e.g., AWS, Azure, GCP) and their data and AI services.
- Knowledge of ETL tools and frameworks (e.g., Apache NiFi, Talend, Informatica).
- Deep understanding of AI/Generative AI concepts and frameworks (e.g., TensorFlow, PyTorch, Hugging Face, OpenAI APIs).
- Experience with data modeling, data structures, and database design.
- Proficiency with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake).
- Hands-on experience with big data technologies (e.g., Hadoop, Spark, Kafka).
- Demonstrates proactive thinking.
- Strong interpersonal skills, business acumen, and mentoring ability.
- Ability to work under pressure and meet multiple daily client deadlines with a mature approach.
- Bachelor’s degree in Engineering with a specialization in Computer Science, Artificial Intelligence, Information Technology, or a related field.
- 9+ years of experience in data engineering and data architecture.
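Data Vault 2.0, named above, identifies hub records by a hash of the business key rather than a sequence. A minimal sketch (MD5 with a hypothetical customer business key; real implementations fix casing, trimming, and delimiter rules per project convention):

```python
import hashlib

def hub_hash_key(*business_key_parts, delimiter="||"):
    """Compute a Data Vault 2.0-style hash key from a business key.

    Parts are trimmed and upper-cased before hashing so that
    'cust-001 ' and 'CUST-001' resolve to the same hub record.
    """
    normalized = delimiter.join(str(p).strip().upper() for p in business_key_parts)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

# Same business key in different formatting yields the same hub key.
print(hub_hash_key("cust-001 ") == hub_hash_key("CUST-001"))  # True
```

The delimiter keeps composite keys unambiguous: ("a", "b") and ("ab",) hash differently.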