Bangalore, Karnataka, India
Information Technology
Full-Time
Litmus7
Overview
Job Description
We are looking for a seasoned Senior Data Engineer with a passion for building scalable, reliable, and cutting-edge data solutions on cloud platforms. The ideal candidate will bring deep expertise in Google Cloud Platform (GCP), BigQuery, and modern data engineering practices, including experience with ingestion and transformation tools, as well as proficiency in medallion architecture to drive data quality and governance.
Experience with Databricks on AWS is highly valued and considered a distinct advantage.
As a key member of our data engineering team, you will architect and implement end-to-end data pipelines that power actionable business intelligence, advanced analytics, and machine learning initiatives. This role offers the opportunity to work with diverse data ecosystems and influence the future of our data infrastructure.
Key Responsibilities
- Architect, develop, and maintain robust ETL/ELT data pipelines on GCP, primarily leveraging BigQuery as a highly scalable data warehouse.
- Design and implement data ingestion frameworks utilizing tools such as Google Cloud Dataflow, Pub/Sub, Kafka, Fivetran, or Talend, ensuring efficient, real-time or batch data capture from multiple sources.
- Lead and optimize data transformation processes using SQL, dbt (data build tool), Apache Spark, or Dataflow to implement layered data architectures (Bronze, Silver, Gold) following medallion architecture principles, improving data quality and maintainability.
- Collaborate cross-functionally with data scientists, analysts, and business teams to translate requirements into functional, scalable data assets.
- Build and automate complex workflows using orchestration tools such as Google Cloud Composer or Apache Airflow, ensuring reliable and observable pipelines.
- Manage data governance, lineage, and security practices to ensure compliance with internal policies and regulatory standards (GDPR, CCPA).
- Conduct root cause analysis and troubleshoot data issues to maintain high data integrity and pipeline uptime.
- Drive cloud cost optimization strategies related to data storage and compute resources.
- Mentor junior engineers and contribute to developing best practices and documentation for data engineering processes.