Delhi, Delhi, India
Information Technology
Full-Time
qpact
Overview
We are looking for a highly skilled Senior Data Engineer with 7–10 years of professional experience in data engineering and cloud-based data platforms. The ideal candidate will be proficient in Azure technologies and have strong expertise in data architecture, data integration, data modeling, and advanced ETL/ELT pipelines. You will play a key role in building scalable, reliable, and high-performing data solutions that enable advanced analytics and business insights.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines and data integration workflows using Azure Data Factory, Azure Databricks, and Azure Synapse.
- Implement data warehouse and lakehouse solutions leveraging medallion architecture and modern best practices.
- Perform data profiling, anomaly detection, and quality checks to ensure reliability and accuracy of data.
- Design and optimize ETL/ELT processes for batch and real-time data ingestion.
- Build robust data models (Dimensional/Kimball) for analytics, BI, and reporting solutions.
- Optimize and manage relational database systems for performance, scalability, and availability.
- Collaborate with data scientists, analysts, and business stakeholders to translate requirements into technical solutions.
- Drive the adoption of data engineering best practices, coding standards, and agile methodologies.
- Implement CI/CD pipelines and data DevOps practices for automated deployments and monitoring.
- Ensure compliance with data governance, security, and privacy policies.
- Mentor junior engineers, conduct code reviews, and contribute to knowledge-sharing across the team.
Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
- 7–10 years of professional experience in data engineering.
- Azure Data Factory (ETL/ELT workflows)
- Azure Databricks (big data processing, PySpark/Scala)
- Azure Synapse Analytics (data warehousing and analytics)
- Strong knowledge of data warehouse and lakehouse methodologies including medallion architecture.
- Hands-on experience with ETL/ELT pipelines, data integration, data profiling, and anomaly detection.
- Expertise in SQL and relational databases (SQL Server, Oracle, PostgreSQL, or equivalent).
- Solid understanding of data modeling techniques (Dimensional/Kimball, Star/Snowflake schemas).
- Experience working in Agile/Scrum environments.
- Strong understanding of DevOps practices, CI/CD pipelines, and automated deployments.
- Excellent problem-solving, analytical, and communication skills.
Nice to Have
- Experience with cloud-native data platforms (AWS Redshift, GCP BigQuery) in addition to Azure.
- Familiarity with streaming data technologies (Kafka, Event Hubs, Spark Streaming).
- Knowledge of data governance frameworks, data security, and compliance (GDPR, HIPAA, etc.).
- Exposure to machine learning pipelines and integration with advanced analytics platforms.
- Experience with Python, PySpark, or Scala for data engineering and automation.
Talk to us
Feel free to call, email, or reach out to us on our social media accounts.
Email
info@antaltechjobs.in