Overview
Position: Senior Data Engineer
Experience: 5+ years
Qualification: Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field
Location: Gurgaon (currently work from home until further notice)
We are looking for a highly skilled Senior Data Engineer to join our data team. The ideal candidate will have deep expertise in SQL, Python, and PySpark, along with experience working with Hive Metastore and basic proficiency in Java. Exposure to Databricks and modern data platforms is highly preferred. As a Senior Data Engineer, you will be instrumental in designing scalable data pipelines, optimizing data workflows, and enabling advanced analytics across the organization.
Key Responsibilities
- Design, develop, and optimize large-scale data processing pipelines using PySpark and SQL.
- Build and maintain ETL workflows that ingest data from multiple sources and transform it for business insights.
- Work closely with Data Architects, Analysts, and Data Scientists to understand data needs and deliver reliable data solutions.
- Leverage Databricks to implement scalable data lake and data warehouse solutions.
- Implement data quality checks and performance optimizations.
- Integrate with Hive Metastore and manage schema evolution effectively.
- Collaborate in Agile teams, contributing to sprint planning, code reviews, and continuous integration.
Required Skills and Qualifications
- 5+ years of experience in data engineering roles.
- Strong proficiency in SQL, Python, and PySpark.
- Hands-on experience with Apache Spark in a distributed environment.
- Solid understanding of Hive Metastore and its integration with Spark.
- Experience with or exposure to Databricks (including Delta Lake, Unity Catalog, and DBFS).
- Working knowledge of Java for data integration tasks.
- Strong problem-solving skills with the ability to analyze complex data sets.
- Familiarity with Git, CI/CD tools, and DevOps for data engineering pipelines.
Nice to Have
- Experience with cloud platforms such as Azure, AWS, or GCP.
- Exposure to data governance tools and metadata management.
- Familiarity with data modeling techniques (Dimensional Modeling, Data Vault, etc.).
- Knowledge of streaming technologies such as Kafka or Spark Structured Streaming.
- Understanding of data security practices such as masking, encryption, and RBAC.
Why Join Us?
- Work with a passionate and collaborative team.
- Build next-generation data platforms and pipelines.
Job Type: Full-time
Pay: ₹2,000,000.00 - ₹3,000,000.00 per year
Benefits:
- Provident Fund
- Work from home
Schedule:
- Monday to Friday
Application Question(s):
- Do you have 5+ years of experience in a data engineering role?
Education:
- Bachelor's (Required)
Work Location: Remote