
Overview
Position: Data Engineer – MS Fabric
Purpose of the Position: As an MS Fabric Data Engineer, you will be responsible for designing, implementing, and managing scalable data pipelines. The role requires strong experience in implementing and managing a Lakehouse using MS Fabric and the Azure tech stack (ADLS Gen2, ADF, Azure SQL), proficiency in data integration techniques, ETL processes, and data pipeline architectures, and a solid grounding in data quality rules, principles, and implementation.
Location: Bangalore / Pune / Nagpur / Chennai
Type of Employment: FTE
Key Result Areas and Activities:
1. Data Pipeline Development & Optimization
- Design and implement data pipelines using MS Fabric.
- Manage and optimize ETL processes for data extraction, transformation, and loading.
- Conduct performance tuning for data storage and retrieval to enhance efficiency.
2. Data Quality, Governance & Documentation
- Ensure data quality and integrity across all data processes.
- Assist in designing data governance frameworks and policies.
- Generate and maintain documentation for data architecture and data flows.
3. Cross-Functional Collaboration & Requirement Gathering
- Collaborate with cross-functional teams to gather and define data requirements.
- Translate functional and non-functional requirements into system specifications.
4. Technical Leadership & Support
- Provide technical guidance and support to junior data engineers.
- Participate in code reviews and ensure adherence to coding standards.
- Troubleshoot data-related issues and implement effective solutions.
Technical Experience:
Must Have:
- Proficient in MS Fabric, Azure Data Factory, and Azure Synapse Analytics, with deep knowledge of Fabric components such as Notebooks, Lakehouses, OneLake, Data Pipelines, and Real-Time Analytics.
- Skilled in integrating Fabric capabilities for seamless data flow, governance, and cross-team collaboration.
- Strong grasp of Delta Lake, Parquet, distributed data systems, and various data formats (JSON, XML, CSV, Parquet).
- Experienced in ETL/ELT processes, data warehousing, data modeling, and data quality frameworks.
- Proficient in Python, PySpark, Scala, Spark SQL, and T-SQL for complex data transformations.
- Familiar with Agile methodologies and tools such as JIRA, with hands-on experience in monitoring tools and job scheduling.
Good To Have:
- Familiarity with Azure cloud platforms and cloud data services
- MS Purview and open-source data quality libraries such as Deequ, PyDeequ, or Great Expectations
- Experience developing data models to support business intelligence and analytics
- Experience with Power BI dashboards
- Experience with Databricks
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
- 5+ years of experience with MS Fabric, ADF, and Synapse
Qualities:
- Experience with or knowledge of Agile Software Development methodologies.
- Able to consult, write, and present persuasively.
Country: India
Experience: 5 to 9 years