Information Technology
Full-Time
MagellanicCloud
Responsibilities
- Design, develop, and maintain scalable and efficient ETL pipelines using Azure Data Factory (ADF).
- Leverage PySpark and Databricks to process and transform large datasets.
- Build and optimize data models and data warehouse solutions on Azure.
- Write complex SQL queries for data extraction, manipulation, and analysis.
- Implement data quality checks and monitoring to ensure data accuracy and reliability.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver robust data solutions.
- Troubleshoot and resolve data pipeline issues and performance bottlenecks.
- Stay up-to-date with the latest advancements in data engineering technologies and best practices.
- Contribute to the development and maintenance of data engineering standards.
Skills
- Strong proficiency in PySpark for large-scale data processing.
- Extensive experience with Microsoft Azure cloud services.
- Hands-on expertise in building ETL pipelines using Azure Data Factory (ADF).
- Proven experience with Databricks for data engineering and analytics.
- Solid understanding of ETL (Extract, Transform, Load) concepts and methodologies.
- Excellent SQL skills for data querying and analysis.
Qualifications
- Bachelor's degree in Engineering (B.E.) or Bachelor of Technology (B.Tech) in Computer Science, Information Technology, or a related field.
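To illustrate the kind of work the responsibilities above describe, here is a minimal, stdlib-only sketch of an extract-transform-load step with a basic data-quality check. It is a hedged example, not part of the role description: in practice, Azure Data Factory would orchestrate the extraction, PySpark on Databricks would perform the transformation at scale, and the target would be an Azure data warehouse rather than an in-memory SQLite table. The table name `sales` and the input records are hypothetical.

```python
import sqlite3

# Hypothetical raw records; in a real pipeline these would be extracted
# from a source system by Azure Data Factory.
raw_rows = [
    {"id": 1, "amount": "10.50", "region": "EU"},
    {"id": 2, "amount": "7.25", "region": "US"},
    {"id": 3, "amount": "bad", "region": "EU"},  # fails the quality check
]

def transform(row):
    """Cast types and normalise fields; return None for rows that
    fail a basic data-quality check (non-numeric amount)."""
    try:
        return (row["id"], float(row["amount"]), row["region"].upper())
    except ValueError:
        return None

def run_pipeline(rows, conn):
    """Load clean rows into the warehouse table and report how many
    rows were rejected, which a monitoring system could alert on."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (id INTEGER, amount REAL, region TEXT)"
    )
    clean = [t for t in (transform(r) for r in rows) if t is not None]
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", clean)
    conn.commit()
    return len(rows) - len(clean)

conn = sqlite3.connect(":memory:")
rejected = run_pipeline(raw_rows, conn)
total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
```

The rejected-row count gives the pipeline a simple observable metric, mirroring the data-quality monitoring the role calls for; a PySpark version would express `transform` as DataFrame operations and write to a managed table instead.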