Cognine Technologies
Overview
Experience: 3-6 years
Location: Hyderabad
Job Type: Full-Time
Data Engineer (Azure Data Factory and Azure Databricks)
Job Description
We seek a Data Engineer with at least 5 years of experience in Azure Databricks, Azure Data Lake Storage Gen2 (ADLS), and Azure Data Factory (ADF). The candidate will design, build, and maintain data pipelines using ADF and Databricks, and should be able to handle data modelling and governance. They will also work closely with our data science and engineering teams to develop and deploy machine learning models and other advanced analytics solutions.
Primary skills: Azure Data Factory, Azure Databricks, SQL, ADLS, Spark SQL, Python (Pandas), PySpark or Scala
Secondary skills: basics of Azure security (RBAC, Azure AD), Hadoop, HDFS, ADLS, and Azure DBFS; experience with Power BI or Tableau visualization tools is a plus.
Responsibilities
- Design, build, and maintain data pipelines using Azure ADF/Databricks.
- Strong expertise in Azure Data Factory, Databricks, and related technologies such as Azure Synapse Analytics and Azure Functions.
- Hands-on experience in designing and implementing data pipelines and workflows in Azure Data Factory and Databricks.
- Experience in extraction, transformation, and loading (ETL) of data from multiple data sources into target databases using Azure Databricks, Spark SQL, PySpark, and Azure SQL.
- Designing and implementing data ingestion pipelines from multiple sources using Azure Databricks.
- Sound working experience in data cleansing, transformation, business logic, incremental transformation of data, and merging data with data mart tables (a minimal PySpark sketch of this pattern appears after this list).
- Developing scalable and reusable frameworks for ingesting data sets.
- Integrating the end-to-end data pipeline to move data from source systems to target data repositories, ensuring the quality and consistency of data is maintained at all times.
- Interacting with stakeholders and leaders to understand business goals and data requirements.
- Experience working with Agile (Scrum, Sprint) and Waterfall methodologies.
- Collaborate with data engineers, data architects, and business analysts to understand and translate business requirements into technical designs.
- Provide technical guidance and support to junior team members.
- Design, develop, and maintain SQL databases, including creating database schemas, tables, views, stored procedures, and triggers.
- Self-starter and team player with excellent communication, problem-solving, and interpersonal skills, and a good aptitude for learning.
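To make the incremental-load and data-mart merge responsibilities above more concrete, here is a minimal, hypothetical PySpark sketch of the kind of Databricks notebook logic involved. The table names (mart.sales_fact), the ADLS Gen2 path, the order_id key, and the load_date watermark are illustrative assumptions, not part of this posting.

```python
# Minimal sketch (illustrative names only): incremental load into a data mart on Databricks.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read only newly landed rows from ADLS Gen2 (path and watermark value are assumptions).
source_df = (
    spark.read.format("parquet")
    .load("abfss://raw@<storage-account>.dfs.core.windows.net/sales/")
    .where(F.col("load_date") > F.lit("2024-01-01"))
)

# Light cleansing and business logic before merging.
clean_df = (
    source_df.dropDuplicates(["order_id"])
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)

# Upsert the increment into the data mart table.
mart = DeltaTable.forName(spark, "mart.sales_fact")
(
    mart.alias("t")
    .merge(clean_df.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

The Delta Lake MERGE shown here is a common upsert pattern: only new or changed source rows are applied, which keeps data mart tables consistent across repeated incremental runs.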
Talk to us
Feel free to call, email, or reach out to us on our social media accounts.
Email: info@antaltechjobs.in