Bangalore, Karnataka, India
Information Technology
Full-Time
NCS Group
Overview
1 Design and implement data ingestion pipelines from multiple sources using Azure Databricks (see the ingestion sketch after this list)
2 Ensure data pipelines run smoothly and efficiently
3 Develop scalable, reusable frameworks for ingesting data sets
4 Integrate the end-to-end data pipeline, taking data from source systems to target data repositories while ensuring the quality and consistency of the data is maintained at all times
5 Work with event-based/streaming technologies to ingest and process data
6 Work with other members of the project team to support delivery of additional project components (API interfaces, Search)
7 Evaluate the performance and applicability of multiple tools against customer requirements
8 Provide technical advice to the team on cloud and Databricks and be involved in issue resolution
9 Provide on-call and after-hours/weekend support as needed
10 Fulfill service requests related to the Data Analytics platform
11 Lead and drive optimization and continuous-improvement initiatives
12 Play a gatekeeping role and conduct technical reviews of changes as part of release management
13 Understand various data security standards and adhere to the required data security controls in the platform
14 Lead the design, development, and deployment of advanced data pipelines and analytical workflows on the Databricks Lakehouse platform
15 Collaborate with data scientists, engineers, and business stakeholders to build and scale end-to-end data solutions
16 Own architectural decisions and ensure adherence to data governance, security, and compliance requirements
17 Mentor a team of data engineers, providing technical guidance and career development
18 Implement CI/CD practices for data engineering pipelines using tools like Azure DevOps, GitHub Actions, or Jenkins (see the pipeline-test sketch after this list)
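To make items 1 and 5 concrete, here is a minimal sketch of a streaming ingestion pipeline on Azure Databricks using Auto Loader (the cloudFiles source) to land files from cloud storage into a Delta table. The storage paths and table name are hypothetical placeholders, and the snippet assumes a Databricks runtime where `spark` is predefined; it is an illustration of the pattern, not code from the actual platform.

```python
# Ingestion sketch: Auto Loader streams new files from a cloud landing zone
# into a Delta table. All paths and names below are hypothetical examples.

landing_path = "abfss://landing@examplestorage.dfs.core.windows.net/orders/"     # hypothetical source
checkpoint_path = "abfss://meta@examplestorage.dfs.core.windows.net/chk/orders"  # hypothetical checkpoint
target_table = "bronze.orders"                                                   # hypothetical Delta table

# On Databricks, `spark` is provided by the runtime. Auto Loader is selected
# with format("cloudFiles") and tracks schema state at schemaLocation.
raw_stream = (
    spark.readStream
         .format("cloudFiles")
         .option("cloudFiles.format", "json")
         .option("cloudFiles.schemaLocation", checkpoint_path)
         .load(landing_path)
)

# Write the stream into a Delta table. The checkpoint makes the pipeline
# restartable and exactly-once, which is what keeps it running smoothly
# (item 2); availableNow processes the current backlog and then stops.
(
    raw_stream.writeStream
              .format("delta")
              .option("checkpointLocation", checkpoint_path)
              .trigger(availableNow=True)
              .toTable(target_table)
)
```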
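Item 18 names CI/CD tooling without specifying the checks involved; one common building block is unit-testing transformation logic against a local Spark session so the check can run in an Azure DevOps, GitHub Actions, or Jenkins pipeline before deployment. The sketch below assumes that approach; the `dedupe_latest` transform and the test are hypothetical examples, not code from the actual platform.

```python
# CI building block sketch: a pytest unit test for a pipeline transformation,
# runnable on a local Spark session inside any CI runner. The transform is a
# hypothetical example chosen only to illustrate the testing pattern.

from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window


def dedupe_latest(df: DataFrame, key: str, ts: str) -> DataFrame:
    """Keep only the most recent record per key (hypothetical transform)."""
    w = Window.partitionBy(key).orderBy(F.col(ts).desc())
    return (df.withColumn("_rn", F.row_number().over(w))
              .filter(F.col("_rn") == 1)
              .drop("_rn"))


def test_dedupe_latest_keeps_newest_row():
    # A single-core local session is enough for CI-sized fixtures.
    spark = SparkSession.builder.master("local[1]").appName("ci-test").getOrCreate()
    df = spark.createDataFrame(
        [("a", 1, "old"), ("a", 2, "new"), ("b", 1, "only")],
        ["id", "version", "payload"],
    )
    out = dedupe_latest(df, key="id", ts="version")
    rows = {r["id"]: r["payload"] for r in out.collect()}
    assert rows == {"a": "new", "b": "only"}
    spark.stop()
```

In a CI pipeline, a test stage would run `pytest` over such tests, and only on success would a deploy stage push job definitions to the Databricks workspace.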