Hyderabad, Telangana, India
Information Technology
Full-Time
Accenture in India
Overview
Project Role : Data Platform Architect
Project Role Description : Architects the data platform blueprint and implements the design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills : Databricks Unified Data Analytics Platform
Good to have skills : NA
Minimum Experience Required : 5 years
Educational Qualification : 15 years of full-time education
Summary:
We are looking for a highly skilled Senior Databricks Developer with extensive experience building and managing modern data platforms on Azure using Lakehouse architecture. The ideal candidate will have strong hands-on experience with PySpark, SQL, and Azure Data Services, and a proven track record of developing scalable, efficient data pipelines. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure seamless integration between systems and data models, while addressing any challenges that arise during implementation. You will engage with stakeholders to gather requirements and provide insights that shape the overall architecture of the data platform, ensuring it meets the organization's needs effectively.

Roles & Responsibilities:
- Design, build, and optimize scalable data pipelines using Databricks, PySpark, and SQL on Azure.
- Implement Lakehouse architecture for structured data ingestion, processing, and storage.
- Build and manage Delta Lake tables, including schema evolution and versioning.
- Work with Azure Data Lake Storage (ADLS), Azure Data Factory (ADF), and Azure Synapse for data integration and transformation.
- Collaborate with data architects, analysts, and business teams to understand requirements and design efficient data solutions.
- Optimize the performance of large-scale data pipelines and troubleshoot data quality and latency issues.
- Contribute to best practices for coding, testing, and data engineering workflows.
- Document technical solutions and maintain code repositories.
- Act as a subject matter expert (SME).
- Develop and maintain documentation for the data platform architecture.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in the Databricks Unified Data Analytics Platform.
- Strong hands-on experience with PySpark and advanced SQL for large-scale data processing.
- Deep expertise in the Databricks platform, including notebooks, jobs, and Delta Lake.
- Solid experience with Azure cloud services: ADLS Gen2, ADF, Azure Synapse, Key Vault, etc.
- Knowledge of Lakehouse architecture concepts, implementation, and governance.
- Experience with version control tools such as Git and with CI/CD pipelines.
- Excellent problem-solving and debugging skills.
- Strong communication and collaboration abilities across cross-functional teams.
- Strong understanding of data integration techniques and best practices.
- Experience with cloud-based data solutions and architectures.
- Familiarity with data governance frameworks and compliance requirements.
- Ability to design scalable and efficient data pipelines.

Good to Have:
- Experience in data quality checks and validation frameworks.
- Exposure to DevOps and Infrastructure as Code (IaC) in Azure environments.
- Familiarity with data governance tools like Unity Catalog or Azure Purview.
- Knowledge of Delta Live Tables (DLT) is a plus.

Additional Information:
- The candidate should have a minimum of 5 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.