3 Days ago
IN-Senior Associate_Data Engineer-Databricks-- Data and Analytics_Advisory_Pan India
Chennai, Tamil Nadu, India
Information Technology
Full-Time
PwC India
Overview
Line of Service
Advisory
Industry/Sector
Not Applicable
Specialism
Data, Analytics & AI
Management Level
Senior Associate
Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth.
In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.
Job Summary
Experience: 3-8 years in Data Engineering
We are looking for a skilled and motivated Data Engineer with strong hands-on experience in Databricks, PySpark, and cloud-based data platforms. The ideal candidate will be responsible for designing and developing scalable data pipelines and solutions that support advanced analytics and business intelligence initiatives. This role requires a deep understanding of cloud data lakes, data warehousing, and modern data engineering practices.
Key Responsibilities
- Design and develop data pipelines using Databricks and PySpark to ingest, process, and transform large volumes of data.
- Implement ETL/ELT workflows to move data from source systems to Data Warehouses, Data Lakes, and Lake Houses using cloud-native tools.
- Work with structured and unstructured data stored in AWS or Azure Data Lakes.
- Apply strong SQL and Python skills to manipulate and analyze data efficiently.
- Collaborate with cross-functional teams to deliver cloud-based serverless data solutions.
- Design innovative data solutions that address complex business requirements and support data-driven decision-making.
- Maintain documentation and enforce best practices for data architecture, governance, and performance optimization.
Requirements
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 4+ years of hands-on project experience in Databricks, PySpark, and SQL on any cloud platform (AWS or Azure).
- Strong programming skills in Python.
- In-depth knowledge of Data Warehousing, Database technologies, and Big Data ecosystem.
- Proven experience working with cloud-based data lakes (Azure Data Lake Storage Gen2 or AWS S3).
- Familiarity with DevOps practices, including CI/CD, Infrastructure as Code (IaC), and automation, is a plus.
Skill Sets: PySpark, Databricks
Preferred Skill Sets: PySpark, Databricks
Years of Experience Required: 3-8 years
Education Qualification: BE/B.Tech, ME/M.Tech, MBA, MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Engineering, Bachelor of Technology, Master of Business Administration
Degrees/Field Of Study Preferred
Certifications (if blank, certifications not specified)
Required Skills
PySpark
Optional Skills
Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline (+ 28 more)
Desired Languages (If blank, desired languages not specified)
Travel Requirements
Not Specified
Available for Work Visa Sponsorship?
No
Government Clearance Required?
No
Job Posting End Date