Thiruvananthapuram, Kerala, India
Information Technology
Full-Time
Accenture in India
Overview
Job Description
Job Title: Senior Data Engineer – Python, PySpark
Location: Kochi, Coimbatore, Trivandrum
Must have skills: Python, PySpark
Good to have skills: Redshift
Job Summary:
We are seeking a highly skilled and experienced Senior Data Engineer to join our growing Data and Analytics team. The ideal candidate will have deep expertise in Databricks and cloud data warehousing, with a proven track record of designing and building scalable data pipelines, optimizing data architectures, and enabling robust analytics capabilities. This role involves working collaboratively with cross-functional teams to ensure the organization leverages data as a strategic asset. Your responsibilities will include:
Roles & Responsibilities
- Design, build, and maintain scalable data pipelines and ETL processes using Databricks and other modern tools.
- Architect, implement, and manage cloud-based data warehousing solutions on Databricks (Lakehouse Architecture)
- Develop and maintain optimized data lake architectures to support advanced analytics and machine learning use cases.
- Collaborate with stakeholders to gather requirements, design solutions, and ensure high-quality data delivery.
- Optimize data pipelines for performance and cost efficiency.
- Implement and enforce best practices for data governance, access control, security, and compliance in the cloud.
- Monitor and troubleshoot data pipelines to ensure reliability and accuracy.
- Lead and mentor junior engineers, fostering a culture of continuous learning and innovation.
- Excellent communication skills
- Ability to work independently and collaboratively with clients based in Western Europe
- Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals
- Solving complex data problems to deliver insights that help the business achieve its goals.
- Sourcing data (structured and unstructured) from various touchpoints, then formatting and organizing it into an analyzable format.
- Creating data products for analytics team members to improve productivity
- Calling AI services such as vision and translation to generate outcomes that can be used in further steps along the pipeline.
- Fostering a culture of sharing, re-use, design and operational efficiency of data and analytical solutions
- Preparing data to create a unified database and build tracking solutions ensuring data quality
- Creating production-grade analytical assets deployed using the guiding principles of CI/CD.
- Expertise in at least two of: Python, Scala, PySpark, PyTorch, JavaScript
- Extensive experience in data analysis (big data, Apache Spark environments), data libraries (e.g. Pandas, SciPy, TensorFlow, Keras), and SQL, with 3-4 years of hands-on experience working on these technologies.
- Experience in one of the major BI tools, such as Tableau, Power BI, or Looker.
- Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs.
- Worked extensively with Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and the Snowflake cloud data warehouse.
- Experience working with cloud data warehouses such as Redshift or Synapse
- Certification in any one of the following, or equivalent:
- AWS: AWS Certified Data Analytics – Specialty
- Azure: Microsoft Certified: Azure Data Scientist Associate
- Snowflake: SnowPro Core / Data Engineer
- Databricks: Data Engineering
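As a small illustration of the pipeline work described above (sourcing mixed structured and unstructured data and organizing it into an analyzable format), the sketch below uses plain Python for portability; the field names and records are hypothetical examples, not a real schema:

```python
import json

def normalize(record):
    """Coerce one raw record (JSON string or dict) into a flat, analyzable row.
    Field names here are hypothetical illustrations, not a real schema."""
    if isinstance(record, str):  # unstructured input: raw JSON text
        record = json.loads(record)
    return {
        "id": str(record.get("id", "")),
        "amount": float(record.get("amount", 0.0)),
        "country": str(record.get("country", "unknown")).lower(),
    }

def run_pipeline(raw_records):
    """Minimal ETL step: extract, normalize, and keep only valid rows."""
    rows = [normalize(r) for r in raw_records]
    return [row for row in rows if row["id"]]  # data-quality gate: drop rows lacking an id

raw = [
    '{"id": 1, "amount": "12.5", "country": "DE"}',  # unstructured (JSON string)
    {"id": 2, "amount": 3, "country": "FR"},         # structured (dict)
    {"amount": 9},                                   # invalid: no id, filtered out
]
print(run_pipeline(raw))
```

In a production setting the same extract-normalize-filter shape would typically be expressed as PySpark transformations over a DataFrame rather than Python lists.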
Experience: 5-8 years of experience is required
Educational Qualification: Graduation