Overview
About Exponentia.ai
Exponentia.ai is a fast-growing AI-first technology services company, partnering with enterprises to shape and accelerate their journey to AI maturity. With a presence across the US, UK, UAE, India, and Singapore, we bring together deep domain knowledge, cloud-scale engineering, and cutting-edge artificial intelligence to help our clients transform into agile, insight-driven organizations.
We are proud partners of global technology leaders such as Databricks, Microsoft, AWS, and Qlik, and have been consistently recognized for innovation, delivery excellence, and trusted advisory.
Awards & Recognitions
- Innovation Partner of the Year – Databricks, 2024
- Digital Impact Award (TMT Sector) – UK, 2024
- Rising Star – APJ Databricks Partner Awards, 2023
- Qlik’s Most Enabled Partner – APAC
With a team of 450+ AI engineers, data scientists, and consultants, we are on a mission to redefine how work is done by combining human intelligence with AI agents to deliver exponential outcomes.
Learn more: www.exponentia.ai
About The Role
We are looking for a skilled Data Engineer with strong hands‑on experience in Azure Databricks, PySpark, and Azure Data Factory (ADF) to design, build, and optimize scalable data pipelines. The ideal candidate will work closely with data architects, analysts, and business stakeholders to deliver robust data solutions on Azure.
Key Responsibilities
- Design, develop, and maintain ETL/ELT pipelines using Azure Databricks and ADF.
- Write efficient and scalable PySpark code for data processing, transformation, and validation.
- Build and optimize Delta Lake tables, data models, and workflows inside Databricks.
- Integrate multiple data sources including APIs, databases, and cloud storage into data pipelines.
- Implement best practices for data quality, governance, and documentation.
- Monitor and troubleshoot performance issues in pipelines and clusters.
- Collaborate with analytics and business teams to understand data requirements and deliver solutions.
- Implement CI/CD processes for data pipeline deployment, preferably using Azure DevOps.
- Ensure security and compliance across data solutions in line with Azure standards.