Overview
- Azure Data Engineer
- PySpark, Databricks
- SQL
Location: Hyderabad (Remote)
Division: Banking & Cybersecurity Metrics (3+ Years Experience)
Role Summary
The Data Engineer will support the development and implementation of robust data pipelines that enable data quality (DQ) rule execution, monitoring, and compliance reporting for cybersecurity metrics. The role demands hands-on experience with data engineering tools and platforms (Databricks, PySpark, SQL), with a focus on supporting BCBS 239-aligned initiatives under the DUSE/DMOV frameworks.
Key Responsibilities
- Build, test, and deploy scalable DQ rule execution pipelines for cyber risk metrics.
- Design and automate ETL/ELT jobs to integrate BDEs, DDTs, and metadata from various data sources.
- Develop integration workflows to stitch technical data fields with BDEs for metric lineage and traceability (see the first sketch after this list).
- Support implementation of approximately 400 DQ rules in Databricks and enable exception logging to the Aurora platform (see the second sketch after this list).
- Ensure data availability, feed integrity, and SLA adherence in collaboration with ITSO and business teams.
- Optimize performance and scalability of rule execution and exception handling workflows.
- Collaborate with Data Architects, DQ Analysts, and Cyber SMEs to translate compliance requirements into engineering solutions.
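To illustrate what the stitching responsibility above can look like in practice, here is a minimal PySpark sketch, not the team's actual implementation: the source table, the BDE mapping, and every column name and id below are invented for illustration.

```python
# Minimal sketch (illustrative only) of stitching technical data fields
# to BDEs for lineage and traceability. The mapping table, column names,
# and BDE ids below are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bde-stitch-sketch").getOrCreate()

# Technical fields as they arrive from a hypothetical source system.
technical = spark.createDataFrame(
    [("src_a.patch_pct", 0.97), ("src_a.vuln_cnt", 12.0)],
    ["technical_field", "value"],
)

# Hypothetical metadata mapping each technical field to a BDE.
bde_map = spark.createDataFrame(
    [("src_a.patch_pct", "BDE-0101", "Patch Coverage"),
     ("src_a.vuln_cnt", "BDE-0207", "Open Vulnerabilities")],
    ["technical_field", "bde_id", "bde_name"],
)

# Left join ("stitch") so every metric value carries its BDE,
# preserving lineage even for unmapped fields.
stitched = technical.join(bde_map, on="technical_field", how="left")
stitched.show(truncate=False)
```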
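And a similarly hedged sketch of the DQ rule execution and exception handling flow: a single completeness rule run in PySpark. The rule id, table, and columns are assumptions; in a real pipeline the exceptions DataFrame would be written to the Aurora logging platform (for example over JDBC) rather than displayed.

```python
# Minimal sketch (illustrative only) of executing one DQ rule and
# collecting exceptions. Rule id, table, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-rule-sketch").getOrCreate()

# Hypothetical feed of cyber risk metric values.
metrics = spark.createDataFrame(
    [("host-01", "patch_coverage", 0.97),
     ("host-02", "patch_coverage", None)],
    ["asset_id", "metric_name", "metric_value"],
)

# Example completeness rule: metric_value must be present.
exceptions = (
    metrics.filter(F.col("metric_value").isNull())
    .withColumn("dq_rule_id", F.lit("DQ-COMPLETENESS-001"))
    .withColumn("run_ts", F.current_timestamp())
)

# In production these rows would be logged to the exception platform
# (e.g., Aurora via a JDBC write); here we simply display them.
exceptions.show(truncate=False)
```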
Required Skills & Experience
- 3+ years of experience in data engineering within the banking or financial services sector.
- Strong proficiency in Databricks, PySpark, and SQL for large-scale data handling.
- Solid understanding of data quality, governance, and BCBS 239/DQ control compliance.
- Experience with ETL/ELT pipeline development, exception logging systems (e.g., Aurora), and distributed data processing.
- Familiarity with data risk metrics, DUSE/DMOV controls, and data stitching practices.
- Ability to work in an Agile POD structure with cross-functional teams and stakeholders.