Overview
About Metayb
Metayb is a fast-growing digital transformation company with 450+ professionals, helping global enterprises navigate the digital-first era. Our expertise spans Digital Transformation, Cloud & Infrastructure, Data Science, SAP, Workflow Automation, Dashboarding & Visualization, Finance, and Supply Chain solutions. We work with prestigious clients including Tolaram, Kellogg Tolaram, Colgate Tolaram, Arla, Lucky Fibres, Dufil, Multipro, BHN, and more. As we continue to expand into IoT, AI/ML, and Virtual Reality, we are looking for passionate professionals to join our journey.
Role Overview
We are seeking a highly technical Lead Data Engineer with deep hands-on expertise in Microsoft Fabric, SQL, DAX, and Power BI to architect and deliver scalable, enterprise-grade data solutions.
This role requires strong architectural thinking, advanced performance optimization skills, and ownership of end-to-end data platform implementation. The candidate will primarily drive technical excellence while also mentoring a small team of engineers.
Key Responsibilities
🔹 Data Platform Architecture
- Architect and implement enterprise-scale solutions using Microsoft Fabric (Lakehouse, Warehouse, Data Engineering, Data Pipelines)
- Design and implement Medallion Architecture (Bronze/Silver/Gold layers)
- Build scalable ELT/ETL frameworks for batch and near real-time data processing
- Optimize compute, storage, and query performance across Fabric workloads
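To give a flavour of the hands-on work behind these responsibilities, below is a minimal sketch of a Bronze-to-Silver promotion in a Fabric Lakehouse notebook. The table and column names (bronze_sales, silver_sales, order_id, amount) are illustrative placeholders, not part of any existing solution.

```python
# Minimal Bronze -> Silver promotion sketch (hypothetical tables and columns).
# Assumes a Spark environment with Delta support, e.g. a Fabric Data Engineering notebook.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw ingested rows, possibly duplicated and loosely typed.
bronze = spark.read.table("bronze_sales")

# Silver: cast types and deduplicate on the business key.
silver = (
    bronze
    .withColumn("order_date", F.to_date("order_date"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .dropDuplicates(["order_id"])
)

# Overwrite the Silver Delta table; production loads would typically use an incremental MERGE instead.
silver.write.format("delta").mode("overwrite").saveAsTable("silver_sales")
```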
🔹 Advanced SQL & Data Modeling
- Develop complex SQL queries, stored procedures, views, and functions
- Perform advanced query tuning, indexing strategies, and partitioning
- Design robust dimensional data models (Star & Snowflake schemas)
- Implement incremental loads, CDC strategies, and large-volume data handling
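The incremental-load and CDC work above might look like the following sketch, which applies a Delta MERGE upsert from a staging table. The table names, key column, and soft-delete flag are illustrative assumptions; capturing the changed rows into staging is out of scope here.

```python
# Incremental upsert sketch using a Delta MERGE (hypothetical staging/target tables).
# Assumes a Spark environment with Delta tables, e.g. a Fabric Lakehouse notebook.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Only rows changed since the last successful load are assumed to land in staging_sales.
spark.sql("""
    MERGE INTO silver_sales AS tgt
    USING staging_sales AS src
        ON tgt.order_id = src.order_id
    WHEN MATCHED AND src.is_deleted = true THEN DELETE
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```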
🔹 Power BI & DAX Engineering
- Build enterprise semantic models and optimized datasets
- Write complex DAX measures, calculated columns, and performance-optimized expressions
- Implement Row-Level Security (RLS) and data access controls
- Optimize large Power BI models for performance and scalability
- Develop executive dashboards with strong data storytelling principles
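As an illustrative example of working with semantic models programmatically, the sketch below evaluates a DAX query against a published model from a Fabric notebook using the semantic-link (sempy) package. The model name, table, and measure are placeholders, and reliance on sempy is an assumption about tooling rather than a requirement of the role.

```python
# Hypothetical check of a DAX measure against a published semantic model.
# Assumes the semantic-link (sempy) package available in Fabric notebooks;
# "Sales Analytics", 'Date'[Year], and [Total Sales] are placeholders.
import sempy.fabric as fabric

result = fabric.evaluate_dax(
    dataset="Sales Analytics",
    dax_string="""
        EVALUATE
        SUMMARIZECOLUMNS(
            'Date'[Year],
            "Total Sales", [Total Sales]
        )
    """,
)
print(result.head())
```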
🔹 Cloud & Modern Data Engineering Practices
- Work with Azure data services (Data Factory, Synapse, ADLS, Azure SQL)
- Build transformations in Python / PySpark
- Apply Spark-based processing within Fabric
- Implement CI/CD for data pipelines and Power BI deployments
- Manage version control using Git-based repositories
- Integrate APIs and third-party systems
- Apply data governance, lineage, and security best practices
- Handle large-scale datasets (GB–TB scale environments)
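For the API-integration and ingestion items above, a representative sketch is shown below: it pulls JSON from a hypothetical third-party endpoint and lands it as a Bronze Delta table. The endpoint, authentication, and table name are placeholders, and the endpoint is assumed to return a JSON array of records.

```python
# Hypothetical third-party API ingestion into a Bronze Delta table.
# Endpoint URL, auth header, and table name are placeholders.
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

response = requests.get(
    "https://api.example.com/v1/orders",          # placeholder endpoint
    headers={"Authorization": "Bearer <token>"},  # injected from a secret store in practice
    timeout=30,
)
response.raise_for_status()

# Land the raw payload as-is; typing and cleansing happen in the Silver layer.
orders = spark.createDataFrame(response.json())
orders.write.format("delta").mode("append").saveAsTable("bronze_orders")
```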
Required Skills & Experience
- 8+ years of hands-on experience in Data Engineering / BI / Analytics
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
- Deep expertise in Microsoft Fabric ecosystem
- Advanced SQL development and performance optimization
- Strong DAX expertise with complex calculations
- Strong understanding of data warehousing concepts and modeling
- Experience in enterprise-level Power BI implementations
- Strong knowledge of Azure data ecosystem
- Experience with Spark, Python, or PySpark
- CI/CD and DevOps practices for data solutions
- Strong understanding of data security and governance
Leadership Responsibilities
- Mentor team members on best practices and performance tuning
- Drive design discussions and ensure scalable solution delivery
Disclaimer: The job title mentioned in this description is generic and intended for broad categorization purposes. The final designation will be determined based on the candidate’s performance during the interview process, relevant experience, and alignment with the organizational hierarchy.