Ahmedabad, Gujarat, India
Information Technology
Full-Time
Blend
Overview
Company Description
Blend360 is a data and AI services company specializing in data engineering, data science, MLOps, and governance to build scalable analytics solutions. It partners with enterprise and Fortune 1000 clients across industries including financial services, healthcare, retail, technology, and hospitality to drive data-driven decision making. Headquartered in Columbia, Maryland, the company is recognized for rapid growth and global delivery of AI solutions through the integration of people, data, and technology.
We are seeking a hands-on Data Engineer with deep expertise in distributed systems, ETL/ELT development, and enterprise-grade database management. The engineer will design, implement, and optimize ingestion, transformation, and storage workflows to support the MMO platform. The role requires technical fluency across big data frameworks (HDFS, Hive, PySpark), orchestration platforms (NiFi), and relational systems (Postgres), combined with strong coding skills in Python and SQL for automation, custom transformations, and operational reliability.
Job Description
We are implementing a Media Mix Optimization (MMO) platform designed to analyze and optimize marketing investments across multiple channels. This initiative requires a robust on-premises data infrastructure to support distributed computing, large-scale data ingestion, and advanced analytics. The Data Engineer will be responsible for building and maintaining resilient pipelines and data systems that feed into MMO models, ensuring data quality, governance, and availability for Data Science and BI teams. The environment integrates HDFS for distributed storage, Apache NiFi for orchestration, Hive and PySpark for distributed processing, and Postgres for structured data management.
This role is central to enabling seamless integration of massive datasets from disparate sources (media, campaign, transaction, customer interaction, etc.), standardizing data, and providing reliable foundations for advanced econometric modeling and insights.
Responsibilities
Data Pipeline Development & Orchestration
- Design, build, and optimize scalable data pipelines in Apache NiFi to ingest and standardize data from disparate sources.
- Ensure pipelines meet low-latency and high-throughput requirements for distributed processing.
Data Storage & Processing
- Architect and manage datasets on HDFS to support high-volume, distributed storage and retrieval.
- Develop distributed processing workflows in PySpark and Hive to transform petabyte-level datasets.
- Implement partitioning, bucketing, and indexing strategies to optimize query performance.
Database Engineering & Management
- Maintain and tune Postgres databases for high availability, integrity, and performance.
- Write advanced SQL queries for ETL, analysis, and integration with downstream systems.
Collaboration & Integration
- Partner with Data Scientists to deliver clean, reliable datasets for MMO modeling.
- Work with BI engineers to ensure data pipelines align with reporting requirements.
Monitoring & Reliability Engineering
- Implement monitoring, logging, and alerting frameworks to track pipeline health and performance.
- Troubleshoot and resolve issues in ingestion, transformation, and storage workflows.
Data Governance & Compliance
- Enforce standards for data quality, lineage, and security across all pipelines and datasets.
- Ensure compliance with internal governance and external regulatory requirements.
Documentation & Knowledge Transfer
- Develop and maintain comprehensive technical documentation for pipelines, data models, and workflows.
- Provide knowledge sharing and onboarding support for cross-functional teams.
Qualifications
- Bachelor’s degree in Computer Science, Information Technology, or related field (Master’s preferred).
- Proven experience as a Data Engineer with expertise in HDFS, Apache NiFi, Hive, PySpark, Postgres, Python, and SQL.
- Strong background in ETL/ELT design, distributed processing, and relational database management.
- Experience with on-premises big data ecosystems supporting distributed computing.
- Solid debugging, optimization, and performance tuning skills.
- Ability to work in agile environments, collaborating with multi-disciplinary teams.
- Strong communication skills for cross-functional technical discussions.
- Familiarity with data governance frameworks, lineage tracking, and data cataloging tools.
- Knowledge of security standards, encryption, and access control in on-premises environments.
- Prior experience with Media Mix Modeling (MMM/MMO) or marketing analytics projects.
- Exposure to workflow schedulers (Airflow, Oozie, or similar).
- Proficiency in developing automation scripts and frameworks in Python for operational reliability.