Chennai, Tamil Nadu, India
Information Technology
Full-Time
Angel and Genie
Overview
We are seeking an experienced Data Engineer to join our data platform team. You will be responsible for building, optimizing, and maintaining scalable data pipelines and architectures to support advanced analytics, machine learning, and business intelligence initiatives.
This is a key role in enabling data-driven decision-making across the organization.
Key Responsibilities
- Design, build, and maintain robust ETL/ELT pipelines for structured and unstructured data
- Develop and optimize data models, warehouses, and lakes to support business and analytics use cases
- Integrate data from multiple sources including APIs, databases, flat files, and real-time streaming platforms
- Implement data quality checks, validation rules, and monitoring for pipeline reliability
- Collaborate with data scientists, analysts, and business stakeholders to gather requirements and deliver clean, accessible datasets
- Tune performance of queries and pipelines, ensuring minimal latency and cost efficiency
- Work with cloud platforms (e.g., AWS, Azure, GCP) for storage, compute, and orchestration
- Automate workflows using tools like Airflow, Luigi, or native cloud schedulers
- Ensure data governance, compliance, and security best practices
- Write and maintain clear technical documentation
Requirements
- 5 years of hands-on experience in data engineering, ETL development, or data integration
- Strong programming skills in Python, SQL, or Scala
- Experience with big data tools (e.g., Spark, Hadoop, Hive, Kafka)
- Solid understanding of relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Cassandra)
- Proficient in building pipelines using data orchestration tools (Airflow, Prefect, etc.)
- Familiarity with cloud services (AWS S3, Glue, Redshift; GCP BigQuery; Azure Data Factory, Synapse)
- Good understanding of data warehousing concepts and dimensional modeling
- Experience in handling large-scale data systems and performance tuning
- Knowledge of CI/CD pipelines, version control (Git), and DevOps practices
- Experience with real-time data processing (Kafka Streams, Flink, etc.)
- Familiarity with dbt (data build tool) for transformation pipelines
- Exposure to DataOps, MLOps, or analytics platforms like Looker, Tableau, Power BI
- Understanding of data privacy regulations (GDPR, HIPAA, etc.)
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field
Talk to us
Feel free to call, email, or hit us up on our social media accounts.
Email
info@antaltechjobs.in