Overview
Position – Data Engineer
Experience – 6–8 years
Location – Hyderabad, India
Budget – Open; decided based on the interview
Not many job switches preferred
Preferably from a reputed college (e.g., IIT or IIM)
Must have SaaS product experience
MongoDB – Mandatory
Good understanding of database systems – SQL and NoSQL
Must have comprehensive experience in MongoDB or another document database
Responsibilities:
- Design, build, and optimize data pipelines to ingest, process, transform, and load data from various sources into our data platform
- Implement and maintain ETL workflows using tools like Debezium, Kafka, Airflow, and Jenkins to ensure reliable and timely data processing
- Develop and optimize SQL and NoSQL database schemas, queries, and stored procedures for efficient data retrieval and processing
- Work with both relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, DocumentDB) to build scalable data solutions
- Design and implement data warehouse solutions that support analytical needs and machine learning applications
- Collaborate with data scientists and ML engineers to prepare data for AI/ML models and implement data-driven features
- Implement data quality checks, monitoring, and alerting to ensure data accuracy and reliability
- Optimize query performance across various database systems through indexing, partitioning, and query refactoring
- Develop and maintain documentation for data models, pipelines, and processes
- Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs
- Stay current with emerging technologies and best practices in data engineering
Requirements:
- 6+ years of experience in data engineering or related roles, with a proven track record of building data pipelines and infrastructure
- Strong proficiency in SQL and experience with relational databases such as MySQL and PostgreSQL
- Hands-on experience with NoSQL databases such as MongoDB or Amazon DocumentDB
- Expertise in designing, implementing, and optimizing ETL processes using tools like Kafka, Debezium, Airflow, or similar technologies
- Experience with data warehousing concepts and technologies
- Solid understanding of data modeling principles and best practices for both operational and analytical systems
- Proven ability to optimize database performance, including query optimization, indexing strategies, and database tuning
- Experience with AWS data services such as RDS, Redshift, S3, Glue, and Kinesis, as well as the ELK stack
- Proficiency in at least one programming language (Python, Node.js, Java)
- Experience with version control systems (Git) and CI/CD pipelines
- Bachelor's degree in Computer Science, Engineering, or a related field
Preferred Qualifications:
- Experience with graph databases (Neo4j, Amazon Neptune)
- Knowledge of big data technologies such as Hadoop, Spark, Hive, and data lake architectures
- Experience working with streaming data technologies and real-time data processing
- Familiarity with data governance and data security best practices
- Experience with containerization technologies (Docker, Kubernetes)
- Understanding of financial back-office operations and the FinTech domain
- Experience working in a high-growth startup environment
Job Type: Permanent
Pay: ₹862,603.66 - ₹2,376,731.02 per year
Benefits:
- Health insurance
- Provident Fund
Supplemental Pay:
- Performance bonus
- Yearly bonus
Experience:
- ETL: 7 years (Preferred)
- Hadoop: 1 year (Preferred)
Work Location: In person
Application Deadline: 27/07/2025
Expected Start Date: 25/07/2025