Overview
Role - Data Engineer
Location - Hyderabad, India (Hybrid)
Responsibilities:
● Design, build, and optimize data pipelines to ingest, process, transform, and load data from various sources into our data platform
● Implement and maintain ETL workflows using tools like Debezium, Kafka, Airflow, and Jenkins to ensure reliable and timely data processing
● Develop and optimize SQL and NoSQL database schemas, queries, and stored procedures for efficient data retrieval and processing
● Work with both relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, DocumentDB) to build scalable data solutions
● Design and implement data warehouse solutions that support analytical needs and machine learning applications
● Collaborate with data scientists and ML engineers to prepare data for AI/ML models and implement data-driven features
● Implement data quality checks, monitoring, and alerting to ensure data accuracy and reliability
● Optimize query performance across various database systems through indexing, partitioning, and query refactoring
● Develop and maintain documentation for data models, pipelines, and processes
● Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs
● Stay current with emerging technologies and best practices in data engineering
Requirements:
● 5+ years of experience in data engineering or related roles, with a proven track record of building data pipelines and infrastructure; experience with enterprise SaaS products is mandatory
● Strong proficiency in SQL and experience with relational databases like MySQL and PostgreSQL
● Hands-on experience with NoSQL databases such as MongoDB or AWS DocumentDB
● Expertise in designing, implementing, and optimizing ETL processes using tools like Kafka, Debezium, Airflow, or similar technologies
● Experience with data warehousing concepts and technologies
● Solid understanding of data modeling principles and best practices for both operational and analytical systems
● Proven ability to optimize database performance, including query optimization, indexing strategies, and database tuning
● Experience with AWS data services such as RDS, Redshift, S3, Glue, and Kinesis, as well as the ELK stack
● Proficiency in at least one programming language (Python, Node.js, Java)
● Experience with version control systems (Git) and CI/CD pipelines
● Bachelor's degree in Computer Science, Engineering, or related field
Preferred Qualifications:
● Experience with graph databases (Neo4j, Amazon Neptune)
● Knowledge of big data technologies such as Hadoop, Spark, Hive, and data lake architectures
● Experience working with streaming data technologies and real-time data processing
● Familiarity with data governance and data security best practices
● Experience with containerization technologies (Docker, Kubernetes)
● Understanding of financial back-office operations and FinTech domain
● Experience working in a high-growth startup environment
● Master's degree in Computer Science, Data Engineering, or related field
Job Types: Full-time, Permanent
Pay: ₹1,500,000.00 - ₹2,500,000.00 per year
Schedule:
● Day shift
● Monday to Friday
Work Location: In person