Overview
Experience: 7+ years
CTC: ₹28–30 LPA
Location: Chennai, TN / Hyderabad, TG
Work Mode: Hybrid
Shift: 2 PM – 11 PM
Role Summary:
· This role will be instrumental in building and maintaining robust, scalable, and reliable data pipelines using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink.
· The ideal candidate will have a strong understanding of data streaming concepts, experience with real-time data processing, and a passion for building high-performance data solutions.
· This role requires excellent analytical skills, attention to detail, and the ability to work collaboratively in a fast-paced environment.
Essential Responsibilities
· Design and develop data pipelines for real-time and batch data ingestion and processing using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink.
· Build and configure Kafka Connectors to ingest data from various sources (databases, APIs, message queues, etc.) into Kafka.
· Develop Flink applications for complex event processing, stream enrichment, and real-time analytics.
· Develop and optimize ksqlDB queries for real-time data transformations, aggregations, and filtering.
· Implement data quality checks and monitoring to ensure data accuracy and reliability throughout the pipeline.
· Monitor and troubleshoot data pipeline performance, identify bottlenecks, and implement optimizations.
· Automate data pipeline deployment, monitoring, and maintenance tasks.
· Stay up-to-date with the latest advancements in data streaming technologies and best practices.
· Contribute to the development of data engineering standards and best practices within the organization.
· Participate in code reviews and contribute to a collaborative and supportive team environment.
· Work closely with architects and tech leads in India and the US to create POCs and MVPs.
· Provide regular updates on tasks, status, and risks to the project manager.
The experience we are looking to add to our team
Required
· Bachelor’s degree or higher from a reputed university
· 8 to 10 years of total experience, with the majority related to ETL/ELT, big data, Kafka, etc.
· Proficiency in developing Flink applications for stream processing and real-time analytics.
· Strong understanding of data streaming concepts and architectures.
· Extensive experience with Confluent Kafka, including Kafka Brokers, Producers, Consumers, and Schema Registry.
· Hands-on experience with ksqlDB for real-time data transformations and stream processing.
· Experience with Kafka Connect and building custom connectors.
· Extensive experience implementing large-scale data ingestion and curation solutions.
· Strong hands-on experience with the big data technology stack on any cloud platform.
· Excellent problem-solving, analytical, and communication skills.
· Ability to work independently and as part of a team
Good to have
· Experience in Google Cloud
· Healthcare industry experience
· Experience in Agile
Job Type: Full-time
Pay: ₹2,800,000.00 - ₹3,000,000.00 per year
Benefits:
- Health insurance
- Provident Fund
Schedule:
- 2 PM – 11 PM shift
Work Location: Hybrid (Chennai / Hyderabad)