Faridabad, Haryana, India
Information Technology
Full-Time
DHI Solutions
Description
Job Title: Senior Data Engineer
Location: Bangalore (Work From Office)
Experience: 6+ Years
About The Role
We are seeking an experienced Senior Big Data Engineer to join our team in Bangalore. This role requires hands-on experience in building, orchestrating, and deploying large-scale data pipelines using modern Big Data frameworks and containerized environments. The ideal candidate should have strong expertise in Scala, Spark, Kubernetes, and distributed computing concepts.
Key Responsibilities
- Design, develop, and optimize data pipelines and workflows for large-scale data processing.
- Implement and manage Big Data orchestration tools such as Airflow, Spark on Kubernetes, YARN, and Oozie.
- Work extensively with Hadoop, Kafka, Spark, and Spark Structured Streaming to process real-time and batch data.
- Ensure adherence to SOLID and DRY principles, delivering maintainable and scalable software architecture.
- Develop high-quality code using Scala (Functional Programming paradigm) with case classes and advanced data structures.
- Create and maintain automated frameworks for unit and integration testing to ensure system reliability.
- Deploy and manage Spark workloads on Kubernetes using Docker and Helm.
- Collaborate with cross-functional teams to ensure smooth data flow, performance tuning, and best engineering practices.
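As a flavor of the Scala functional-programming and unit-testing expectations above, here is a minimal, hypothetical sketch; the `SensorReading` model, threshold, and device names are illustrative assumptions, not part of any actual codebase for this role:

```scala
// Minimal functional-style Scala sketch: immutable case class models plus a
// pure transformation, the kind of unit-testable building block a pipeline uses.
case class SensorReading(deviceId: String, celsius: Double)
case class Alert(deviceId: String, message: String)

object Pipeline {
  // Pure function: no side effects, so it is trivial to cover with unit tests.
  def toAlerts(readings: Seq[SensorReading], threshold: Double): Seq[Alert] =
    readings
      .filter(_.celsius > threshold)
      .map(r => Alert(r.deviceId, f"temperature ${r.celsius}%.1f exceeds $threshold%.1f"))
}

object Demo extends App {
  val readings = Seq(SensorReading("dev-1", 71.5), SensorReading("dev-2", 42.0))
  val alerts   = Pipeline.toAlerts(readings, threshold = 60.0)
  println(alerts.map(_.deviceId).mkString(","))  // prints "dev-1"
}
```

Because the transformation is a pure function over case classes, it can be exercised in unit tests without standing up a Spark context.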
Required Skills
- Strong hands-on experience with Big Data orchestration: Airflow, Spark on Kubernetes, YARN, Oozie.
- Expertise in Big Data processing systems: Hadoop, Kafka, Spark, Spark Structured Streaming.
- Solid understanding and practical application of Software Architecture principles (SOLID, DRY).
- Advanced proficiency in Scala, particularly Functional Programming, case classes, and complex data structures.
- Proven experience building automated testing frameworks (unit & integration testing).
- Strong experience with Kubernetes, Docker, Helm, and cloud-native deployment workflows.
- Experience deploying and optimizing Spark applications on Kubernetes clusters.
- Experience with CI/CD pipelines and monitoring tools.
- Exposure to cloud platforms (AWS / GCP / Azure).
- Ability to troubleshoot distributed systems and performance bottlenecks.
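To illustrate the Spark Structured Streaming and Kafka items above, a skeleton job might look like the following; the broker address, topic name, and checkpoint path are placeholder assumptions, and the job would normally be submitted to a cluster (e.g. via `spark-submit` with a `k8s://` master) rather than run standalone:

```scala
import org.apache.spark.sql.SparkSession

object StreamingJob {
  def main(args: Array[String]): Unit = {
    // When submitted with a k8s:// master, the session picks up Kubernetes
    // executor settings from the spark-submit configuration.
    val spark = SparkSession.builder()
      .appName("kafka-streaming-sketch")
      .getOrCreate()

    // Read a Kafka topic as an unbounded streaming DataFrame
    // (broker and topic names are assumptions for this sketch).
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "kafka:9092")
      .option("subscribe", "events")
      .load()

    // Kafka keys/values arrive as binary; cast to string before parsing.
    val values = raw.selectExpr("CAST(value AS STRING) AS json")

    // Write to the console sink for demonstration; a real pipeline would
    // target a durable sink and a shared checkpoint location.
    val query = values.writeStream
      .format("console")
      .option("checkpointLocation", "/tmp/checkpoints/events")
      .start()

    query.awaitTermination()
  }
}
```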
Talk to us
Feel free to call, email, or reach us on our social media accounts.
Email: info@antaltechjobs.in