Overview
Job Title: Senior Big Data Engineer
Experience: 6+ Years
Employment Type: Full-Time
Location: Bangalore
Notice Period: Immediate to 15 days preferred
Role Overview
We are looking for an experienced Senior Big Data Engineer to design, build, and maintain scalable big data platforms. The ideal candidate will have strong hands-on expertise in Apache Spark, Scala, and distributed data processing systems, along with experience in both batch and real-time data pipelines. You will work closely with cross-functional teams to deliver high-performance, reliable, and optimized data solutions.
Key Responsibilities
Design, develop, and maintain large-scale Big Data platforms using Apache Spark and Scala.
Build and manage ETL/ELT pipelines to process structured and semi-structured data efficiently.
Implement real-time data processing solutions using Spark Streaming and Kafka.
Apply distributed computing principles to ensure scalability, fault tolerance, and high availability.
Optimize data processing workflows at both system and algorithm levels for performance and cost efficiency.
Develop and maintain unit test cases, perform debugging, and support production issue resolution.
Work with workflow orchestration tools such as Apache Airflow or Oozie for scheduling and monitoring jobs.
Collaborate with product, analytics, and engineering teams in an Agile/Scrum environment.
Use version control systems (Git) and follow standard SDLC and CI/CD practices.
Participate in design discussions, code reviews, and technical troubleshooting.
Required Skills & Qualifications
6+ years of hands-on experience in Big Data engineering using Spark and Scala.
Strong understanding of distributed systems and Big Data technologies such as Hadoop, Spark, and streaming frameworks.
Experience with ETL processes, data modeling, and performance tuning.
Hands-on experience with Spark Streaming and Kafka.
Working knowledge of Apache Airflow or Oozie.
Proficiency in unit testing frameworks and shell scripting.
Solid problem-solving and analytical skills.
Experience working in Agile/Scrum development environments.
Ability to work independently as well as in a team-oriented, fast-paced environment.
Good to Have
Experience with cloud-based Big Data platforms (AWS, Azure, or GCP).
Exposure to CI/CD pipelines and DevOps practices.
Experience in handling high-volume, low-latency data processing systems.
#Hiring #SeniorBigDataEngineer #BigDataJobs #DataEngineering #ApacheSpark #ScalaJobs #KafkaJobs #SparkStreaming #Airflow #HadoopEcosystem #AgileDevelopment #TechCareers #ITJobs #BangaloreJobs #Hybrid #ImmediateJoiner