
Overview
Bengaluru, Karnataka, India
Qualifications
- Strong experience working with the Apache Spark framework, including a solid grasp of core concepts, performance optimizations, and industry best practices
- Proficient in PySpark with hands-on coding experience; familiarity with unit testing, object-oriented programming (OOP) principles, and software design patterns
- Experience with code deployment and associated processes
- Proven ability to write complex SQL queries to extract business-critical insights
- Hands-on experience in streaming data processing
- Familiarity with machine learning concepts is an added advantage
- Experience with NoSQL databases
- Good understanding of Test-Driven Development (TDD) methodologies
- Demonstrated flexibility and eagerness to learn new technologies
Skills Required
Big Data, PySpark, Python
Role
- Design and implement solutions to problems arising from large-scale data processing
- Attend and drive architectural, design, and status calls with multiple stakeholders
- Take end-to-end ownership of all assigned tasks, including development, testing, deployment, and support
- Design, build, and maintain efficient, reusable, and reliable code
- Test implementations, troubleshoot issues, and correct problems
- Work effectively both as an individual contributor and as part of a team
- Ensure high-quality software development with complete documentation and traceability
- Fulfil organizational responsibilities, such as sharing knowledge and experience with other teams and groups
Experience
5 to 8 years
Job Reference Number
13207