Overview
About the job:
Experience: 6+ years
Location: Bangalore (Hybrid)
Mandatory skills: Big Data, Spark, Scala, Python/PySpark
Job Description:
6+ years of experience in application development with Spark and Scala
Good hands-on experience working with the Hadoop ecosystem (HDFS, Hive, Spark)
Good understanding of Hadoop file formats
Good expertise in Hive/HDFS, PySpark, Spark, Jupyter Notebook, Talend (ELT), Control-M, Unix shell scripting, Python, CI/CD, Git/Jira, Hadoop, TOM, Oozie, Snowflake
Expertise in implementing data quality controls
Ability to interpret the Spark UI, identify bottlenecks in Spark jobs, and propose optimal solutions (see the sketch after this list)
Agile:
Exposure to Agile methodology and processes
Good to have: exposure to CI/CD processes
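For illustration only, here is a minimal Scala sketch of the kind of Spark UI-driven tuning this role involves. The table name "sales", the column names, and the partition count are hypothetical placeholders, not part of the role's actual codebase.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Sketch: mitigating a shuffle bottleneck spotted in the Spark UI.
// "sales", "customer_id", and "amount" are hypothetical placeholders.
object ShuffleTuningSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("shuffle-tuning-sketch")
      // Fewer shuffle partitions for a modest data volume; the default of 200
      // often shows up in the Spark UI as many tiny, short-lived tasks.
      .config("spark.sql.shuffle.partitions", "64")
      .getOrCreate()

    val sales = spark.read.table("sales")

    // A wide aggregation that triggers a shuffle; a few straggler tasks in
    // the UI's stage view would suggest a skewed grouping key.
    val perCustomer = sales
      .groupBy("customer_id")
      .agg(sum("amount").as("total_amount"))

    // Cache only if the result is reused downstream; otherwise the
    // Storage tab in the UI just shows wasted memory.
    perCustomer.cache()
    perCustomer.write.mode("overwrite").parquet("/tmp/per_customer")

    spark.stop()
  }
}
```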
Others:
Ability to understand complex business rules and translate them into technical specifications and designs
Write highly efficient, optimized code that scales easily
Adherence to coding, quality, and security standards
Effective verbal and written communication skills to work closely with all stakeholders
Ability to convince stakeholders of the proposed solutions
MANDATORY
Spark/Scala: Proficiency in coding and in transforming raw requirements into technical solutions (a minimal sketch follows this list)
Hadoop: Experience with the HDFS infrastructure and platform; proficiency with HDFS commands
SQL: Strong skills in data analysis, including querying and data retrieval
GitHub: Expertise in code management throughout the development lifecycle
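As a minimal sketch of turning a raw requirement into a Spark Scala solution, the example below implements a hypothetical requirement, "report daily order totals per region". The input path, schema, and output location are assumptions for illustration.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Sketch: the requirement "report daily order totals per region" as Spark Scala.
// Paths and column names (order_ts, region, amount) are illustrative assumptions.
object DailyOrderTotals {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-order-totals")
      .getOrCreate()

    // Assumed input: Parquet files with order_ts, region, and amount columns.
    val orders = spark.read.parquet("/data/orders")

    val report = orders
      .withColumn("order_date", to_date(col("order_ts")))
      .groupBy("order_date", "region")
      .agg(sum("amount").as("total_amount"))
      .orderBy("order_date", "region")

    // Partition the output by date so downstream jobs can prune efficiently.
    report.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("/data/reports/daily_order_totals")

    spark.stop()
  }
}
```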
Who can apply:
Only those candidates can apply who:
- have a minimum of 6 years of experience
- are Computer Science Engineering students
Salary:
₹18,00,000 - ₹23,00,000 per year
Experience:
6+ years
Deadline:
2025-11-22 23:59:59