Gurugram, Haryana, India
Information Technology
Full-Time
Consultant
Overview
Key Responsibilities
- Design, develop, and maintain large-scale, high-performance big data applications.
- Work with Hadoop, Hive, and Spark (Scala) to process and analyze large datasets.
- Build efficient, reusable, and reliable data solutions to support business needs.
- Collaborate with cross-functional teams (data scientists, analysts, product teams) to define data requirements.
- Develop and optimize SQL queries (preferably PostgreSQL) for data analysis and reporting.
- Write unit and integration tests using Scalatest to ensure code quality and reliability.
- Manage code versioning and collaboration using Git.
- Contribute to CI/CD pipelines (experience with Jenkins is a plus).
- Ensure data security, compliance, and governance standards are met.
Requirements
- 5–11 years of hands-on experience in big data engineering.
- Strong expertise in Hadoop, Hive, and Spark (Scala).
- Solid experience with RDBMS concepts and at least one SQL database (PostgreSQL preferred).
- Experience in writing unit and integration tests using Scalatest.
- Proficiency in using Git for version control.
- Experience with CI/CD pipelines; Jenkins knowledge is an added advantage.
- Strong problem-solving and analytical skills with the ability to handle complex data challenges.
- Excellent communication and collaboration skills.
Email
info@antaltechjobs.in