Hyderabad, Telangana, India
Information Technology
Full-Time
Nineleaps
Responsibilities
- Address and resolve data issues and alerts for maintained tables.
- Monitor database health and resource utilization to prevent outages, including NS Quota and Disk Quota.
- Develop new data flows or ETL pipelines to ingest data from diverse sources.
- Collaborate with stakeholders to understand business and product requirements, and adapt or build systems accordingly.
- Implement best practices and standards for data modeling to ensure the consistency and maintainability of data structures.
Requirements
- Skill set: strong SQL experience, Spark, PySpark, and Python.
- Good to have: experience with Big Data distributed ecosystems such as Hadoop and Hive.
- Excellent knowledge of HQL/PrestoQL: optimisations, complex aggregations, and performance tuning.
- Experience building data processing frameworks and big data pipelines.
- Solid understanding of DWH architecture, ELT/ETL processes, and data structures.
- Basic understanding of Python for ETL work.
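To illustrate the kind of SQL-plus-Python ETL work the list above describes, here is a minimal sketch of an extract-transform-load step. It is purely illustrative: the table names, columns, and data are hypothetical, and a production pipeline at this scale would typically run on Spark/Hive or Presto rather than the in-memory SQLite used here for a self-contained example.

```python
import sqlite3

# Hypothetical example: aggregate raw event rows into a daily summary table.
# All names and values are illustrative, not taken from the posting.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Extract: a small raw events table stands in for a source system.
cur.execute("CREATE TABLE events (event_date TEXT, user_id INTEGER, amount REAL)")
cur.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [
        ("2024-01-01", 1, 10.0),
        ("2024-01-01", 2, 5.5),
        ("2024-01-02", 1, 7.0),
    ],
)

# Transform + load: a grouped aggregation written to a summary table --
# the kind of query that would be tuned in HQL/Presto at warehouse scale.
cur.execute("CREATE TABLE daily_summary (event_date TEXT, users INTEGER, total REAL)")
cur.execute(
    """
    INSERT INTO daily_summary
    SELECT event_date, COUNT(DISTINCT user_id), SUM(amount)
    FROM events
    GROUP BY event_date
    """
)

summary = cur.execute(
    "SELECT event_date, users, total FROM daily_summary ORDER BY event_date"
).fetchall()
print(summary)  # [('2024-01-01', 2, 15.5), ('2024-01-02', 1, 7.0)]
```

The same shape (extract from a source, aggregate, load into a modelled target table) carries over to PySpark, where the `GROUP BY` would become a DataFrame `groupBy`/`agg` over distributed data.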
Talk to us
Feel free to call, email, or hit us up on our social media accounts.
Email
info@antaltechjobs.in