Hyderabad, Telangana, India
Information Technology
Full-Time
Whitetable.ai
Overview
- About the Role:
This is a critical role requiring technical expertise in building scalable data pipelines, ensuring data quality, and supporting data analytics and reporting infrastructure for business growth.
- Key Responsibilities:
- Design, develop, and maintain efficient ETL processes for handling datasets of varying scale.
- Implement and optimize data transformation and validation processes to ensure data accuracy and consistency.
- Collaborate with cross-functional teams to gather data requirements and translate business logic into ETL workflows.
- Architect, build, and maintain scalable and high-performance data pipelines to enable seamless data flow.
- Evaluate and implement modern technologies to enhance the efficiency and reliability of data pipelines.
- Build pipelines for extracting data via web scraping to source sector-specific datasets on an ad hoc basis.
- Design and implement data models to support analytics and reporting needs across teams.
- Optimize database structures to enhance performance and scalability.
- Develop and implement data quality checks and governance processes to ensure data integrity.
- Collaborate with stakeholders to define and enforce data quality standards across the organization.
- Maintain detailed documentation of ETL processes, data models, and other key workflows.
- Effectively communicate complex technical concepts to non-technical stakeholders and business users.
- Work closely with the Quant team and developers to design and optimize data pipelines.
- Collaborate with external stakeholders to understand business requirements and translate them into technical solutions.
- Essential Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- At least 2 years of proven experience as a Data Engineer, with expertise in ETL techniques.
- 3–6 years of strong programming experience in languages such as Python, Java, or Scala.
- Hands-on experience in web scraping to extract and transform data from publicly available web sources.
- Proficiency with cloud-based data platforms such as AWS, Azure, or GCP.
- Strong knowledge of SQL and experience with relational and non-relational databases.
- Deep understanding of data warehousing concepts and architectures.
- Familiarity with big data technologies like Hadoop, Spark, and Kafka.
- Experience with data modeling tools and techniques.
- Excellent problem-solving, analytical, and communication skills.
- Preferred Qualifications:
- Master's degree in Computer Science or Data Science.
- Knowledge of data streaming and real-time processing frameworks.
- Familiarity with data governance and security best practices.