Hyderabad, Telangana, India
Information Technology
Full-Time
MyCareernet
Overview
Company: IT Services Organization
Key Skills: PySpark, GCP, Apache Spark, Python, Data Engineering
Roles and Responsibilities:
- Develop and maintain scalable data processing systems using Apache Spark and Python.
- Design and implement Big Data solutions that integrate data from various sources including RDBMS, NoSQL databases, and cloud services.
- Lead a team of data engineers to ensure efficient project execution and adherence to best practices.
- Optimize Spark jobs for performance and scalability.
- Collaborate with cross-functional teams to gather requirements and deliver data solutions that meet business needs.
- Implement ETL processes and frameworks to facilitate data integration.
- Utilize cloud data services such as GCP for data storage and processing.
- Apply Agile methodologies to manage project timelines and deliverables.
- Stay updated with the latest trends and technologies in Big Data and cloud computing.
Skills Required:
- Proficiency in PySpark and Apache Spark
- Strong knowledge of Python for data engineering tasks
- Hands-on experience with Google Cloud Platform (GCP)
- Experience designing and optimizing Big Data pipelines
- Leadership in data engineering team management
- Understanding of ETL frameworks and distributed computing
- Familiarity with cloud-based data services and Agile delivery
Education: Bachelor's degree in Computer Science, Information Technology, or a related field.