Chennai, Tamil Nadu, India
Information Technology
Full-Time
MyCareernet
Overview
Company: IT Services Organization
Key Skills: AWS, Spark, Kafka, Python, Data Engineering
Roles and Responsibilities:
- Design and implement Big Data solutions using Apache Spark and AWS.
- Develop and maintain stream-processing systems using technologies such as Apache Storm or Spark Streaming (see the illustrative sketch after this list).
- Collaborate with cross-functional teams to integrate data from various sources, including RDBMS, ERP systems, and flat files.
- Optimize Spark jobs for performance and efficiency.
- Utilize messaging systems like Kafka for data processing.
- Implement ETL techniques and frameworks to manage data workflows.
- Conduct performance tuning and troubleshooting of data pipelines.
- Lead a team of data engineers, ensuring adherence to Agile methodologies.
- Stay updated with the latest trends in Big Data technologies and best practices.
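To give candidates a concrete picture of the stream-processing work described above, the sketch below shows a minimal Spark Structured Streaming job that consumes from Kafka. It is illustrative only and not part of the role definition: the broker address, topic name, message schema, and checkpoint path are placeholder assumptions, and a production pipeline would typically write to a durable sink rather than the console.

```python
# Minimal sketch: Spark Structured Streaming reading from Kafka.
# Assumes Spark 3.x with the spark-sql-kafka connector available;
# broker, topic, schema, and paths below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Schema assumed for the JSON messages on the topic (illustrative only).
order_schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("region", StringType()),
])

# Kafka delivers the payload as bytes, so cast the value column to string
# and parse it as JSON before working with typed columns.
orders = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "orders")                     # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), order_schema).alias("order"))
    .select("order.*")
)

# Running aggregation written to the console for demonstration purposes.
query = (
    orders.groupBy("region")
    .sum("amount")
    .writeStream
    .outputMode("complete")
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/orders")  # placeholder path
    .start()
)
query.awaitTermination()
```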
Skills Required:
- Expertise in Apache Spark for big data processing
- Proficiency in Kafka for real-time data streaming
- Strong experience with AWS cloud services
- Solid programming skills in Python
- Experience with ETL development and data integration (a minimal batch ETL sketch follows this list)
- Familiarity with Agile methodologies
- Good knowledge of distributed systems and performance tuning
- Team leadership and collaboration skills
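The ETL and performance-tuning skills above commonly translate into batch jobs along the lines of the sketch below. It is illustrative only; the S3 bucket paths and column names are assumed, and it presumes a Spark cluster with S3 access configured (for example, IAM roles on Amazon EMR).

```python
# Minimal batch ETL sketch with PySpark; all paths and column names are
# hypothetical placeholders, not details from the posting.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("daily-orders-etl").getOrCreate()

# Extract: read raw CSV exports from a source bucket.
raw = (
    spark.read
    .option("header", "true")
    .csv("s3://example-raw-bucket/orders/")  # placeholder path
)

# Transform: enforce types, drop incomplete records, derive a partition column.
clean = (
    raw.withColumn("amount", col("amount").cast("double"))
    .withColumn("order_date", to_date(col("created_at")))
    .dropna(subset=["order_id", "amount"])
)

# Load: write partitioned Parquet to the curated zone; repartitioning by the
# partition column reduces small files, a common Spark performance concern.
(
    clean.repartition("order_date")
    .write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")  # placeholder path
)
```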
Education: Bachelor's degree in Computer Science, Engineering, or a related field.