Hyderabad, Telangana, India
Information Technology
Full-Time
MyCareernet
Overview
Company: A Large Global Organization
Key Skills: AWS Cloud, Big Data, Python, Data Engineer, Data Modelling, Design Patterns
Roles and Responsibilities:
- Apply in-depth knowledge of and hands-on experience with Python (PySpark) to real-time data streaming technologies such as Kafka, Spark, and Lambda.
- Build data processing pipelines on AWS using AWS MSK, EMR (Spark Streaming), DynamoDB, Lambda, Glue, and Athena.
- Manage device data ingestion and processing using AWS IoT Core, IoT rules, and EventBridge.
- Design, implement, and optimize Kafka and Spark-based NRT data processing pipelines.
- Develop reusable, cloud-native, scalable, and reliable frameworks and tools.
- Create and implement cost-effective solutions that meet functional and non-functional requirements such as availability, latency, and fault tolerance.
- Design scalable data pipelines for IoT telemetry and establish real-time vs batch processing architecture.
- Ensure data governance, security, and compliance while applying cost optimization strategies on AWS.
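To illustrate the kind of device-data ingestion work described above, here is a minimal sketch of an AWS Lambda handler triggered by an Amazon MSK (Kafka) event. MSK delivers records grouped by topic-partition with base64-encoded values; the JSON telemetry fields (`device_id`, `temp_c`) are assumptions for illustration, not part of the posting.

```python
import base64
import json

def handler(event, context=None):
    """Illustrative Lambda handler for an Amazon MSK trigger event.

    The event's "records" field maps "topic-partition" keys to lists of
    Kafka records whose "value" is base64-encoded. Each decoded value is
    assumed (for this sketch) to be a JSON telemetry message such as
    {"device_id": "...", "temp_c": ...}.
    """
    readings = []
    for records in event.get("records", {}).values():
        for record in records:
            payload = json.loads(base64.b64decode(record["value"]))
            # Hypothetical validation rule: drop readings with no device_id.
            if payload.get("device_id"):
                readings.append(payload)
    return {"processed": len(readings), "readings": readings}

# Example invocation with a hand-built MSK-style event:
msg = base64.b64encode(
    json.dumps({"device_id": "d1", "temp_c": 21.5}).encode()
).decode()
event = {"records": {"telemetry-0": [{"value": msg}]}}
result = handler(event)
```

In a real pipeline the validated readings would typically be written onward to DynamoDB or a stream, rather than returned.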
Skills Required:
- Strong expertise in AWS Cloud services (e.g., MSK, EMR, Glue, Lambda, DynamoDB)
- Proficiency in Python/PySpark for data engineering
- Experience in Big Data technologies and real-time data streaming (Kafka, Spark)
- Solid background in Data Modelling and IoT telemetry data processing
- Familiarity with Design Patterns and cloud-native architecture principles
- Knowledge of data governance, security, and cost optimization on AWS
Education: B.Tech in Software Technology