Bangalore, Karnataka, India
Information Technology
Full-Time
MyCareernet
Overview
Company: A Large Global Organization
Key Skills: PySpark, AWS, Python, CI/CD, SQL
Roles and Responsibilities:
- Assembling large, complex sets of data that meet non-functional and functional business requirements.
- Identifying, designing, and implementing internal process improvements, including redesigning infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
- Building the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using AWS and SQL technologies.
- Building analytical tools to utilize the data pipeline, providing actionable insight into key business performance metrics including operational efficiency and customer acquisition.
- Working with stakeholders, including the Executive, Product, Data, and Design teams, to support their data infrastructure needs and assist with data-related technical issues.
Skills Required:
- Must have 3 or more years of experience in Data Engineering and ETL development.
- ETL/DW concepts - AWS (S3, Glue, Lambda, Athena, Step Functions).
- Build data pipelines using Python, PySpark, and Spark SQL.
- Must have a good understanding of MongoDB, DynamoDB, and S3.
- Deliver business solutions on the Company's analytics platform through end-to-end implementation that includes data security, governance, cataloging, preparation, automated testing, and data quality metrics.
- Automate, optimize, migrate, and enhance existing solutions.
- Adopt AWS data lake and related data services to implement end-to-end solutions.
- Perform data modeling and data analysis, and provide insights using various tools.
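The pipeline work described above follows the extract-transform-load pattern. As an illustration only (the role would use PySpark and AWS services, not the standard library), here is a minimal sketch of that pattern; the CSV payload, table name, and column names are all hypothetical stand-ins for an S3 source and an Athena-queryable store:

```python
import csv
import io
import sqlite3

# Extract: parse raw CSV input (an in-memory string standing in for
# an S3 object; the data and columns are hypothetical).
raw = "customer_id,amount\n1,10.5\n2,3.25\n1,4.0\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: cast types and aggregate spend per customer, mirroring
# the kind of metric a Glue/PySpark job would compute.
totals = {}
for r in rows:
    cid = int(r["customer_id"])
    totals[cid] = totals.get(cid, 0.0) + float(r["amount"])

# Load: persist into a queryable store (SQLite standing in for the
# analytics platform's storage layer).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spend (customer_id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO spend VALUES (?, ?)", totals.items())

print(dict(conn.execute("SELECT customer_id, total FROM spend")))
```

In a production PySpark job the transform step would be a grouped aggregation on a DataFrame rather than a Python loop, but the extract/transform/load stages map one-to-one.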
Education: Bachelor's Degree in a related field