Gurugram, Haryana, India
Information Technology
Full-Time
myGwork LGBTQ Business Community
Overview
This job is with Amazon, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.
Description
Amazon's Profit Intelligence team is seeking a talented Big Data Engineer. We develop software solutions that are revolutionizing Amazon business intelligence through advanced algorithms running on big data technologies. The ideal candidate thrives in a fast-paced environment and relishes working with petabytes of extremely complex and dynamic data. In this role you will be part of a team of high-caliber data and software engineers building data pipelines using big data technologies such as Apache Spark, Hive/Hadoop, and distributed query engines. You should be passionate about working with big data and have the aptitude to incorporate new technologies and evaluate them critically. You must possess excellent business and communication skills and be able to work with business owners to analyze requirements and build solutions. You are a self-starter with a proven track record of dealing with ambiguity and working in a fast-paced, highly dynamic environment. Working experience with a programming language such as Java, C#, C++, or Scala is a big plus.
Key job responsibilities
- Interface with PMs, business customers, and software developers to understand requirements and implement solutions
- Collaborate with both Retail Finance and central FP&A teams to understand the interdependence and deliverables
- Design, develop, and operate highly-scalable, high-performance, low-cost, and accurate data pipelines in distributed data processing platforms with AWS technologies
- Recognize and adopt best practices in data processing, reporting, and analysis: data integrity, test design, analysis, validation, and documentation
- Keep up to date with big data technologies, evaluate and make decisions around the use of new or existing software products to design the data architecture
Qualifications
- 5+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with SQL
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- Experience mentoring team members on best practices
- Bachelor's degree in computer science, engineering, analytics, mathematics, statistics, IT or equivalent
- 8+ years of industry experience in software development, data engineering, business intelligence, data science, or related field with a track record of manipulating, processing, and extracting value from large datasets
- 8+ years of experience in designing and developing data processing pipelines using big data technologies (Hadoop, Hive, HBase, Spark, EMR, etc.)
- 8+ years of experience in designing and developing analytical systems
- Experience building large-scale applications and services with big data technologies
- Experience providing technical leadership and mentoring other engineers for best practices on data engineering
- Expertise in SQL, database and storage internals, SQL tuning, and ETL development
- Strong organizational and multitasking skills with the ability to balance competing priorities
- Working knowledge of scripting languages such as Python or Perl
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience operating large data warehouses
- Master's degree
- Experience communicating with users, other technical teams, and management to collect requirements, describe data modeling decisions and data engineering strategy
- Experience working with others to improve their skills and make them more effective software engineers
- Exposure to big data technologies and techniques
- Working knowledge of an object-oriented language
- Experience with Amazon Redshift or other distributed computing technologies
- Experience building complex software systems that have been successfully delivered to customers