Overview
We are looking for a Data Engineer to implement data pipelines and transformations. You will work closely with Senior Engineers to migrate logic, build data models, and ensure data availability in the new cloud environment.
This role requires a proactive communicator who can flag missing dependencies during daily standups and communicate effectively asynchronously with team members across time zones. The ideal candidate works well in a pair-programming environment and has strong time management skills to balance multiple migration tasks and meet sprint goals.
Required Credentials
3–5 Years of Experience
Required Qualifications
GCP Data Stack: Expertise in BigQuery, Cloud Storage, Cloud SQL, Cloud Composer, and Dataplex.
Languages: Proficiency in SQL (Intermediate/Advanced) and Python.
Tools: Experience with Git, GitHub, and Jira.
Frameworks: Familiarity with the Hadoop Ecosystem (Intermediate).
Useful Qualifications
Prior experience in pair-programming environments.
Experience working in an agile environment with distributed teams.
Contractor Scope of Work and Delivery Expectations: The following outlines the scope of work we anticipate the contractor supporting throughout the project timeline.
Scope And Solution Expectations
Backlog Execution: Implementation and execution of the data-focused backlog (User Stories).
Pipeline Implementation: Write and test code for assigned user stories, bug fixes, and features.
Legacy Migration: Assist in the migration of hundreds of legacy ETL jobs to the new platform.
Collaboration: Offer task-level guidance to less experienced team members.