Sr. Data Engineer
Overview
Crocs is seeking an experienced Sr. Data Engineer to design, implement, and maintain scalable data pipelines, decoupled data infrastructure, dynamic transformations, and a comprehensive orchestration layer using a combination of Snowflake, PySpark, Airflow, Azure Data Lake, Azure Functions, GitHub, and other tools as needed.
In this role, you will solve unique and complex problems at a rapid pace, using the latest technologies to create highly scalable solutions. As part of the Enterprise Data Platform team, you will help advance the adoption of data-driven insights and advanced AI analytics across multiple business domains within the Crocs enterprise.
What You'll Do
- Data Modeling – Design, implement, and maintain scalable data models that support the Enterprise Data Warehouse (EDW) and analytical workloads, following best practices and the constraints of the respective technologies.
- ETL/ELT – Design and implement efficient, scalable, and easy-to-manage data movement processes supporting both batch and near-real-time data streams.
- CI/CD – automate code integration, testing, and deployment using Git to ensure fast, reliable, and consistent delivery of data pipelines and ETL code.
- Engineering Best Practices – Adhere to engineering standard methodologies, including test-driven development, agile management, and continuous integration pipelines.
- Documentation – Create and maintain accurate and complete documentation of the pipelines developed.
- Interest in Learning – Stay current with developments in the BI and analytics space and demonstrate interest in data science and machine learning.
- Mentoring – Provide coaching and support to junior data engineers.
- Identify and resolve issues related to data processing, data integrity, or performance in a timely manner.
- Implement automation wherever possible to streamline and optimize data engineering workflows.
What You'll Bring to the Table
- Bachelor’s degree in computer science, information technology, engineering, mathematics, or a related technical field, or equivalent experience.
- 6+ years in Data Engineering roles.
- 3+ years of direct development experience in Snowflake; Snowflake certifications preferred.
- Strong proficiency in Python, PySpark, and Apache Airflow.
- Experience with Git version control, working with/in GitHub and GitHub Actions.
- Experience designing data models following Kimball dimensional modeling best practices.
- Experience with modern ETL/ELT tools, such as Databricks, and prior experience working in a cloud platform (Azure preferred).
- Experience with business intelligence tools, ideally Power BI, and working with Azure Data Factory preferred.
- Experience in distributed computing, using Spark and Kafka for data streaming, preferred.
- Experience working with SAP and Salesforce preferred.
The Company is an Equal Opportunity Employer committed to a diverse and inclusive work environment.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or disability, or any other protected classification.
Job Category: Corporate