
Overview
We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible.
As a Lead Software Engineer at JPMorgan Chase within Corporate Technology, you play a vital role in an agile team dedicated to enhancing, building, and delivering reliable, market-leading technology products in a secure, stable, and scalable manner. As a key technical contributor, you are tasked with implementing essential technology solutions across diverse technical domains, supporting various business functions to achieve the firm's strategic goals.
Job responsibilities
- Develop designs at the appropriate level of detail and secure consensus from peers where necessary.
- Collaborate with software engineers and cross-functional teams to design and implement deployment strategies using AWS Cloud and Databricks pipelines.
- Work with software engineers and teams to design, develop, test, and implement solutions within applications.
- Engage with technical experts, key stakeholders, and team members to resolve complex problems effectively.
- Understand leadership objectives and proactively address issues before they impact customers.
- Design, develop, and maintain robust data pipelines to ingest, process, and store large volumes of data from various sources.
- Implement ETL (Extract, Transform, Load) processes to ensure data quality and integrity using tools like Apache Spark and PySpark.
- Monitor and optimize the performance of data systems and pipelines.
- Implement best practices for data storage, retrieval, and processing.
- Maintain comprehensive documentation of data systems, processes, and workflows.
- Ensure compliance with data governance and security policies.
Required qualifications, capabilities, and skills
- Formal training or certification in software engineering concepts and 5+ years of applied experience.
- Formal training or certification in AWS/Databricks with 10+ years of applied experience.
- Expertise in Python and related frameworks such as PySpark.
- 10+ years of professional experience in designing and implementing data pipelines in a cloud environment.
- Proficiency in design, architecture, and development using technologies such as AWS services, Databricks, Spark, and Snowflake.
- Experience with continuous integration and continuous delivery (CI/CD) tools such as Jenkins or GitLab, and with infrastructure-as-code tools such as Terraform.
- Familiarity with containerization and container orchestration technologies such as Docker, ECS, and Kubernetes.
- Ability to troubleshoot common issues in big data and cloud technologies.
- Practical cloud-native experience.
Preferred qualifications, capabilities, and skills
- 5+ years of experience leading and developing data solutions in the AWS cloud.
- 10+ years of experience in building, implementing, and managing data pipelines using Databricks on Spark or similar cloud technologies.