Overview
Title: Staff Data Software Engineer
Reporting to: Director, Data & AI
Location: Bengaluru (Bangalore)
Opportunity
Get Well is seeking a skilled and motivated Staff Data Software Engineer to help build and optimize our cloud-native data platform supporting AI, analytics, and clinical applications. You will play a key role in developing secure and scalable data pipelines and collaborating with cross-functional teams to deliver high-quality data solutions in a regulated healthcare environment.
This is a hands-on engineering role ideal for someone who enjoys building data infrastructure, working with modern cloud data platforms, and contributing to the architecture and performance of large-scale systems.
Responsibilities
Data Engineering & Platform Development
- Build and maintain reliable, scalable data pipelines for batch and streaming data.
- Work with technologies like Apache Spark, Databricks, Snowflake, Airflow, and dbt to ingest, transform, and manage data across cloud environments.
- Contribute to data modeling, quality checks, and data optimization initiatives.
- Implement CI/CD pipelines, monitoring, and observability tools to ensure platform reliability.
Data Quality, Security & Compliance
- Support data governance policies in collaboration with compliance and security teams.
- Implement access control, encryption, and audit logging aligned with HIPAA, GDPR, and internal data policies.
- Contribute to metadata management, data cataloging, and lineage tracking practices.
Collaboration
- Work closely with data scientists, AI engineers, product managers, and clinical informatics teams to understand data requirements and deliver solutions.
- Participate in design and code reviews; contribute to team standards and best practices.
- Help integrate healthcare data sources (FHIR, HL7, C-CDA) into the platform with guidance from senior team members.
Learning & Innovation
- Stay current on data engineering tools and trends in cloud-native architectures.
- Prototype new tools and frameworks (e.g., Delta Lake, Apache Iceberg) under guidance to improve platform capabilities.
Qualifications
Education & Experience
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 8-12 years of experience in software or data engineering roles, with at least 3 years working on large-scale data systems.
Technical Skills
- Proficiency in Python, SQL, and Spark or equivalent distributed processing frameworks.
- Experience with cloud platforms (AWS, Azure, or GCP) and cloud data platforms such as Databricks or Snowflake.
- Solid understanding of data modeling, performance tuning, and cost optimization for cloud-native data pipelines.
- Familiarity with healthcare data formats (FHIR, HL7, etc.) is a plus but not mandatory.
Professional Attributes
- Strong problem-solving and debugging skills.
- Effective communication skills for collaborating across engineering and non-technical teams.
- Self-starter who takes ownership and delivers reliable, well-tested solutions.