Overview
About Snapmint:
India’s booming consumer market has over 300 million credit-eligible consumers, yet only 35 million actively use credit cards. At Snapmint, we are building a better alternative to credit cards that lets consumers buy now and pay later for a wide variety of products, be it shoes, clothes, fashion accessories or mobile phones. We firmly believe that an enduring financial services business must be built on a bedrock of honest, transparent and fair terms.
Founded in 2017, today we are the leading online zero-cost EMI provider in India. We have served over 10M consumers across 2,200 cities and are doubling year on year. Our founders are serial entrepreneurs and alumni of IIT Bombay and ISB, with over two decades of experience across leading organizations like Swiggy, Oyo, Maruti Suzuki and ZS Associates, and have previously scaled and exited businesses in patent analytics, ad-tech and bank-tech software services.
Key Responsibilities:
- Design, build, and manage real-time data pipelines using tools such as Apache Kafka, Apache Flink, and Apache Spark Streaming.
- Optimize data pipelines for performance, scalability, and fault tolerance.
- Perform real-time transformations, aggregations, and joins on streaming data.
- Collaborate with data scientists to onboard new features and ensure they’re discoverable, documented, and versioned.
- Optimize feature retrieval latency for real-time inference use cases.
- Ensure strong data governance: lineage, auditing, schema evolution, and quality checks using tools such as dbt and OpenLineage.
Requirements:
- Bachelor’s degree in Engineering.
- Strong programming skills in Python, Java, or Scala, and proficiency in SQL.
- Hands-on experience with Kafka, Flink, and Spark Streaming.
- Proficiency with data pipeline orchestration tools.
- Exposure to event-driven microservices architecture.
- 2+ years of experience in an Indian startup/tech company.
- Strong written and verbal communication skills.
- Good to have - familiarity with data warehouse/lake systems like BigQuery, Snowflake or Delta Lake.
- Good to have - familiarity with designing, building, and maintaining feature store infrastructure to support machine learning use cases.
Location: Bangalore (Marathahalli)
Working days: 5 days/week