Overview
About Snapmint:
India’s booming consumer market has over 300 million credit-eligible consumers, yet only 35 million actively use credit cards. At Snapmint, we are building a better alternative to credit cards that lets consumers buy now and pay later for a wide variety of products, be it shoes, clothes, fashion accessories or mobile phones. We firmly believe that an enduring financial services business must be built on the bedrock of providing honest, transparent and fair terms.
Founded in 2017, we are today the leading online zero-cost EMI provider in India. We have served over 10M consumers across 2,200 cities and are doubling year on year. Our founders are serial entrepreneurs and alumni of IIT Bombay and ISB with over two decades of experience at leading organizations like Swiggy, Oyo, Maruti Suzuki and ZS Associates, and have successfully scaled and exited businesses in patent analytics, ad-tech and bank-tech software services.
Key Responsibilities:
- Design, build, and manage real-time data pipelines using tools like Apache Kafka, Apache Flink, and Spark Streaming.
- Optimize data pipelines for performance, scalability, and fault tolerance.
- Perform real-time transformations, aggregations, and joins on streaming data (see the sketch after this list).
- Collaborate with data scientists to onboard new features and ensure they're discoverable, documented, and versioned.
- Optimize feature retrieval latency for real-time inference use cases.
- Ensure strong data governance: lineage, auditing, schema evolution, and quality checks using tools such as dbt and OpenLineage.
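
To illustrate the kind of streaming work described above, here is a minimal PySpark Structured Streaming sketch that reads order events from Kafka and computes a windowed aggregation with a watermark for late-arriving data. The broker address, topic name, and schema are hypothetical placeholders, not a description of Snapmint's actual pipelines.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json, window, sum as sum_
    from pyspark.sql.types import StructType, StringType, DoubleType, TimestampType

    spark = SparkSession.builder.appName("order-events-demo").getOrCreate()

    # Hypothetical event schema for BNPL order events.
    schema = (StructType()
              .add("order_id", StringType())
              .add("amount", DoubleType())
              .add("event_time", TimestampType()))

    # Read raw events from a Kafka topic (broker and topic are placeholders).
    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "order-events")
              .load()
              .select(from_json(col("value").cast("string"), schema).alias("e"))
              .select("e.*"))

    # Windowed aggregation: order value per 5-minute window,
    # tolerating up to 10 minutes of late data via a watermark.
    volume = (events
              .withWatermark("event_time", "10 minutes")
              .groupBy(window(col("event_time"), "5 minutes"))
              .agg(sum_("amount").alias("order_value")))

    # Console sink for demonstration; a real pipeline would write to a
    # lakehouse table or serving store instead.
    query = (volume.writeStream
             .outputMode("update")
             .format("console")
             .start())
    query.awaitTermination()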
Requirements:
- Bachelor's degree in Engineering.
- Strong programming skills in Python, Java, or Scala, and proficiency in SQL.
- Solid understanding of data modeling, data warehousing concepts, and the differences between OLTP and OLAP workloads.
- Experience ingesting and processing various data formats, including semi-structured (JSON, Avro), unstructured, and document-based data from sources like NoSQL databases (e.g., MongoDB), APIs, and event tracking platforms (e.g., PostHog).
- Hands-on experience with Change Data Capture (CDC) tools such as Debezium or AWS DMS for replicating data from transactional databases.
- Proven experience designing and building scalable data lakes or lakehouse architectures on platforms like Databricks.
- Hands-on experience with modern open table formats such as Delta Lake, Apache Iceberg, or Apache Hudi.
- Hands-on experience with real-time streaming technologies like Kafka, Flink, and Spark Streaming.
- Proficiency with data pipeline orchestration tools like Apache Airflow (see the sketch after this list).
- Exposure to event-driven microservices architecture.
- 2+ years of experience in an Indian startup/tech company.
- Strong written and verbal communication skills.
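
As a reference point for the orchestration requirement above, here is a minimal Airflow DAG sketch in Python (Airflow 2.4+ style). The DAG id, task names, and file paths are illustrative assumptions, not an actual Snapmint workflow.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_warehouse_refresh",      # hypothetical DAG id
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                     # "schedule" is the Airflow 2.4+ name
        catchup=False,
    ) as dag:
        extract = BashOperator(
            task_id="extract_orders",
            bash_command="python /opt/jobs/extract_orders.py",  # hypothetical job
        )
        transform = BashOperator(
            task_id="run_dbt_models",
            bash_command="dbt run --project-dir /opt/dbt",  # hypothetical dbt project
        )
        # Run extraction before dbt transformations.
        extract >> transform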
Good to have:
- Familiarity with cloud data warehouse systems like BigQuery or Snowflake.
- Experience with real-time analytical databases like ClickHouse.
- Familiarity with designing, building, and maintaining feature store infrastructure to support machine learning use cases (see the sketch below).
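
For the feature store item above, here is a minimal sketch of online feature retrieval using Feast, one common open-source feature store; the repo path, feature view, feature names, and entity are hypothetical.

    from feast import FeatureStore

    # Point at a Feast feature repository (path is a placeholder).
    store = FeatureStore(repo_path=".")

    # Fetch precomputed features from the online store for one user --
    # the low-latency path a real-time inference service would use.
    features = store.get_online_features(
        features=[
            "user_stats:avg_order_value_30d",
            "user_stats:emi_repayment_rate",
        ],
        entity_rows=[{"user_id": 1001}],
    ).to_dict()
    print(features)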
Location: Bangalore (Marathahalli)
Working days: 5 days a week