Pune, Maharashtra, India
Information Technology
Full-Time
Finkraft
Overview
At Finkraft.ai, we’re not just building a company; we’re reshaping how enterprises manage travel spend and GST credit. Trusted by 100+ Fortune 500 companies, we’re scaling fast and looking for someone exceptional to join us at the very heart of our journey.
What you’ll do
- Design & build ETL/ELT for batch + (where needed) streaming using PySpark.
- Orchestrate with a scheduler; add retries/alerts and simple SLAs.
- Own data modeling: craft ERDs; choose star/snowflake/data-vault patterns; design for performance (partitioning, clustering, distribution, indexing).
- Be architecture-aware (warehouse, lakehouse, lambda/kappa) and pick the right fit.
- Work with formats/schemas: Parquet, Avro, JSON (schema evolution, compression).
- Build on AWS (must-have); exposure to Azure/GCP is a plus.
- Deliver to Snowflake / BigQuery / Redshift; tune for cost & speed.
- Add traceability: data-quality checks, lineage, freshness SLAs, alerting.
- Own pipelines end-to-end for features/products; document; mentor juniors.
Who you are
- 3–5 yrs data engineering with Python (you code).
- Strong PySpark; solid Kafka/Beam fundamentals are a plus.
- Production scheduling/orchestration experience; dbt familiarity welcome.
- Deep in dimensional modeling (star/snowflake), data vault basics, and normalization vs denormalization trade-offs.
- Performance-first mindset for large datasets; excellent SQL; pragmatic with MongoDB/NoSQL when needed.
Nice to have
- Azure Data Factory / GCP Dataflow; Kafka Connect; Delta/Iceberg; warehouse tuning stories.
To take the next step, please fill out this short form so we can get to know you and your work better:
👉 https://finkraft.typeform.com/to/F1e4Fv4D
Once submitted, our team will review your application and connect with you about next steps.
Looking forward to your submission!