Overview
Experience: 5+ Years
Location: Hyderabad / Hybrid
Employment Type: Full-time
About ServCrust
ServCrust is a fast-growing D2C SaaS platform reshaping the construction materials and logistics ecosystem using AI-led supply chain intelligence, geo-analytics, and credit-risk insights.
We process real-time order, logistics, and IoT data across multiple states to deliver insights on pricing, supplier performance, delivery optimization, and more.
Role Overview
We’re looking for an experienced Data Engineer to build and scale our end-to-end data pipelines that power dashboards, AI models, and decision systems across our microservices-driven architecture.
You’ll collaborate closely with Data Science, Product, and Engineering teams to ensure our logistics, geo-commerce, and operations data remains clean, reliable, and highly actionable.
What You’ll Do
- Build and maintain data pipelines for logistics, pricing, and operational intelligence using Kafka streaming and structured ETL workflows.
- Design and optimize data models supporting dashboards, forecasting engines, and AI/ML use cases.
- Integrate data from PostgreSQL, IoT devices, microservices, and external APIs into unified data warehouses.
- Implement data quality checks, schema evolution, and lineage tracking for reliable analytics.
- Work with backend teams to embed data services into Spring Boot microservices using AWS-native tools.
- Manage AWS data infrastructure: RDS, Glue, S3, Athena, Redshift, QuickSight.
- Contribute to real-time dashboards for delivery tracking, credit monitoring, and supplier insights.
- Ensure data security, governance, and observability aligned with ISO 27001 & SOC 2 practices.
- Support data architecture planning for scaling AI-enabled OMS and credit systems.
What You’ll Bring
- Bachelor’s/Master’s in CS, Data Engineering, or a related field.
- 5+ years of experience in data engineering & pipeline development.
- Strong expertise in SQL (PostgreSQL) and Python (Pandas / PySpark).
- Hands-on experience with Kafka, AWS (Glue, S3, RDS, Athena, Lambda), and ETL orchestration tools (Airflow / Step Functions).
- Proficiency in building data marts and OLAP models.
- Experience with API-based ingestion and microservice data flows.
- Understanding of data governance, RBAC, and encryption.
- Familiarity with Docker and GitHub CI/CD.
- Experience with geo-spatial data (H3, PostGIS, GeoPandas).
- Exposure to AI/ML data pipelines or analytics rule engines (Drools).
- Understanding of streaming analytics and lakehouse architectures.
- Knowledge of BI tools: QuickSight, Metabase, Power BI.
Who You Are
- Strong ownership and problem-solving mindset.
- Effective communication with cross-functional teams.
- Comfortable in a high-growth, fast-paced startup environment.
Why Join Us
- Work on real-world logistics, finance, and IoT data impacting thousands of businesses.
- Be part of a scalable, AI-driven SaaS ecosystem transforming the construction supply chain.
- Growth opportunities into Data Platform Lead roles.