Pune, Maharashtra, India
Information Technology
Full-Time
DS Group
Overview
Job Description – Data Architect (6–10 years)
Role Summary
We are looking for an experienced Data Architect with 6–10 years of expertise in data engineering, data management, and architecture. The role defines the strategy and leads the design and implementation of enterprise-scale data platforms, ensuring data availability, governance, security, and scalability across the organization. The Data Architect will collaborate with technology and business stakeholders to align data initiatives with strategic goals.
Key Responsibilities
- Define the enterprise data architecture roadmap, covering data modeling, integration, quality, and governance.
- Architect and implement data platforms including data warehouses, lakes, and lakehouses (e.g., Snowflake, BigQuery, Redshift, and Databricks).
- Establish standards for data modeling, schema design, metadata management, and lineage tracking.
- Lead the design and development of data integration frameworks, covering ETL/ELT pipelines, APIs, streaming, and real-time data delivery (a minimal pipeline sketch follows this list).
- Ensure data governance, compliance, privacy, and security frameworks are embedded in all data platforms.
- Partner with data engineering, analytics, and product teams to build scalable data products and pipelines.
- Optimize enterprise-wide data storage, retrieval, and processing to balance performance and cost.
- Collaborate with AI/ML and business intelligence teams to enable advanced analytics and AI readiness.
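To make the ETL/ELT responsibility concrete, here is a minimal PySpark sketch of one incremental load step of the kind this role would design. The source database, table names, watermark value, and lake path are hypothetical placeholders, not details from this posting.

```python
# Minimal incremental ELT sketch, assuming a hypothetical "orders" source
# table and lake path; illustrative only, not DS Group's actual pipeline.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_incremental_load").getOrCreate()

# Hypothetical watermark: in practice this would be read from a metadata/state table.
last_run_ts = "2024-01-01 00:00:00"

# Pull only records updated since the last successful run.
incremental = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://source-db:5432/sales")  # hypothetical source
    .option("dbtable", "public.orders")
    .option("user", "etl_user")
    .option("password", "***")
    .load()
    .filter(F.col("updated_at") > F.lit(last_run_ts))
)

# Land the slice in the lake, partitioned by date for downstream modeling.
(incremental
    .withColumn("load_date", F.to_date("updated_at"))
    .write.mode("append")
    .partitionBy("load_date")
    .parquet("s3a://data-lake/raw/orders/"))  # hypothetical lake location
```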
Required Skills & Qualification
- s6–10 years of proven experience in data architecture, engineering, or solution design
- .Strong expertise in Data Lake creation (e.g., Databricks Delta Lake, Snowflake, and Azure Data Lake Storage etc)
- .Strong expertise in relational databases (Oracle, PostgreSQL, MySQL) and NoSQL technologies (MongoDB, Cassandra, DynamoDB)
- .Proficiency in SQL and programming (Python, Scala, or Java)
- .Deep understanding of data modeling techniques (dimensional, relational, document, graph)
- .Experience with big data frameworks (Spark, Hadoop, Hive) and cloud-native data platforms (AWS Redshift, GCP BigQuery, Azure Synapse)
- .Strong grounding in data governance, data quality, and metadata management
- .Familiarity with data orchestration tools (Airflow, Dagster, Luigi)
- .Cloud experience in AWS, GCP, or Azure for building large-scale distributed systems
- .Bachelor’s or Master’s degree in Computer Science, Engineering or related field
.
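For the orchestration requirement, here is a minimal Airflow 2.x DAG sketch showing the kind of scheduled extract-then-load dependency these tools express. The DAG id and task bodies are hypothetical placeholders.

```python
# Minimal Airflow DAG sketch: a daily two-step pipeline with an explicit
# extract -> load dependency. Names and task bodies are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull the daily slice from the source system")

def load():
    print("write the slice to the lake / warehouse")

with DAG(
    dag_id="daily_orders_elt",        # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+ parameter
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task         # load runs only after extract succeeds
```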
Preferred Qualifications
- Experience with real-time streaming frameworks (Kafka, Flink, Kinesis, Pub/Sub); a minimal consumer sketch follows this list.
- Knowledge of data security frameworks, compliance (GDPR, HIPAA, DPDPA), and role-based access control.
- Familiarity with containerization and orchestration (Docker, Kubernetes).
- Exposure to AI/ML pipelines and data readiness for MLOps.
- Contributions to data standards, open-source projects, or industry best practices.
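For the streaming qualification, here is a minimal Kafka consumer sketch using the kafka-python client. The topic name, broker address, and consumer group are hypothetical placeholders.

```python
# Minimal Kafka consumer sketch (kafka-python package); topic, broker,
# and group id are hypothetical, not from this posting.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders_events",                                   # hypothetical topic
    bootstrap_servers=["broker:9092"],                 # hypothetical broker
    group_id="lake_ingest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for event in consumer:
    # A real pipeline would land micro-batches in the lake;
    # here we just print each decoded event.
    print(event.topic, event.offset, event.value)
```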