Overview
About Trianz
Trianz is a global technology services and platforms firm, accelerating digital transformations for Fortune 100 and emerging enterprises worldwide. We specialize in Data & Analytics, Digital Experiences, Cloud Infrastructure, and Cybersecurity, helping organizations navigate rapid market shifts, talent gaps, and economic uncertainties.
Our proprietary platforms — Concierto, Avrio, and Pulse — power innovation-led transformations in collaboration with leading cloud providers. With a strong focus on client success and measurable value delivery, Trianz consistently outpaces industry growth and remains a trusted transformation partner.
About the Role
As a Data Architect, you are core to the D&AI (Data & AI) Practice’s success. Data is foundational to everything we do, and you are accountable for defining and delivering best-in-class Databricks data management solutions across all major cloud platforms. This is a senior, high-visibility role reporting to D&AI Practice Leadership.
Responsibilities
- Architectural Design: Architect secure, scalable, highly performant data engineering and management solutions, including data warehouses, data lakes, ELT/ETL, and real-time data engineering/pipeline solutions. Support the Principal Data Architect in defining and maintaining the Practice's reference data engineering and data management architectures.
- Databricks Implementation: Design and manage scalable end-to-end data solutions leveraging native Databricks capabilities, including Data Engineering, Delta Live Tables, Apache Spark, Structured Streaming, Notebooks, Git CI/CD, supported file types, and Libraries.
- Data Warehousing: Apply Databricks SQL and best-practice data modeling; use Delta Lake and Delta Live Tables for incremental and streaming workloads, table history, file management, and API utilization; use Unity Catalog for access control, lineage, Delta Sharing, and identity management.
- Hyperscaler Design: Competently leverage data-related cloud platform capabilities (AWS, Azure, or GCP) to architect and develop end-to-end data engineering and data management solutions.
- Client Engagement: Collaborate and partner regularly with clients to understand their challenges and needs, then translate requirements into data solutions that drive customer value. Support proposal development.
- Data Modeling: Create and maintain conceptual, logical, and physical data models that support both transactional and analytical needs. Ensure data models are optimized for performance and scalability.
- Creativity: Be an out-of-the-box thinker, passionate about applying your skills to new and existing solutions alike, while always demonstrating a customer-first mentality.
Required Skills
- 12+ years hands-on data solution architecture and implementation experience on modern cloud platforms (AWS preferred) including microservice and event-driven architectures.
- Databricks Professional Data Engineer certification.
- Databricks Platform Architect accreditation (AWS or Azure).
- Databricks Platform Administrator accreditation.
- Hands-on experience with Databricks Lakehouse implementations, Delta Live Tables, and building scalable data pipelines, including real-time pipelines (Kafka, Kinesis) in addition to Databricks Structured Streaming.
- Demonstrated, holistic experience and competence with Databricks concepts including Delta tables, Apache Spark, Structured Streaming, Notebooks, Git CI/CD, Libraries, and Databricks SQL.
- An architecture certification on AWS, Azure, or GCP.
- Practical experience with end-to-end data engineering and data management supporting functions, including data modeling (conceptual, logical, and physical), BI and analytics, data governance, data quality, data security/privacy/compliance, IAM, and performance optimization.
- Advanced SQL and data profiling skills; proficiency in Python or Scala.
- Strong communication skills with the ability to convey technical concepts to non-technical users.
- Strong self-management skills, with the ability to multitask and independently manage goals and activities.
Preferred Skills
- Experience with ML orchestration tools such as MLflow and Kubeflow.
- Containerization proficiency (Docker, Kubernetes) to deploy data pipelines and models.