Overview
Technical Lead - Data

About BluePi:
BluePi Consulting partners with organizations to help them achieve higher levels of maturity. Founded in 2012, it has today grown to serve several established organizations across India and Australia. Self-funded, the organization has sales offices in Mumbai and Sydney, besides its headquarters in Gurgaon. For over a decade, BluePi has helped organizations transform their businesses by providing data-driven business insights. It specializes in custom data, analytics, and AI/ML solutions that help drive business outcomes for organizations. BluePi works with technology partners Amazon Web Services (AWS), Snowflake, Google Cloud Platform (GCP), and Databricks.

In this role, you'll get to...
- Partner across multiple teams to identify business problems, define methodology, and implement solutions together with a team
- Implement technical roadmaps and architecture vision for data projects that align with business goals
- Lead projects and manage stakeholder expectations across multiple levels of the organization
- Make critical technical decisions with confidence, including technology selection, design patterns, and implementation approaches
- Proactively identify and manage technical debt, balancing short-term deliverables with long-term sustainability

Requirements

On day one we'll expect you to...
- Own your modules and take complete ownership of the project
- Understand the scope, design, and business objective of the project and articulate it in the form of a design document
- Bring strong experience with Google Cloud Platform data services, including BigQuery, Dataflow, Dataproc, Cloud Composer, Vertex AI Studio, and GenAI (Gemini, Imagen, Veo)
- Have experience implementing data governance on GCP
- Be familiar with integrating GCP services with other platforms such as Snowflake; hands-on Snowflake project experience is a plus
- Be an experienced coder in Python, SQL, ETL, and orchestration tools
- Have experience with containerized solutions using Google Kubernetes Engine
- Communicate well with internal teams and customers
- Bring expertise in PySpark (both batch and real-time), Kafka, SQL, and data-querying tools
- Work within a team, continuously monitoring progress, contributing hands-on as an individual contributor while helping the team deliver their work as you deliver yours
- Work with large volumes of data in distributed environments, keeping parallelism and concurrency in mind to ensure performant and resilient system operations
- Optimize the deployment architecture to reduce job run-times and resource utilization
- Develop and optimize data warehouses given the schema design
Job Types: Full-time, Permanent
Pay: ₹50,000.00 - ₹133,000.00 per month
Benefits:
- Health insurance
- Work from home
Work Location: Remote