Hyderabad, Telangana, India
Information Technology
Full-Time
Technogen India Pvt. Ltd.
Overview
What Your Impact Will Be
- Lead the design and development of scalable, secure, high-performing data integration pipelines that ingest structured and semi-structured data from enterprise systems (e.g., ERP, CRM, E-commerce, Order Management) into a centralized cloud data warehouse built on Google BigQuery.
- Build analytics-ready pipelines that transform raw data into trusted, curated datasets for reporting, dashboards, and advanced analytics.
- Implement transformation logic using DBT to create modular, maintainable, and reusable data models that evolve with business needs.
- Apply BigQuery best practices, including partitioning, clustering, and query optimization, to ensure high performance and scalability (see the partitioning sketch after this list).
- Automate and monitor complex data workflows using Airflow/Cloud Composer, ensuring dependable pipeline orchestration and job execution (see the DAG sketch after this list).
- Develop efficient, reusable Python and SQL code for data ingestion, transformation, validation, and performance tuning across the pipeline lifecycle.
- Establish robust data quality checks and testing strategies to validate both technical accuracy and alignment with business logic.
- Partner with architects and Technical leads to establish best practices, scalable frameworks, and reference implementations across projects.
- Collaborate with cross-functional teams, including data analysts, BI developers, and product owners, to understand integration needs and deliver impactful, business-aligned data solutions.
- Leverage modern ETL platforms such as Ascend.io, Databricks, Dataflow, or Fivetran to accelerate development and improve observability and orchestration.
- Contribute to technical documentation, CI/CD workflows, and monitoring processes to drive transparency, reliability, and continuous improvement across the data engineering ecosystem.
- Mentor junior engineers, conduct peer code reviews, and lead technical discussions.
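To make the partitioning and clustering point concrete, here is a minimal sketch using the google-cloud-bigquery Python client. The dataset, table, and column names (analytics.fct_orders, order_ts, customer_id) are illustrative assumptions, not part of the role description.

```python
# A minimal sketch of BigQuery partitioning and clustering; names are assumed.
from google.cloud import bigquery

client = bigquery.Client()  # uses Application Default Credentials

# Partition the curated orders table by day and cluster by the column most
# commonly filtered in dashboards, so queries scan only the data they need.
ddl = """
CREATE TABLE IF NOT EXISTS analytics.fct_orders (
  order_id STRING,
  customer_id STRING,
  order_ts TIMESTAMP,
  amount NUMERIC
)
PARTITION BY DATE(order_ts)
CLUSTER BY customer_id
"""
client.query(ddl).result()

# A date predicate on the partitioning column prunes all other partitions
# instead of scanning the whole table.
query = """
SELECT customer_id, SUM(amount) AS total
FROM analytics.fct_orders
WHERE DATE(order_ts) = @day
GROUP BY customer_id
"""
job = client.query(
    query,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[bigquery.ScalarQueryParameter("day", "DATE", "2024-06-01")]
    ),
)
for row in job.result():
    print(row.customer_id, row.total)
```

Clustering on customer_id co-locates rows for the same customer, so the aggregation above reads less data within each pruned partition.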
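And a minimal sketch of the orchestration pattern: a daily Cloud Composer/Airflow DAG that loads an extract from Cloud Storage into a BigQuery staging table, then rebuilds the dependent dbt models. The bucket, dataset, and DAG names are assumed for illustration.

```python
# A sketch of a daily ingest-then-transform DAG; all resource names are assumed.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

with DAG(
    dag_id="orders_daily_ingest",      # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    # Load the day's ERP extract from Cloud Storage into a BigQuery staging table.
    load_orders = GCSToBigQueryOperator(
        task_id="load_orders_to_staging",
        bucket="erp-exports",          # assumed bucket name
        source_objects=["orders/{{ ds }}/*.json"],
        destination_project_dataset_table="analytics.staging_orders",
        source_format="NEWLINE_DELIMITED_JSON",
        write_disposition="WRITE_TRUNCATE",
    )

    # Rebuild the curated dbt models downstream of the staging table.
    run_dbt = BashOperator(
        task_id="run_dbt_models",
        bash_command="dbt build --select staging_orders+",
    )

    load_orders >> run_dbt
```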
What We're Looking For
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related technical field.
- At least 4 years of hands-on data engineering experience, with strong expertise in data warehousing, pipeline development, and analytics on cloud platforms, including experience in:
- Google BigQuery for large-scale data warehousing and analytics.
- Python for data processing, orchestration, and scripting.
- SQL for data wrangling, transformation, and query optimization.
- DBT for developing modular and maintainable data transformation layers.
- Airflow / Cloud Composer for workflow orchestration and scheduling.
- Proven experience building enterprise-grade ETL/ELT pipelines and scalable data architectures.
- Strong understanding of data quality frameworks, validation techniques, and governance processes (see the data quality sketch after this list).
- Proficiency in Agile methodologies (Scrum/Kanban) and managing IT backlogs in a collaborative, iterative environment.
Preferred experience with:
- Tools like Ascend.io, Databricks, Fivetran, or Dataflow.
- Data cataloging/governance tools (e.g., Collibra).
- CI/CD tools, Git workflows, and infrastructure automation.
- Real-time/event-driven data processing using Pub/Sub, Kafka, or similar platforms (see the streaming sketch after this list).
- Strategic problem-solving skills and ability to architect innovative solutions.
- Ability to adapt quickly to new technologies and lead adoption across teams.
- Excellent communication skills and ability to influence cross-functional teams.
- Be a go-to expert for data technologies and solutions.
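To illustrate the data quality expectation, here is a minimal sketch of post-load validation against BigQuery. The checks, thresholds, and table name (analytics.fct_orders) are illustrative assumptions.

```python
# A sketch of row-level data quality checks run after each load; names assumed.
from google.cloud import bigquery

client = bigquery.Client()

CHECKS = {
    # check name -> SQL returning the count of violating rows
    "null_order_ids": "SELECT COUNT(*) FROM analytics.fct_orders WHERE order_id IS NULL",
    "negative_amounts": "SELECT COUNT(*) FROM analytics.fct_orders WHERE amount < 0",
    "duplicate_order_ids": """
        SELECT COUNT(*) FROM (
          SELECT order_id FROM analytics.fct_orders
          GROUP BY order_id HAVING COUNT(*) > 1
        )
    """,
}

failures = []
for name, sql in CHECKS.items():
    bad_rows = next(iter(client.query(sql).result()))[0]
    if bad_rows > 0:
        failures.append(f"{name}: {bad_rows} violating rows")

# Failing loudly lets the orchestrator mark the pipeline run as failed.
if failures:
    raise ValueError("Data quality checks failed: " + "; ".join(failures))
print("All data quality checks passed.")
```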
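And a minimal sketch of the event-driven pattern listed under preferred experience: consuming order events from a Pub/Sub subscription. The project and subscription names are assumed, and a real pipeline would route parsed events into BigQuery rather than printing them.

```python
# A sketch of streaming consumption from Pub/Sub; resource names are assumed.
import json
from concurrent.futures import TimeoutError

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "orders-sub")

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    event = json.loads(message.data)
    # A real pipeline would buffer this toward BigQuery; here we just
    # acknowledge after a successful parse.
    print("received order event:", event.get("order_id"))
    message.ack()

streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
try:
    streaming_pull.result(timeout=30)  # listen for 30 seconds, then stop
except TimeoutError:
    streaming_pull.cancel()
    streaming_pull.result()
```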