Bangalore, Karnataka, India
Information Technology
Full-Time
BigThinkCode
Overview
We are looking for a skilled Data Engineer to design, build, and maintain robust data pipelines and infrastructure. You will play a pivotal role in optimizing data flow, ensuring scalability, and enabling seamless access to structured and unstructured data across the organization. This role requires technical expertise in Python, SQL, ETL/ELT frameworks, and cloud data warehouses, along with strong collaboration skills to partner with cross-functional teams.
Company: BigThinkCode Technologies
Location: Chennai (Work from office / Hybrid)
Experience: 4 - 6 years
Key Responsibilities
- Design, develop, and maintain scalable ETL/ELT pipelines to process structured and unstructured data.
- Optimize and manage SQL queries for performance and efficiency in large-scale datasets.
- Work with data warehouse solutions (e.g., Redshift, BigQuery, Snowflake) to support analytics and reporting.
- Collaborate with data scientists, analysts, and business stakeholders to translate requirements into technical solutions.
- Implement solutions for streaming data (e.g., Apache Kafka, AWS Kinesis); prior streaming experience is preferred but not mandatory.
- Ensure data quality, governance, and security across pipelines and storage systems.
- Document architectures, processes, and workflows for clarity and reproducibility.
Required Skills
- Proficiency in Python for scripting, automation, and pipeline development.
- Expertise in SQL (complex queries, optimization, and database design).
- Hands-on experience with ETL/ELT tools (e.g., Apache Airflow, dbt, AWS Glue).
- Experience working with structured data (RDBMS) and unstructured data (JSON, Parquet, Avro).
- Familiarity with cloud-based data warehouses (Redshift, BigQuery, Snowflake).
- Knowledge of version control systems (e.g., Git) and CI/CD practices.
- Experience with streaming data technologies (e.g., Kafka, Kinesis, Spark Streaming).
- Exposure to cloud platforms (AWS, GCP, Azure) and their data services.
- Understanding of data modeling (dimensional, star schema) and optimization techniques.
- Team player with a collaborative mindset and ability to mentor junior engineers.
- Strong stakeholder management skills to align technical solutions with business goals.
- Excellent communication skills to explain technical concepts to non-technical audiences.
- Proactive problem-solving and adaptability in fast-paced environments.
Regards
Skills:- Python, SQL, Spark, ETL, Apache Airflow, Amazon Redshift, Git and Data modeling