Bangalore, Karnataka, India
Information Technology
Full-Time
UST
Data Engineer – Job Description
We are looking for a highly skilled Data Engineer to design, build, and maintain robust data pipelines and infrastructure. This role requires expertise in Python, PySpark, SQL, and modern cloud platforms such as Snowflake. The ideal candidate will collaborate with business stakeholders and analytics teams to ensure the efficient collection, transformation, and delivery of data to power insights and decision-making.
Responsibilities
- Understand business requirements, system designs, and security standards.
- Collaborate with SMEs to analyze existing processes, gather functional requirements, and identify improvements.
- Build and streamline data pipelines from various data sources using Python, PySpark, and SQL.
- Support data cataloging and knowledge base development.
- Develop tools for analytics and data science teams to optimize data product consumption.
- Enhance data system functionality in collaboration with data and analytics experts.
- Communicate insights using statistical analysis, data visualization, and storytelling techniques.
- Manage technical and business documentation for all data engineering efforts.
- Participate in hands-on development and coordinate with onshore/offshore teams.
Requirements
- 5+ years of experience building data pipelines on on-premises and cloud platforms (e.g., Snowflake).
- Strong expertise in Python, PySpark, and SQL for data ingestion, transformation, and automation.
- Experience developing Python-based applications using visualization and dashboarding libraries such as Plotly and Streamlit.
- Solid knowledge of data engineering concepts and practices including metadata management and data governance.
- Proficient in using cloud-based data warehousing and data lake environments.
- Familiarity with ELT/ETL tools like DBT and Cribl.
- Experience with incremental data capture, stream ingestion, and real-time data processing.
- Background in cybersecurity, IT infrastructure, or software systems.
- 3+ years of experience in cloud-based data warehouse and data lake architectures.
- Hands-on experience with data visualization tools (e.g., Tableau, Plotly, Streamlit).
- Strong communication skills and ability to translate complex data into actionable insights.
Skills
- Python
- PySpark
- SQL
- Snowflake (or other cloud data platforms)
- Plotly, Streamlit, Flask, Dask
- ELT/ETL tools (DBT, Cribl)
- Data visualization (Tableau, Plotly)
- Metadata management & data governance
- Stream processing & real-time data ingestion