Information Technology
Full-Time
InspironLabs
Overview
You will collaborate with data scientists, analysts, and engineering teams to develop scalable, integrated data solutions.
You will also mentor junior engineers and oversee project delivery timelines, ensuring high-quality outputs aligned with business goals.
Key Strengths
- AI & GenAI Focus: Harnessing AI and Generative AI to deliver smarter, future-ready solutions.
- Scalable Tech Stack: Building systems designed for high performance and resilience.
- Proven Enterprise Experience: Deploying robust solutions globally across diverse industries.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines that process large volumes of data efficiently.
- Implement real-time and batch data streaming solutions using Kafka for reliable data ingestion and processing.
- Write optimized SQL queries and Python/PySpark code to transform, aggregate, and analyze data across distributed systems.
- Architect and build cloud-based data solutions on platforms such as AWS, GCP, or Azure, utilizing services like Redshift, BigQuery, Glue, or Data Factory.
- Develop and manage data workflows and orchestration pipelines using Airflow and DBT, ensuring robust scheduling, monitoring, and error handling.
- Optimize the performance, reliability, and scalability of data storage and retrieval within data lakes and warehouses.
- Lead code reviews, implement best practices for code quality, and ensure data security and compliance standards are met.
- Collaborate closely with data scientists to enable efficient model training and deployment through clean, accessible data pipelines.
- Mentor and guide junior engineers on technologies, coding standards, and architectural decisions.
- Drive continuous improvement initiatives for data infrastructure and engineering practices, incorporating new technologies as appropriate.
Skills & Experience
- 4+ years of experience in data engineering roles, including hands-on technical leadership.
- Strong expertise with Big Data technologies such as Spark, Hadoop, and Kafka.
- Proficient in SQL and Python programming.
- Experience working with cloud data platforms (AWS, GCP, or Azure).
- Hands-on experience with orchestration and workflow management tools like Airflow and DBT.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
We Offer
- Opportunity to work on cutting-edge AI and GenAI powered projects.
- Dynamic, collaborative environment fostering innovation and professional growth.
- Competitive salary and benefits package.