Overview
---- What the Candidate Will Do ----
- Partner with engineers, analysts, and product managers to define technical solutions that support business goals
- Contribute to the architecture and implementation of distributed data systems and platforms
- Identify inefficiencies in data processing and proactively drive improvements in performance, reliability, and cost
- Serve as a thought leader and mentor in data engineering best practices across the organization
---- What the Candidate Will Need / Bonus Points ----
- 7+ years of hands-on experience in software engineering with a focus on data engineering
- Proficiency in at least one programming language such as Python, Java, or Scala
- Strong SQL skills and experience with large-scale data processing frameworks (e.g., Apache Spark, Flink, MapReduce, Presto)
- Demonstrated experience designing, implementing, and operating scalable ETL pipelines and data platforms
- Proven ability to work collaboratively across teams and communicate technical concepts to diverse stakeholders
- Deep understanding of data warehousing concepts and data modeling best practices
- Hands-on experience with Hadoop ecosystem tools (e.g., Hive, HDFS, Oozie, Airflow, Spark, Presto)
- Familiarity with streaming technologies such as Kafka or Samza
- Expertise in performance optimization, query tuning, and resource-efficient data processing
- Strong problem-solving skills and a track record of owning systems from design to production