Pune, Maharashtra, India
Information Technology
Full-Time
RAPSYS TECHNOLOGIES PTE LTD
Overview
We're Hiring: PySpark Data Engineer!
We are seeking an experienced PySpark Data Engineer to design, develop, and maintain large-scale data processing systems. The ideal candidate will have expertise in Apache Spark, Python, and big data technologies to build robust data pipelines and analytics solutions.
Location: Hyderabad, India
Work Mode: Flexible office & remote
Role: PySpark Data Engineer
What You'll Do
Design and develop scalable data pipelines using PySpark (see the sketch after this list)
Optimize data processing workflows for performance and reliability
Build ETL processes for large datasets across multiple sources
Deploy and maintain data solutions on cloud platforms
Collaborate with data scientists and analysts on data requirements
Troubleshoot and resolve data quality issues
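For illustration, a minimal sketch of the kind of PySpark ETL pipeline this role involves; the storage paths, dataset, and column names below are hypothetical and not part of the role description:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw order events from a (hypothetical) landing zone
orders = spark.read.parquet("s3://landing-zone/orders/")

# Transform: basic cleansing plus a daily revenue aggregate
daily_revenue = (
    orders
    .dropDuplicates(["order_id"])
    .filter(F.col("amount") > 0)
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date")
    .agg(
        F.sum("amount").alias("total_revenue"),
        F.countDistinct("customer_id").alias("unique_customers"),
    )
)

# Load: write the aggregate, partitioned by date, to a curated zone
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://curated-zone/daily_revenue/"
)

spark.stop()
```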
What We're Looking For
7+ years of experience in data engineering
Strong expertise in PySpark and the Apache Spark ecosystem
Proficiency in Python, SQL, and big data technologies, including Airflow and DevOps/orchestration tooling; stakeholder management skills; banking or telecom domain experience
Experience with cloud platforms (AWS/Azure/GCP)
Knowledge of data warehousing and distributed systems
Strong problem-solving and analytical skills
Ready to make an impact? Apply now and let's grow together!