Bangalore, Karnataka, India
Pharmaceuticals
Full-Time
MyCareernet
Overview
Company: Global Banking Organization
Key Skills: Python, PySpark, Spark SQL, Data Pipelines, Big Data, Palantir Foundry, Hive, Hadoop, SQL, AWS, Azure, JavaScript, HTML, CSS, Analytics, Data Visualization, Reinsurance Domain
Roles and Responsibilities:
- Implement data pipelines and analytics solutions to support key decision-making in Life & Health Reinsurance.
- Work closely with Product Owners and Engineering Leads to understand requirements and evaluate implementation efforts.
- Develop and maintain scalable data transformation pipelines to handle large data volumes.
- Implement analytics models and build data visualizations to provide actionable insights.
- Collaborate within a global development team to design and deliver cutting-edge solutions using Big Data and Machine Learning technologies.
Experience Requirement:
- 0-3 years of experience working with large-scale software systems in data engineering or analytics.
- Hands-on experience building scalable data transformation pipelines using Python/PySpark.
- Proficiency in SQL, preferably Spark SQL, for managing and transforming large datasets.
- Exposure to working with enterprise data platforms and distributed computing environments like Spark, Hive, or Hadoop.
- Practical knowledge of developing solutions in cloud environments such as AWS or Azure.
- Familiarity with Palantir Foundry is a strong advantage.
- Experience with JavaScript/HTML/CSS is a plus.
- Demonstrated analytical and problem-solving skills in data-centric projects.
- Ability to work in a global, cross-functional team and effectively communicate complex data insights.
Education: Any graduate or postgraduate degree.
Email
info@antaltechjobs.in