Pune, Maharashtra, India
Information Technology
Full-Time
SourcingXPress
Overview
Company: Red Elk
Website: www.redelk.co
Business Type: Startup
Company Type: Product & Service
Business Model: B2B
Funding Stage: Bootstrapped
Industry: Games
Salary Range: ₹ 30-35 Lacs PA
Job Description
Department: Engineering
At Red Elk Studios (www.redelk.co), we’re crafting immersive and unforgettable gaming experiences. Our latest endeavor is a partnership with Battle Creek Games, USA, on a next-gen open-sandbox hunting simulation game that pushes the boundaries of realism, exploration, and player freedom. We're looking for a seasoned Data Engineer to help shape this ambitious project.
Role Overview
We are seeking an experienced and motivated Data Engineer to join our dynamic data team. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines, managing our data warehouse infrastructure, and supporting analytics initiatives across the organization. You will work closely with data scientists, analysts, and other stakeholders to ensure data quality, integrity, and accessibility, enabling the organization to make data-driven decisions.
Responsibilities
As a data infrastructure expert, you will:
- Design and Develop Data Pipelines: Architect, develop, and maintain robust and scalable data pipelines for ingesting, processing, and transforming large volumes of data from multiple sources in real-time and batch modes.
- Data Warehouse Management: Manage, optimize, and maintain the data warehouse infrastructure, ensuring data integrity, security, and availability. Oversee the implementation of best practices for data storage, partitioning, indexing, and schema design.
- ETL Processes: Design and build efficient ETL (Extract, Transform, Load) processes to move data across various systems while ensuring high performance, reliability, and scalability.
- Data Integration: Integrate diverse data sources (structured, semi-structured, and unstructured data) into a unified data model that supports analytics and reporting needs.
- Support Analytics and BI: Collaborate with data analysts, data scientists, and business intelligence teams to understand data requirements and provide data sets, models, and solutions that support their analytics needs.
- Data Quality and Governance: Establish and enforce data quality standards, governance policies, and best practices. Implement monitoring and alerting to ensure data accuracy, consistency, and completeness.
- Operational Excellence: Drive the development of automated systems for provisioning, deployment, monitoring, failover, and recovery. Implement systems to monitor key performance metrics, logs, and alerts with a focus on automation and reducing manual intervention.
- Cross-functional Collaboration: Work closely with product, engineering, and QA teams to ensure the infrastructure supports and enhances development workflows and that services are deployed and operated smoothly at scale.
- Incident Management & Root Cause Analysis: Act as a first responder to data production issues, leading post-mortems and implementing long-term solutions to prevent recurrence. Ensure all incidents are handled promptly with a focus on minimizing impact.
- Security & Compliance: Ensure our infrastructure is designed with security best practices in mind, including encryption, access control, and vulnerability scanning.
- Continuous Improvement: Stay up-to-date with industry trends, technologies, and best practices, bringing innovative ideas into the team to improve reliability, performance, and scale.
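To make the pipeline and ETL duties above concrete, here is a minimal, illustrative sketch of a daily batch pipeline written with Apache Airflow's TaskFlow API (one of the orchestration tools listed under Technical Skills). Everything in it, from the DAG name to the extract, transform, and load logic, is a hypothetical placeholder rather than a description of Red Elk's actual stack.

# Illustrative sketch only (not part of the posting): a minimal daily batch
# pipeline using Apache Airflow's TaskFlow API (the `schedule` argument needs
# Airflow 2.4+). Source data, task logic, and warehouse target are placeholders.
from datetime import datetime, timezone

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_events_pipeline():
    @task
    def extract() -> list[dict]:
        # Stand-in for pulling raw records from a source system.
        return [{"user_id": 1, "event": "login"}, {"user_id": 2, "event": "purchase"}]

    @task
    def transform(records: list[dict]) -> list[dict]:
        # Basic cleaning/enrichment; a real job would also enforce schema and data quality checks.
        stamp = datetime.now(timezone.utc).isoformat()
        return [{**r, "processed_at": stamp} for r in records]

    @task
    def load(records: list[dict]) -> None:
        # Stand-in for a warehouse write (in practice, a provider hook such as BigQuery or Redshift).
        print(f"Loading {len(records)} records into the warehouse")

    load(transform(extract()))


daily_events_pipeline()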
Qualifications
- Education & Experience:
○ 8+ years of experience in data engineering, with a strong background in systems architecture, distributed systems, cloud infrastructure, or a related field.
○ Proven experience building and managing data pipelines, data warehouses, and ETL processes.
- Technical Skills:
○ Expertise in data pipeline tools and frameworks (e.g., AWS Glue, Google Dataflow, Apache Airflow, Apache NiFi, dbt).
○ Hands-on experience with cloud platforms and their data services (e.g., AWS, Azure, Google Cloud Platform).
○ Proficiency in programming languages such as Python, Java, or Scala for data manipulation and automation.
○ Knowledge of data modeling, schema design, and data governance principles.
○ Familiarity with distributed data processing frameworks like Apache Spark, Hadoop, or similar.
○ Experience with BI tools (e.g., Tableau, Power BI, Looker).
- Soft Skills:
○ Excellent communication and collaboration skills, with the ability to work effectively with technical and non-technical stakeholders.
○ Proactive mindset with the ability to work independently and handle multiple tasks in a fast-paced environment.
Why Join Us
- Impact: Lead critical initiatives that directly influence the stability, scalability, and security of our platform and infrastructure while positively impacting the business through data insights.
- Growth: Grow your leadership skills while working on complex technical challenges that require creative problem-solving.
- Culture: Join a collaborative, diverse, and high-performing team that values continuous learning and operational excellence.