Pune, Maharashtra, India
Information Technology
Other
QuantumBricks

Overview
Key Responsibilities:
- Data Pipeline Development: Design, build, and maintain scalable and efficient data pipelines for data ingestion, transformation, and storage.
- ETL (Extract, Transform, Load): Implement ETL processes that integrate data from multiple sources while ensuring accuracy and integrity.
- Data Storage Management: Work with relational (SQL) and non-relational (NoSQL) databases to store and manage large-scale datasets.
- Data Integration: Integrate data from various internal and external systems, such as APIs, cloud platforms, and data warehouses (e.g., Amazon Redshift, Google BigQuery, Snowflake).
- Data Quality & Validation: Ensure the data pipeline is robust, with proper error handling and validation to maintain data quality.
- Automation & Optimization: Automate manual data-related tasks, streamline processes, and optimize data storage, queries, and pipelines for better performance.
- Collaboration: Work closely with data analysts, data scientists, and software engineers to understand their data needs and provide the necessary infrastructure and support.
- Documentation & Best Practices: Document all data workflows, architectures, and processes; follow best practices for data security, privacy, and compliance.
- Data Monitoring & Maintenance: Continuously monitor the performance of data systems and perform maintenance as required to ensure data availability and system uptime.
- Technology Awareness: Stay current with emerging technologies in data engineering and data science, and proactively introduce innovative solutions to improve data systems.
Required Skills & Qualifications:
- Educational Background: Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
- Experience: 6+ years of experience in data engineering or a related field.
- Technical Skills:
  - Proficiency in programming languages such as Python, Java, or Scala.
  - Experience with big data technologies (e.g., Hadoop, Spark, Kafka).
  - Strong knowledge of SQL and NoSQL databases.
  - Hands-on experience with cloud platforms such as AWS, Google Cloud, or Azure.
  - Familiarity with data warehousing solutions (e.g., Redshift, Snowflake, BigQuery).
  - Experience with ETL frameworks (e.g., Apache Airflow, Talend, Informatica).
  - Knowledge of data modeling, schema design, and data partitioning techniques.
- Analytical Skills: Strong problem-solving skills and ability to troubleshoot and optimize data systems.
- Communication: Ability to communicate complex technical concepts to non-technical stakeholders.
Job Types: Full-time, Contractual / Temporary, Freelance
Pay: ₹1,230,000.00 - ₹2,440,000.00 per year
Benefits:
- Work from home
Schedule:
- Night shift
- UK shift
- US shift
Work Location: Remote