Overview
Data Engineer
Experience: 3-4 Years
Salary: Competitive
Preferred Notice Period: Within 45 Days
Opportunity Type: Hybrid (Mumbai)
Placement Type: Permanent
(*Note: This is a requirement for one of Uplers' Clients)
Must-have skills required:
Python OR SQL OR PostgreSQL OR MongoDB (NoSQL), AWS OR ADLS Gen2 (Azure)
Living Things (One of Uplers' Clients) is Looking for:
A Data Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.
Role Overview
Job Title: Data Engineer
Organization: Living Things Pvt. Ltd
Location: IIT Bombay, Powai, Mumbai
Job Type: Full-Time
Experience Level: Mid-Level (3-4 years of experience)
About Us:
Living Things is a pioneering IoT platform by iCapotech Pvt Ltd, dedicated to accelerating the net-zero journey towards a sustainable future. Our platform brings mindfulness to energy usage: it integrates seamlessly with existing air conditioners, empowering businesses and organisations to optimise and reduce energy usage, enhance operational efficiency, reduce carbon footprints, and drive sustainable practices. It also analyses electricity consumption across all locations from electricity bills. By harnessing the power of real-time data analytics and intelligent insights, our energy-saving algorithm delivers a minimum of 15% savings on air conditioners' energy consumption.
About the Role:
We are seeking a highly skilled and motivated Data Engineer to join our growing data team. You will play a critical role in designing, building, and maintaining our data infrastructure, enabling data-driven decision-making across the organization.
Job Responsibilities:
- Manage and optimize relational (PostgreSQL, MySQL) and NoSQL (MongoDB) databases, including performance tuning and schema evolution management.
- Leverage cloud platforms (AWS, Azure, GCP) for data storage, processing, and analysis, with a focus on optimizing cost, performance, and scalability using cloud-native services.
- Design, build, and maintain robust, scalable, and fault-tolerant data pipelines using modern orchestration tools (Apache Airflow, Apache Flink, Dagster).
- Implement and manage real-time data streaming solutions (Apache Kafka, Kinesis, Pub/Sub).
- Apply knowledge of BI tools (Metabase, Power BI, Looker, QuickSight) to design data models that support efficient querying for analytical purposes.
- Collaborate closely with Data Scientists, Analysts, and Business stakeholders to understand data requirements and translate them into technical data solutions.
- Stay updated on the latest data engineering technologies and best practices, and advocate for their adoption where appropriate.
- Contribute to the development and improvement of data infrastructure and processes, including embracing DataOps principles for automation and collaboration.
- Work with containerization (e.g., Docker) and orchestration tools (e.g., Kubernetes) for deploying and managing data services.
- Implement data governance policies and practices, including data lineage and metadata management.
Skills and Qualifications:
Essential:
- Strong proficiency in Python, SQL, and MongoDB.
- Experience with relational databases (PostgreSQL, MySQL) and NoSQL databases (MongoDB). Understanding of database internals, indexing, and query optimization.
- Knowledge of Data Modeling, Data Warehousing principles, and ETL/ELT methodologies.
- Proficiency with cloud platforms (AWS, Azure, GCP), including data storage (S3, ADLS Gen2, GCS), data warehousing services (e.g., Redshift, Snowflake, BigQuery), and managed services for data processing (AWS Glue, Azure Data Factory, Google Cloud Dataflow).
- Experience with data quality and validation techniques, and implementing automated data quality frameworks.
- Strong analytical and problem-solving abilities. Ability to troubleshoot complex data pipeline issues.
- Experience with BI tools (Metabase, Power BI, Looker, QuickSight) from a data provisioning perspective.
Preferred:
- Experience with Data Lake, Data Lakehouse, or Data Mesh architectures.
- Hands-on experience with data processing frameworks like Apache Spark, Apache Kafka, and stream processing technologies (Spark Streaming, Flink).
- Experience with workflow orchestration tools like Apache Airflow, Dagster.
- Understanding of DataOps and MLOps concepts and practices.
- Experience with data observability and monitoring tools.
- Excellent communication and presentation skills.
How to apply for this opportunity:
Easy 3-Step Process:
1. Click on Apply! and register or log in on our portal.
2. Upload your updated resume & complete the screening form.
3. Increase your chances of getting shortlisted & meet the client for the interview!
About Uplers:
Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help all our talents find and apply for relevant product and engineering job opportunities and progress in their careers.
(Note: There are many more opportunities apart from this on the portal.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!