Data Research - Database Engineer | Antal Tech Jobs
4 Weeks ago

Data Research - Database Engineer

Bangalore, Karnataka, India
Information Technology
Full-Time
Uplers

Overview

Experience: 5+ years

Salary: Confidential (based on experience)

Shift: (GMT+05:30) Asia/Kolkata (IST)

Opportunity Type: Remote

Placement Type: Full-time, permanent position

(Note: This is a requirement for one of Uplers' clients, Forbes Advisor.)

What do you need for this opportunity?

Must have skills required:

Python, PostgreSQL, Snowflake, AWS RDS, BigQuery, OOP, monitoring tools, Prometheus, ETL tools, data warehousing, Pandas, PySpark, AWS Lambda

Forbes Advisor is Looking for:

Job Description:

Data Research - Database Engineer

Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most.

Position Overview

At Marketplace, our mission is to help readers turn their aspirations into reality. We arm people with trusted advice and guidance, so they can make informed decisions they feel confident in and get back to doing the things they care about most.

We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams, including Content, SEO, Business Intelligence, Finance, HR, Marketing, Production, Technology and Sales. The team brings rich industry knowledge to Marketplace’s global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel.

The Data Research Engineering Team is a brand-new team responsible for managing data from acquisition to presentation, collaborating with other teams while also operating independently. Its responsibilities include acquiring and integrating data, processing and transforming it, managing databases, ensuring data quality, visualizing data, automating processes, working with relevant technologies, and ensuring data governance and compliance. The team plays a crucial role in enabling data-driven decision-making and meeting the organization's data needs.

A typical day in the life of a Database Engineer/Developer involves designing, developing, and maintaining a robust and secure database infrastructure to manage company data efficiently. The engineer collaborates with cross-functional teams to understand data requirements and migrates data from spreadsheets or other sources to relational databases or cloud-based solutions like Google BigQuery and AWS. They develop import workflows and scripts to automate data imports, optimize database performance, ensure data integrity, and implement data security measures. Creativity in problem-solving and a continuous-learning mindset contribute to improving data engineering processes. Proficiency in SQL, database design principles, and familiarity with Python programming are key qualifications for this role.
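An import workflow of the kind described above can be sketched in Python with Pandas and SQLAlchemy. This is a minimal illustration, not part of the job description: the table name, columns, and validation rule are hypothetical, and SQLite stands in for PostgreSQL or BigQuery so the sketch is self-contained.

```python
# Illustrative spreadsheet-to-database import workflow.
# SQLite stands in for PostgreSQL/BigQuery; table and column names are made up.
import io

import pandas as pd
from sqlalchemy import create_engine, text


def import_spreadsheet(csv_buffer, engine, table_name="products"):
    """Load a CSV export into a relational table, with basic validation."""
    df = pd.read_csv(csv_buffer)
    # Enforce a simple data-quality rule before loading.
    if df["price"].isna().any():
        raise ValueError("price column contains missing values")
    # Normalize text fields for consistency.
    df["name"] = df["name"].str.strip()
    df.to_sql(table_name, engine, if_exists="replace", index=False)
    return len(df)


engine = create_engine("sqlite:///:memory:")
csv_data = io.StringIO("name,price\n widget ,9.99\ngadget,4.50\n")
rows = import_spreadsheet(csv_data, engine)
with engine.connect() as conn:
    count = conn.execute(text("SELECT COUNT(*) FROM products")).scalar()
```

In a production pipeline the same shape applies: read from the source, validate against data-quality rules, then load, with the connection string pointing at the real warehouse.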

Responsibilities:

  • Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely.
  • Work with databases of varying scales, including small-scale databases, and databases involving big data processing.
  • Work on data security and compliance, by implementing access controls, encryption, and compliance standards.
  • Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture.
  • Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery.
  • Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency.
  • Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms.
  • Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity.
  • Monitor database health and identify and resolve issues.
  • Collaborate with the full-stack web developer in the team to support the implementation of efficient data access and retrieval mechanisms.
  • Implement data security measures to protect sensitive information and comply with relevant regulations.
  • Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows.
  • Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices.
  • Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines.
  • Familiarize yourself with tools and technologies used in the team's workflow, such as KNIME for data integration and analysis.
  • Use Python for tasks such as data manipulation, automation, and scripting.
  • Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines.
  • Assume accountability for achieving development milestones.
  • Prioritize tasks to ensure timely delivery, in a fast-paced environment with rapidly changing priorities.
  • Collaborate with and assist fellow members of the Data Research Engineering Team as required.
  • Perform tasks with precision and build reliable systems.
  • Leverage online resources such as Stack Overflow, ChatGPT, and Bard effectively, while considering their capabilities and limitations.
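The performance-tuning responsibilities above (analyzing query execution plans, implementing indexing strategies) can be illustrated with a short sketch. The table, column, and index names are hypothetical, and SQLite's EXPLAIN QUERY PLAN stands in for PostgreSQL's EXPLAIN:

```python
# Illustrative check that adding an index changes the query plan.
# SQLite's EXPLAIN QUERY PLAN stands in for PostgreSQL's EXPLAIN;
# the table, column, and index names are made up for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE reviews (id INTEGER PRIMARY KEY, product_id INTEGER, rating REAL)"
)
conn.executemany(
    "INSERT INTO reviews (product_id, rating) VALUES (?, ?)",
    [(i % 50, i % 5) for i in range(1000)],
)


def plan(sql):
    """Return the query plan detail text for a statement."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))


before = plan("SELECT * FROM reviews WHERE product_id = 7")
conn.execute("CREATE INDEX idx_reviews_product ON reviews (product_id)")
after = plan("SELECT * FROM reviews WHERE product_id = 7")
# Before the index the plan is a full table scan; afterwards it is an index search.
```

The same workflow applies on PostgreSQL: run EXPLAIN (or EXPLAIN ANALYZE) on a slow query, add or adjust an index, and confirm the plan changed as intended.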


Skills And Experience

  • Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
  • Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team’s data presentation goals.
  • Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes.
  • Proficiency with tools like Prometheus, Grafana, or ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting.
  • Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
  • Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
  • Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
  • Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
  • Knowledge of SQL and understanding of database design principles, normalization, and indexing.
  • Knowledge of data migration, ETL (Extract, Transform, Load) processes, or integrating data from various sources.
  • Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
  • Eagerness to develop import workflows and scripts to automate data import processes.
  • Knowledge of data security best practices, including access controls, encryption, and compliance standards.
  • Strong problem-solving and analytical skills with attention to detail.
  • Creative and critical thinking.
  • Strong willingness to learn and expand knowledge in data engineering.
  • Familiarity with Agile development methodologies is a plus.
  • Experience with version control systems, such as Git, for collaborative development.
  • Ability to thrive in a fast-paced environment with rapidly changing priorities.
  • Ability to work collaboratively in a team environment.
  • Good and effective communication skills.
  • Comfortable with autonomy and ability to work independently.


Perks:

  • Day off on the 3rd Friday of every month (one long weekend each month)
  • Monthly Wellness Reimbursement Program to promote health and well-being
  • Monthly Office Commutation Reimbursement Program
  • Paid paternity and maternity leave


How to apply for this opportunity?

  • Step 1: Click on Apply and register or log in on our portal.
  • Step 2: Complete the screening form and upload your updated resume.
  • Step 3: Increase your chances of getting shortlisted and meet the client for the interview!


About Uplers:

Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support you through any grievances or challenges you may face during the engagement.

(Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well).

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!