Overview
About the Company
Impact Analytics™ (Series D Funded) delivers AI-native SaaS solutions and consulting services that help companies maximize profitability and customer satisfaction through deeper data insights and predictive analytics. With a fully integrated, end-to-end platform for planning, forecasting, merchandising, pricing, and promotions, Impact Analytics empowers companies to make smarter decisions based on real-time insights rather than relying on last year’s inputs to forecast and plan this year’s business. Powered by over one million machine learning models, Impact Analytics has been leading AI innovation for a decade, setting new benchmarks in forecasting, planning, and operational excellence across the retail, grocery, manufacturing, and CPG sectors. In 2025, Impact Analytics is at the forefront of the Agentic AI revolution, delivering autonomous solutions that enable businesses to adapt in real time, optimize operations, and drive profitability without manual intervention. Here’s a link to our website: www.impactanalytics.co.
Some of our accolades include:
- Ranked as one of America's Fastest-Growing Companies by Financial Times for five consecutive years: 2020-2024.
- Ranked as one of America's Fastest-Growing Private Companies by Inc. 5000 for seven consecutive years: 2018-2024.
- Voted #1 by more than 300 retailers worldwide in the RIS Software LeaderBoard 2024 report.
- Ranked #72 on Fortune's list of America's Most Innovative Companies in 2023, alongside companies like Microsoft, Tesla, Apple, and IBM.
- Forged a strategic partnership with Google to equip retailers with cutting-edge generative AI tools.
- Recognized in multiple Gartner reports, including Market Guides and Hype Cycle, spanning assortments, merchandising, forecasting, algorithmic retailing, and Unified Price, Promotion, and Markdown Optimization Applications.
About the Role
We are looking for a Data Engineer to join our data team and solve critical, data-driven business problems. The person in this role will be responsible for expanding and optimizing our existing end-to-end architecture, including the data pipeline architecture. The Data Engineer will collaborate with software developers, database architects, data analysts, and data scientists on data initiatives and will ensure an optimal, consistent data delivery architecture across ongoing projects. The right candidate will have hands-on experience building a hybrid set of data pipelines tailored to business requirements.
Responsibilities
- Develop, construct, test, and maintain both existing and new data-driven architectures.
- Align architecture with business requirements and provide the solutions that best address the business problems.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of sources using SQL and AWS big data technologies (a minimal pipeline sketch follows this list).
- Acquire data from multiple sources across the organization.
- Use programming languages and tools efficiently to collate the data.
- Identify ways to improve data reliability, efficiency, and quality.
- Use data to discover tasks that can be automated.
- Deliver updates to stakeholders based on analytics.
- Establish practices for data reporting and continuous monitoring.
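To illustrate the kind of pipeline work described above, here is a minimal sketch of an extract-transform-load job in Python. The connection string, source table, and output path are hypothetical and shown only as an example of the pattern, not as a prescribed implementation.

```python
# Minimal ETL sketch (illustrative only): extract recent transactions from a
# relational source with SQL, derive a revenue column, aggregate, and load
# the result as a Parquet file. The connection string, table, and output
# path below are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

SOURCE_DSN = "postgresql://user:password@host:5432/sales"  # hypothetical
OUTPUT_PATH = "daily_sales.parquet"                        # hypothetical

def run_daily_sales_pipeline() -> None:
    engine = create_engine(SOURCE_DSN)

    # Extract: pull the last day of transactions with plain SQL.
    query = """
        SELECT store_id, sku, quantity, unit_price, sold_at
        FROM transactions
        WHERE sold_at >= CURRENT_DATE - INTERVAL '1 day'
    """
    df = pd.read_sql(query, engine)

    # Transform: derive revenue and aggregate to store/SKU grain.
    df["revenue"] = df["quantity"] * df["unit_price"]
    daily = (
        df.groupby(["store_id", "sku"], as_index=False)
          .agg(units=("quantity", "sum"), revenue=("revenue", "sum"))
    )

    # Load: write a columnar file for downstream analytics tools.
    daily.to_parquet(OUTPUT_PATH, index=False)

if __name__ == "__main__":
    run_daily_sales_pipeline()
```

In practice a job like this would typically run under a scheduler or orchestrator with monitoring and alerting, in line with the reporting and continuous-monitoring responsibilities listed above.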
Qualifications
- Graduate or postgraduate degree in Computer Science or a similar quantitative field (B.Tech/B.E. or M.Tech/M.E. in Computer Science or IT).
- 1+ years of relevant work experience as a Data Engineer or in a similar role.
Required Skills
- Hands-on experience with a range of databases and Python tools, and the ability to solve complex business problems using data.
- Advanced SQL and data-modelling knowledge, including experience with relational databases and query authoring, as well as working familiarity with a variety of databases.
- Experience developing and optimizing ETL pipelines, big data pipelines, and data-driven architectures.
- Strong programming skills in Python and shell scripting.
- Strong analytical skills related to working with different types of datasets.
- Experience building processes that support data transformation, data structures, metadata, dependency management, and workload management.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Strong knowledge of working with different operating systems (Linux, Windows, etc.).
Preferred Skills
- Experience with big data tools: Hadoop, Spark, Hive, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with GCP cloud services: GCS, BigQuery, and VMs (see the sketch after this list).
- Experience with object-oriented/functional scripting languages such as Python.
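As a small illustration of the preferred GCP skills, the sketch below runs a query against BigQuery using the official Python client library. The project, dataset, and table names are hypothetical and used only to show the pattern.

```python
# Minimal BigQuery sketch (illustrative only): run a SQL query and read rows.
# The project, dataset, and table names below are hypothetical.
from google.cloud import bigquery

def top_skus_by_revenue(limit: int = 10):
    client = bigquery.Client(project="example-project")  # hypothetical project

    query = f"""
        SELECT sku, SUM(revenue) AS total_revenue
        FROM `example-project.retail.daily_sales`
        GROUP BY sku
        ORDER BY total_revenue DESC
        LIMIT {limit}
    """
    rows = client.query(query).result()  # blocks until the query completes
    return [(row.sku, row.total_revenue) for row in rows]

if __name__ == "__main__":
    for sku, revenue in top_skus_by_revenue():
        print(sku, revenue)
```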