
Overview
Key Responsibilities
1. Comprehensive Data Source Management
- Take complete ownership of identifying, auditing, and managing all existing and new data sources.
- Monitor and ensure timely ingestion of data from all subscribed providers.
- Evaluate subscription utility to optimize costs (e.g., Refinitiv, eSignal, KRX).
- Conduct regular audits to reduce spend on underutilized or obsolete sources.
2. Collaboration with Quantitative Research
- Work closely with quants to understand and validate how various datasets are applied in research and model development.
- Help prioritize datasets based on business value, coverage, and integration feasibility.
3. Integration with the Tech Team
- Partner with software engineers to ensure seamless and efficient ingestion pipelines.
- Standardize data ingestion and storage practices for scalability and query performance.
- Contribute to defining and enforcing data engineering best practices.
4. Data Quality Assurance
- Continuously monitor data quality to detect errors, inconsistencies, or gaps, especially at the granular level.
- Establish robust processes for anomaly detection and timely resolution.
- Iterate on validation tools and testing frameworks to improve data reliability.
5. Data Vendor Management & Pipeline Expansion
- Proactively source and evaluate new external data vendors and offerings (e.g., factor models, fundamental and sentiment data).
- Build and maintain a pipeline of potential and exploratory data sources.
- Stay abreast of industry trends and novel data types relevant to finance and trading.
6. Unified Data Access & Tooling
- Develop and maintain a unified RPC-based data access library in Python, compatible with all key systems (e.g., Balte, Slippage engine).
- Ensure consistent and performant access to all ingested datasets.
7. Proprietary Data & Signal Development (Long-Term)
- Collaborate with research and engineering teams to define and compute proprietary indicators (e.g., custom sentiment scores).
- Leverage raw and alternative data sources to build unique signals that provide a competitive edge.
Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field from a Tier 1 college.
- 1-3 years of experience in data engineering or a similar role, preferably in a financial or quantitative research environment.
- Strong Python skills, with experience building APIs and reusable libraries.
- Proficiency in SQL and experience with large-scale data storage solutions (e.g., Snowflake, Redshift, or similar).
- Experience with distributed data processing frameworks (e.g., Spark, Dask) is a plus.
- Familiarity with market data vendors (e.g., Refinitiv, Bloomberg, eSignal) and financial data structures is highly desirable.
- Proven ability to manage external vendor relationships and assess data utility.
- Excellent communication and collaboration skills to work effectively across research and tech teams.
Job Type: Full-time
Pay: ₹2,000,000.00 - ₹2,500,000.00 per year
Schedule:
- Day shift
Experience:
- Spark: 1 year (Preferred)
- Dask: 1 year (Preferred)
- Refinitiv: 1 year (Preferred)
- Bloomberg: 1 year (Preferred)
- eSignal: 1 year (Preferred)
- financial data structures: 1 year (Preferred)
- Python: 1 year (Preferred)
- building APIs: 1 year (Preferred)
- reusable libraries: 1 year (Preferred)
- Snowflake: 1 year (Preferred)
- Redshift: 1 year (Preferred)
Work Location: In person