
Overview
City: Bengaluru
Address: Bengaluru-Whitefield, Karnataka 560037, IN
About Company:
Sigmoid is a leading data engineering services and AI consulting company that helps enterprises gain a competitive advantage with effective data-driven decision-making. Our team is strongly driven by the passion to unravel data complexities. We generate actionable insights and translate them into successful business strategies.
We leverage our expertise in open-source and cloud technologies to develop innovative frameworks catering to specific customer needs. Our unique approach has positively influenced the business performance of our Fortune 500 clients across CPG, retail, banking, financial services, manufacturing, and other verticals.
Backed by Sequoia Capital, Sigmoid has offices in New York, San Francisco, Dallas, Lima, Amsterdam, and Bengaluru. We are recognized among the world's fastest-growing and most innovative tech companies, with awards and recognitions including the Deloitte Technology Fast 500, Financial Times - The Americas’ Fastest Growing Companies, Inc. 5000, Great Place To Work - India’s Best Leaders in Times of Crisis, Data Breakthrough, Aegis Graham Bell, TiE50, NASSCOM Emerge 50, and others.
Job Description:
Your mission, roles and requirements:
We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. You will be responsible for designing, implementing, and maintaining data pipelines to support our organization's data infrastructure and analytics initiatives. You will collaborate closely with data scientists, analysts, and other stakeholders to ensure the seamless flow of data across various systems and platforms.
More specifically, you will need:
• Proficiency in the Python programming language for data manipulation and pipeline development (a brief illustrative sketch follows this list).
• Strong understanding of SQL and relational databases for data querying and manipulation.
• Experience with version control systems like Git for code management and collaboration.
• Working knowledge of at least one cloud platform (AWS, GCP, or Azure) is required.
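For illustration, here is a minimal sketch of the kind of Python + SQL pipeline step this role involves. It uses the standard-library sqlite3 module purely to stay self-contained; the database file, table, and column names (warehouse.db, raw_orders, order_id, customer_id, amount) are hypothetical.

    # Illustrative sketch only; table and column names are hypothetical.
    import sqlite3

    def load_clean_orders(db_path: str) -> list[tuple]:
        """Query a relational store and drop rows with missing amounts."""
        with sqlite3.connect(db_path) as conn:
            cur = conn.execute(
                "SELECT order_id, customer_id, amount "
                "FROM raw_orders WHERE amount IS NOT NULL"
            )
            return cur.fetchall()

    if __name__ == "__main__":
        rows = load_clean_orders("warehouse.db")
        print(f"Loaded {len(rows)} clean order rows")

In practice the same pattern applies against any production relational store; only the driver and connection string change.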
Key Responsibilities:
- Design, develop, and maintain robust data pipelines and architectures within the AWS ecosystem.
- Collaborate with cross-functional teams to gather requirements and deliver effective data solutions.
- Implement and manage AWS services to ensure optimal performance and security of data processes.
- Perform data integration, transformation, and cleansing tasks using PySpark and Python (see the illustrative sketch after this list).
- Monitor and troubleshoot data processes, ensuring high availability and performance.
- Maintain documentation of data processes and solutions for compliance and knowledge sharing.
- Actively participate in Agile ceremonies and contribute to continuous improvement initiatives.
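For illustration, a minimal PySpark sketch of the integration and cleansing work described above. The S3 paths and the column names (order_id, order_ts, amount) are hypothetical.

    # Illustrative sketch only; paths and column names are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-cleansing").getOrCreate()

    # Read raw JSON events from a hypothetical landing zone.
    raw = spark.read.json("s3://example-bucket/raw/orders/")

    cleaned = (
        raw
        .dropDuplicates(["order_id"])                     # de-duplicate on the business key
        .filter(F.col("amount").isNotNull())              # drop rows with missing amounts
        .withColumn("order_date", F.to_date("order_ts"))  # normalize timestamp to a date
    )

    # Write curated output for downstream analytics consumers.
    cleaned.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

Parquet is a common sink on AWS because it is columnar, compresses well, and parallelizes cleanly across Spark executors.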
Education & Desired Experience:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• 4 to 7 years of experience in data engineering or a related role.
• Strong problem-solving skills and attention to detail.
• Excellent communication and collaboration skills.