Pune, Maharashtra, India
Finance & Banking
Full-Time
Tech Mahindra
Overview
Job Summary
Position Overview: A&DT, UMO for the Hygiene and Risk Squad, to mitigate risk, abide by compliance policies, and secure the firm's application architecture, design, and platform from unidentified threats.

Education: Bachelor's/Master's degree in Engineering, preferably Computer Science/Engineering.

Job Location: Mumbai/Bangalore

Qualifications - Primary Skills / Must Have
- 3-5 years of experience with the technical analysis, design, development, and implementation of data warehousing / data lake solutions
- Well versed and hands-on in Azure pipelines, Snowflake, Azure Databricks, Hive, PySpark, and Unix

Additional Skills
- Good understanding of and experience in Hive/Impala/Spark
- Strong SQL programming and stored procedure development skills
- Strong UNIX shell scripting experience to support data warehousing solutions
- Process oriented, focused on standardization, streamlining, and implementation of a best-practices delivery approach
- Excellent problem-solving and analytical skills
- Excellent verbal and written communication skills
- Experience in optimizing large data loads

Secondary Skills / Desired Skills
- Good team player
- Experience with OLAP/relational databases
- Exposure to an Agile development environment would be a plus
- Knowledge of the TWS scheduler would be an added advantage
- Strong understanding of the data warehousing domain
- Good understanding of dimensional modelling

Roles and Responsibilities
As an Azure pipelines, Snowflake, Azure Databricks, Hive, PySpark, and Unix developer, the candidate is expected to:
- Design and develop Azure pipelines, Snowflake, Azure Databricks, Hive, PySpark, and Unix solutions
- Undertake end-to-end project delivery (from inception to post-implementation support), including review and finalization of business requirements, creation of functional specifications and/or system designs, and ensuring that the end solution meets business needs and expectations
- Develop new transformation processes to load data from source to target, and performance-tune existing Azure pipelines, Snowflake, Azure Databricks, Hive, PySpark, and Unix code (mappings, sessions); a minimal sketch of this kind of work follows this description
- Analyse existing designs and interfaces and apply design modifications or enhancements
- Code and document data processing scripts and stored procedures
- Provide business insights and analysis findings for ad hoc data requests
- Test software components and complete solutions (including debugging and troubleshooting) and prepare migration documentation
- Provide reporting-line transparency through periodic updates on project or task status
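For illustration only, here is a minimal PySpark sketch of the kind of source-to-target transformation and load described under Roles and Responsibilities; the paths, column names, and table names are hypothetical and not part of the posting:

    # Minimal source-to-target load sketch (all names hypothetical).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("source_to_target_load").getOrCreate()

    # Read raw source data from a hypothetical landing path.
    src = spark.read.parquet("/data/raw/transactions")

    # Transform: filter posted records, derive a date column,
    # and aggregate amounts by day and account.
    tgt = (
        src.filter(F.col("status") == "POSTED")
           .withColumn("trade_date", F.to_date("trade_ts"))
           .groupBy("trade_date", "account_id")
           .agg(F.sum("amount").alias("daily_amount"))
    )

    # Write the result to a hypothetical curated target table.
    tgt.write.mode("overwrite").saveAsTable("curated.daily_account_totals")

    spark.stop()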