Overview
Job Title: Senior Software Engineer – Databricks Platform
Location: Hyderabad, India
Experience: 6–10 Years
Company: Anblicks
About Anblicks
Anblicks is a cloud data engineering and analytics company helping enterprises modernize their data platforms through Cloud, Data Engineering, AI/ML, and Advanced Analytics solutions. We specialize in building scalable, secure, and high-performance data ecosystems that power enterprise decision-making.
We are seeking a highly skilled Senior Software Engineer – Databricks Platform to design and build enterprise-grade, production-ready data applications on Databricks. This is a hands-on technical role requiring deep platform expertise, ownership mindset, and strong collaboration skills.
Role Overview
This role focuses on designing, developing, and operating scalable data-intensive systems on the Databricks Lakehouse Platform. You will work closely with Architects, Data Engineers, Analytics teams, and client stakeholders to deliver secure, high-performance, and cost-optimized solutions.
This role is ideal for senior engineers who can handle ambiguous requirements, architect end-to-end solutions, and take ownership of production systems.
Key Responsibilities
Databricks Platform Engineering
- Design, develop, and maintain production-grade applications and data pipelines on Databricks.
- Build and optimize stored procedures, notebooks, and UDFs using Java and Python.
- Write efficient, well-structured SQL for analytics, transformations, and reporting.
- Develop scalable Spark-based processing frameworks for batch and streaming workloads.
- Leverage Delta Lake for transactional reliability and performance optimization.
Data Governance & Secure Sharing
- Implement and manage data governance using Unity Catalog, including permissions, lineage, and data discovery.
- Design and support Delta Sharing solutions for secure internal and external data collaboration.
- Build and enhance capabilities in Databricks Clean Room environments.
- Ensure compliance with enterprise data security and governance standards.
Performance & Cost Optimization
- Leverage Databricks Serverless capabilities to optimize cost, scalability, and performance.
- Tune Spark jobs and optimize query performance for large-scale data processing.
- Implement monitoring, logging, and observability best practices.
Architecture & Ownership
- Translate business requirements into scalable technical designs.
- Own features and systems end-to-end (design, implementation, testing, deployment, and operational support).
- Participate in architecture reviews and engineering best practices.
- Improve reliability, scalability, and resilience of data systems.
Collaboration & Leadership
- Collaborate with cross-functional teams including Data Engineering, Analytics, DevOps, and client stakeholders.
- Mentor junior engineers and provide technical guidance.
- Contribute to coding standards, documentation, and reusable frameworks.
Required Qualifications
- 6–10 years of professional software engineering experience.
- 3+ years of hands-on Databricks experience in production environments.
- Strong proficiency in Java, Python, and SQL.
- Hands-on experience with:
- Apache Spark
- Delta Lake
- Unity Catalog (governance, permissions, lineage)
- Delta Sharing
- Databricks Serverless
- Strong understanding of distributed systems and large-scale data processing.
- Experience designing and optimizing enterprise-grade data pipelines.
- Strong communication skills with the ability to explain technical concepts to both technical and business stakeholders.
- Demonstrated ownership mindset and ability to drive work independently.
Preferred Qualifications
- Experience with cloud platforms (AWS, Azure, or GCP) in a Databricks environment.
- Familiarity with CI/CD pipelines for data and application workflows.
- Experience implementing data governance, security, and compliance controls.
- Experience supporting multi-tenant or client-facing data platforms.
- Exposure to cost optimization strategies for large-scale data workloads.
- Databricks certification (preferred).
- Experience leading enterprise data modernization initiatives.
- Strong architectural thinking with practical implementation skills.
- Experience building reusable frameworks and platform capabilities.
- Ability to operate in high-visibility, client-facing engagements.
- Passion for building scalable, secure, and performant data systems.