Tallo

Senior Data Fabric Engineer

Digital Minds Global Technologies Inc.

San Jose, CA (In Person)

Full-Time

Posted 4 days ago (Updated 1 day ago) • Actively hiring

Expires 6/13/2026

Job Description

Role Overview

We are seeking a highly skilled Senior Data Engineer with deep specialization in Apache Spark and the Microsoft Fabric ecosystem. The ideal candidate isn't just a notebook user: you are an expert in developing, packaging, and deploying Spark-based JAR files (Scala/Java) to handle complex, high-scale data processing requirements. You will be responsible for architecting robust ETL/ELT pipelines, optimizing Spark performance, and leveraging the full suite of Microsoft Fabric tools (OneLake, Lakehouse, and Data Factory) to drive our data strategy forward.
Key Responsibilities

Custom Spark Development:
Design and develop high-performance data processing applications using Scala or Java, compiled into JARs for execution on Spark clusters.
Fabric Implementation:
Architect and maintain end-to-end data solutions within Microsoft Fabric, utilizing Synapse Data Engineering and OneLake.
Performance Tuning:
Optimize Spark jobs by managing partitions, caching strategies, and memory management to ensure efficient resource utilization.
Pipeline Orchestration:
Build and automate sophisticated data workflows using Fabric Data Factory and Airflow (or similar).
DevOps & CI/CD:
Manage the lifecycle of Spark JARs through automated CI/CD pipelines (Azure DevOps/GitHub Actions), ensuring seamless deployment to Fabric environments.
Data Modeling:
Implement Medallion Architecture (Bronze/Silver/Gold) and maintain Delta Lake tables for ACID compliance and time travel capabilities.
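The Medallion (Bronze/Silver/Gold) layering mentioned above can be illustrated with a minimal, Spark-free Python sketch. In a real Fabric Lakehouse each layer would be a Delta Lake table maintained by Spark jobs; the record fields and table contents here are purely hypothetical:

```python
# Conceptual sketch of Medallion layering with plain Python structures.
# Bronze = raw ingested records, Silver = cleaned/normalized records,
# Gold = business-level aggregates. Field names are illustrative only.

def to_silver(bronze_rows):
    """Clean raw (Bronze) records: drop malformed rows, normalize types."""
    silver = []
    for row in bronze_rows:
        if row.get("amount") is None or row.get("region") is None:
            continue  # discard malformed raw records
        silver.append({"region": row["region"].strip().upper(),
                       "amount": float(row["amount"])})
    return silver

def to_gold(silver_rows):
    """Aggregate cleaned (Silver) records into a Gold summary by region."""
    totals = {}
    for row in silver_rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

bronze = [
    {"region": "west ", "amount": "10.5"},
    {"region": None, "amount": "3.0"},   # malformed: dropped in Silver
    {"region": "west", "amount": "4.5"},
    {"region": "east", "amount": "7.0"},
]

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'WEST': 15.0, 'EAST': 7.0}
```

In Spark, the same pattern would typically be expressed as DataFrame transformations writing Delta tables at each layer, which is what gives the Silver and Gold tables ACID guarantees and time travel.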
Technical Requirements

Core Essentials

Spark Expertise:
5+ years of experience with Apache Spark, specifically in writing and deploying compiled JAR files rather than relying solely on PySpark notebooks.
Languages:
Proficiency in Scala or Java (required) and Python/SQL (preferred).
Platform:
Hands-on experience with Microsoft Fabric or migrations from Azure Databricks/Synapse to Fabric.
Storage:
Deep understanding of Delta Lake format and Parquet optimization.
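As a rough illustration of the partition-sizing reasoning behind the tuning and Parquet-optimization requirements above, here is a small Python sketch of a common rule of thumb (targeting roughly 128 MB per partition; the target size and the example dataset size are assumptions, not values from this posting):

```python
import math

def suggest_partitions(dataset_bytes, target_bytes=128 * 1024 * 1024,
                       min_partitions=1):
    """Suggest a Spark partition count from a ~128 MB-per-partition
    rule of thumb. This is a heuristic only: real tuning also weighs
    available cores, data skew, and shuffle behavior."""
    return max(min_partitions, math.ceil(dataset_bytes / target_bytes))

# e.g. a 10 GiB dataset -> 80 partitions of ~128 MiB each
print(suggest_partitions(10 * 1024**3))  # 80
```

A count like this might feed a `repartition(n)` call before a large write, so Parquet/Delta files land near the target size instead of as many small files.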
