
Senior Data Engineer (Azure & Databricks)


Emergent Staffing

Remote

Full-Time

Posted 3 days ago (Updated 11 hours ago) • Actively hiring

Expires 6/8/2026


Job Description

Senior Data Engineer (Azure & Databricks)
Emergent Staffing - 4.8
Minneapolis, MN
Contract • Posted 10 hours ago

Qualifications
• Version control
• Stakeholder engagement
• Databricks
• Data integration (data management)
• Software deployment
• Automating deployment processes
• SQL
• Azure DevOps proficiency
• Azure Data Factory
• Stakeholder management
• Database software proficiency

Full Job Description
• This is a 6+ month contract with our client, based out of Bloomington, MN. This is a hybrid role, working in office Tuesday/Thursday. Candidates must be able to work in the US without sponsorship.
• We're looking for a Senior Data Engineer with strong Azure experience, especially in Azure Databricks, Delta Lake, and SQL, to build and scale a medallion-based data platform. This role focuses on designing high-performance, governed data pipelines using PySpark, SQL, and Databricks tools to integrate data from Azure systems, SQL Server Managed Instance, and third-party sources, while partnering closely with analytics teams and business stakeholders. Experience or strong interest in supporting AI/ML use cases is highly valued; financial-services experience is a plus but not required.

Responsibilities
• Design, develop, and optimize data pipelines in Azure Databricks using PySpark and SQL, applying Delta Lake and Unity Catalog best practices.
• Build modular, reusable libraries and utilities within Databricks to accelerate development and standardize workflows.
• Implement medallion architecture (Bronze, Silver, Gold layers) for scalable, governed data zones.
• Integrate external data sources via REST APIs, SFTP file delivery, and SQL Server Managed Instance, implementing validation, logging, and schema enforcement.
• Utilize parameter-driven jobs and manage compute using Spark clusters and Databricks serverless.
• Collaborate with data analytics teams and business stakeholders to understand requirements and deliver analytics-ready datasets.
• Monitor and troubleshoot Azure Data Factory (ADF) pipelines (jobs, triggers, activities, data flows) to identify and resolve job failures and data issues.
• Automate deployments and manage code using Azure DevOps for CI/CD, version control, and environment management.
• Contribute to documentation, architectural design, and continuous improvement of data engineering best practices.
• Support the design and readiness of the data platform for AI and machine learning initiatives.

Requirements
• Strong expertise with Azure Databricks, including PySpark, Delta Lake, Unity Catalog, and the ability to build reusable libraries, utility notebooks, and parameterized jobs.
• Advanced SQL skills, with experience working in Azure SQL Database and/or SQL Server Managed Instance.
• Experience designing, troubleshooting, and supporting data pipelines using Azure Data Factory.
• Proven ability to integrate external data sources, including REST APIs and SFTP.
• Working knowledge of Azure DevOps for CI/CD, version control, and parameterized deployments.
• Demonstrated experience partnering closely with data analytics teams and business stakeholders, supported by strong communication, problem-solving, and collaboration skills.
• Interest or experience in preparing data platforms to support AI and machine learning initiatives.

Nice to Haves
• Experience implementing medallion architecture within governed Azure data environments, including data governance and RBAC.
• Familiarity with data warehousing concepts, dimensional modeling, and preparing datasets for BI tools such as Power BI.
• Understanding of Spark performance optimization, cluster and serverless compute management, and advanced Delta Lake features.
• Hands-on experience preparing datasets to support AI/ML use cases.
• Prior experience in the financial-services industry.

Our Vetting Process
At Emergent Staffing, we work hard to find Data Engineers who are the right fit for our clients. Here are the steps of our vetting process for this position:
1. Application (5 minutes)
2. Online Assessment (40 minutes)
3. Initial Phone Interview (30-45 minutes)
4. Virtual Interview with Hiring Team
5. Onsite Interview
6. Job Offer!
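For candidates unfamiliar with the medallion architecture named in the posting, it stages data in progressively refined layers: Bronze (raw ingestion), Silver (validated and deduplicated), Gold (analytics-ready aggregates). A minimal sketch of that idea in plain Python follows; on the job this would be PySpark over Delta tables, and all record fields and function names here are illustrative, not from the posting.

```python
# Illustrative medallion layering: Bronze (raw) -> Silver (validated) -> Gold
# (aggregated). Plain Python stand-in for what would be PySpark + Delta Lake;
# the "account"/"amount" schema is a made-up example.

# Bronze: records landed as-is, including duplicates and malformed rows.
bronze = [
    {"account": "A1", "amount": "100.50"},
    {"account": "A1", "amount": "100.50"},        # duplicate
    {"account": "A2", "amount": "not-a-number"},  # fails type validation
    {"account": "A2", "amount": "75.25"},
]

def to_silver(rows):
    """Enforce schema, cast types, and deduplicate."""
    seen, silver = set(), []
    for row in rows:
        try:
            rec = (row["account"], float(row["amount"]))
        except (KeyError, ValueError):
            continue  # in practice: log and route to a quarantine table
        if rec not in seen:
            seen.add(rec)
            silver.append({"account": rec[0], "amount": rec[1]})
    return silver

def to_gold(rows):
    """Aggregate validated rows into analytics-ready totals per account."""
    totals = {}
    for row in rows:
        totals[row["account"]] = totals.get(row["account"], 0.0) + row["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'A1': 100.5, 'A2': 75.25}
```

The point of the layering is that each stage only ever reads from the one before it, so validation and lineage stay centralized rather than being re-implemented in every downstream report.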
