Data Engineer
Dow
Midland, MI (In Person)
Full-Time
Job Description
At a glance
PySpark / Distributed Data Processing: The ability to build, optimize, and troubleshoot high-volume data transformation pipelines using PySpark, including tuning Spark jobs, resolving performance bottlenecks, and ensuring efficient distributed execution.
Advanced Data Modeling (Dimensional / Star Schema Design): Expertise in translating complex business requirements into scalable analytical data models, such as star and snowflake schemas, and implementing SCD logic for downstream analytics.
Cloud Orchestration & CI/CD (Azure Data Factory, Databricks Workflows, Azure DevOps/Git): Skill in designing automated, reliable data pipelines, managing task dependencies, and implementing CI/CD deployment processes across environments.
Data Integration Across Diverse Systems (SQL Server, CosmosDB, Neo4j): Ability to connect to, query, and integrate data from relational and non-relational sources while optimizing persistence, ingestion, and query performance.
Data Quality Engineering & Governance (Delta Live Tables, Great Expectations): Applying validation frameworks, monitoring, and automated quality checks to ensure data reliability for ML, real-time analytics, and enterprise BI use cases.
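To make the last item above concrete, here is a minimal sketch of the kind of automated quality check it describes, written in plain PySpark; in practice Delta Live Tables expectations or a Great Expectations suite would express the same rules declaratively. The table and column names (gold.orders, order_id, amount) are hypothetical illustrations rather than details from the posting, and the final write assumes Delta Lake and an existing gold schema.

```python
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("gold-layer-quality-checks").getOrCreate()


def run_quality_checks(df: DataFrame) -> None:
    """Fail fast if a batch violates basic expectations before it is published."""
    total = df.count()

    # Expectation 1: the business key must never be null.
    null_keys = df.filter(F.col("order_id").isNull()).count()
    if null_keys:
        raise ValueError(f"{null_keys} rows have a null order_id")

    # Expectation 2: amounts must be non-negative.
    bad_amounts = df.filter(F.col("amount") < 0).count()
    if bad_amounts:
        raise ValueError(f"{bad_amounts} rows have a negative amount")

    # Expectation 3: the business key must be unique within the batch.
    if df.select("order_id").distinct().count() != total:
        raise ValueError("duplicate order_id values detected")


# Hypothetical usage: validate a staged batch, then publish it.
orders = spark.createDataFrame(
    [(1, 120.0), (2, 75.5), (3, 10.0)],
    ["order_id", "amount"],
)
run_quality_checks(orders)
# Assumes Delta Lake is available and a 'gold' schema exists.
orders.write.format("delta").mode("append").saveAsTable("gold.orders")
```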
Position: Data Engineer
Primary Location: Midland (MI, USA), Michigan, United States of America
Additional Locations: Houston (TX, USA); Kankakee (IL, USA)
Schedule: Full time
Date Posted: 04/22/2026
Job Number: R2066470
Position Type: Regular
Workplace Type: Onsite
At Dow, we believe in putting people first and we're passionate about delivering integrity, respect and safety to our customers, our employees and the planet. Our people are at the heart of our solutions. They reflect the communities we live in and the world where we do business. Their diversity is our strength. We're a community of relentless problem solvers that offers the daily opportunity to contribute with your perspective, transform industries and shape the future. Our purpose is simple - to deliver a sustainable future for the world through science and collaboration. If you're looking for a challenge and meaningful role, you're in the right place.

About this role
Dow has an exciting opportunity for a Data Engineer located in Midland, MI; Houston, TX; or Champaign, IL (Dow Delivery Center at UIUC). This role will make significant technical contributions to critical data initiatives within our team at Dow. You will be responsible for driving the technical implementation and contributing to the design of scalable, Gold-layer data products on the Azure Databricks Lakehouse Platform. This role focuses on solving complex technical challenges, optimization, architecture contribution, and reliability, ensuring our datasets are performant and ready to power advanced use cases, including:
Machine Learning (ML) Pipelines
Real-Time Data Consumption
Generative and Agentic AI Systems
Core Enterprise Reporting and BI
Data-driven Applications

Responsibilities
Technical Design Contribution: Collaborate with senior data engineers to translate complex business requirements and ambiguous problem statements into clear, robust, and scalable technical designs and data models (e.g., dimensional modeling, star schemas), and independently drive the implementation of these designs.
Performance Optimization: Design, build, and deploy high-volume data transformation logic using highly optimized PySpark. You will apply advanced techniques to tune Spark jobs and diagnose performance bottlenecks to ensure maximum efficiency and minimal cloud compute cost (a minimal sketch illustrating this, together with the data-integration point below, follows this list).
Architecture & Deployment: Contribute significantly to the design and improvement of CI/CD pipelines in Azure DevOps/Git, ensuring reliable, automated, and secure deployment of data solutions across environments.
Diverse Data Integration: Deeply understand and connect to various source systems, demonstrating proficiency in managing data persistence and query performance across diverse technologies like SQL Server, Neo4j, and CosmosDB.
Quality & Governance: Proactively implement and maintain advanced data quality frameworks (e.g., Delta Live Tables, Great Expectations) and monitoring solutions to ensure data reliability for mission-critical applications.
Collaboration & Mentorship: Serve as a go-to technical resource for peers, conducting technical code reviews and informally mentoring Associate Data Engineers on PySpark and Databricks best practices.
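As a rough, hypothetical illustration of the Performance Optimization and Diverse Data Integration responsibilities above (not code from Dow), the sketch below pulls a small reference table from SQL Server over JDBC, prunes columns early, broadcasts the small lookup into the join, and repartitions before a wide aggregation. The connection details, table names, and silver/gold schemas are placeholder assumptions, and the JDBC read assumes the Microsoft SQL Server JDBC driver is on the cluster classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("gold-daily-revenue").getOrCreate()

# Diverse Data Integration: read a small reference table from SQL Server over JDBC.
# Placeholder connection details; a real job would pull credentials from a secret store.
regions = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://sql-host:1433;databaseName=sales")
    .option("dbtable", "dbo.sales_regions")
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)

# Performance Optimization: prune columns early so less data moves through the shuffle.
orders = spark.table("silver.orders").select("order_id", "region_id", "order_ts", "amount")

daily_revenue = (
    orders
    # Broadcast the small lookup table to avoid a shuffle join against the large fact table.
    .join(broadcast(regions.select("region_id", "region_name")), "region_id")
    # Repartition on the grouping key ahead of the wide aggregation to limit skew.
    .repartition("region_name")
    .groupBy("region_name", F.to_date("order_ts").alias("order_date"))
    .agg(
        F.sum("amount").alias("revenue"),
        F.countDistinct("order_id").alias("order_count"),
    )
)

# Publish the Gold-layer aggregate; assumes Delta Lake and a 'gold' schema exist.
daily_revenue.write.format("delta").mode("overwrite").saveAsTable("gold.daily_revenue_by_region")
```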
A successful candidate will possess the experience and technical depth required to independently implement and optimize complex data solutions:

Core Technical Expertise (2-5 Years Demonstrated Experience)
PySpark and Distributed Processing: Proven ability to write highly optimized, production-grade PySpark/Spark code. Experience identifying and resolving performance bottlenecks in a distributed computing environment.
Advanced Data Modeling: Practical experience designing and implementing analytical data models (e.g., dimensional modeling, star/snowflake schemas) and handling Slowly Changing Dimensions (SCDs); a minimal SCD Type 2 sketch follows this list.
Cloud Orchestration: Expertise in using Azure Data Factory (ADF), Databricks Workflows, or equivalent tools (e.g., Airflow) for complex dependency management, error handling, and end-to-end pipeline orchestration.
Database Versatility: Demonstrated experience with advanced SQL and hands-on experience querying and integrating data from at least one non-relational or Graph database (e.g., CosmosDB, Neo4j).
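Since both the modeling skill above and the earlier responsibilities call out SCD handling, here is a minimal Slowly Changing Dimension Type 2 sketch using a Delta Lake MERGE followed by an append: changed rows are closed out, then new versions are inserted only for customers with no remaining active row. The gold.dim_customer table, its columns, and the incoming batch are all hypothetical, and the table is assumed to already exist as a Delta table with is_current, effective_date, and end_date columns.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("scd2-dim-customer").getOrCreate()

# Hypothetical incoming batch of new/changed customer records.
updates = spark.createDataFrame(
    [(42, "Jane Doe", "Houston")],
    ["customer_id", "name", "city"],
).withColumn("effective_date", F.current_date())
updates.createOrReplaceTempView("customer_updates")

# Step 1: close out currently-active dimension rows whose tracked attributes changed.
# Assumes gold.dim_customer already exists as a Delta table.
spark.sql("""
    MERGE INTO gold.dim_customer AS dim
    USING customer_updates AS upd
      ON dim.customer_id = upd.customer_id AND dim.is_current = true
    WHEN MATCHED AND (dim.name <> upd.name OR dim.city <> upd.city) THEN
      UPDATE SET dim.is_current = false, dim.end_date = upd.effective_date
""")

# Step 2: insert new versions only where no active row remains
# (brand-new customers, plus the rows just closed in step 1).
still_active = spark.table("gold.dim_customer").filter("is_current = true")
to_insert = (
    updates.join(still_active.select("customer_id"), "customer_id", "left_anti")
    .withColumn("end_date", F.lit(None).cast("date"))
    .withColumn("is_current", F.lit(True))
    # Column order matches the assumed dimension schema.
    .select("customer_id", "name", "city", "effective_date", "end_date", "is_current")
)
to_insert.write.format("delta").mode("append").saveAsTable("gold.dim_customer")
```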
Engineering Mindset and Professional Growth
Technical Design Contribution: Ability to rapidly synthesize information and contribute clear, well-documented technical specifications and architectural diagrams to the design process.
Feature Ownership: Demonstrated history of taking ownership of complex features and modules within larger projects, driving them to completion, and managing technical dependencies autonomously.
Pragmatism and Initiative: A strong bias for action, coupled with a pragmatic approach to delivering stable, maintainable, and cost-effective solutions.
Communication & Influence: Excellent verbal and written communication skills, with the ability to articulate technical designs to both engineering peers and senior stakeholders, effectively influencing technical decisions.

Required Qualifications
A minimum of a bachelor's degree, or relevant military experience at or above a U.S. E5 ranking or Canadian Petty Officer 2nd Class or Sergeant, or 5 years of relevant experience in lieu of a bachelor's degree.
Minimum of 2 years of professional experience in Data Engineering, Software Engineering, or a closely related field.
Minimum of 2 years of hands-on experience with the Databricks Platform.
A minimum requirement for this U.S.-based position is the ability to work legally in the United States. No visa sponsorship/support is available for this position, including for any type of U.S. permanent residency (green card) process.

Preferred Skills
Experience with cloud cost management principles related to compute (Databricks) and storage (ADLS).
Experience with Infrastructure as Code (e.g., Terraform, ARM templates).
Proficiency with data visualization and dashboarding tools (e.g., Power BI, Tableau).
Note: Relocation assistance is not provided with this role.

Benefits - What Dow offers you
We invest in you. Dow invests in total rewards programs to help you manage all aspects of you: your pay, your health, your life, your future, and your career. You bring your background, talent, and perspective to work every day. Dow rewards that commitment by investing in your total wellbeing. Here are just a few highlights of what you would be offered as a Dow employee:
Equitable and market-competitive base pay and bonus opportunity across our global markets, along with locally relevant incentives.
Benefits and programs to support your physical, mental, financial, and social well-being, to help you...