
Machine Learning Engineering - Senior Engineer


Epitec

Dearborn, MI (In Person)

$145,600 Salary, Full-Time

Posted 4 days ago (Updated 14 hours ago) • Actively hiring

Expires 6/13/2026


Job Description

Job Title: Senior Machine Learning Engineer (W2 Contract, NO C2C)
Location: Dearborn, MI (must be local)
Job Type: Engineering
Expected Hours: 40 hours per week
Schedule: Onsite
Pay Range: $70+ an hour

Job Description:
Position Description

We are seeking an ML Ops / Data Platform Engineer to build and maintain scalable machine learning and data platforms supporting connected vehicle and agentic AI initiatives. This role focuses on designing robust cloud-based data pipelines, optimizing ML solutions, and enabling secure, reliable, and cost-effective production systems on Google Cloud Platform (GCP). You will work closely with data scientists, analytics stakeholders, and product teams to deliver high-quality data and ML solutions across streaming and batch pipelines while promoting best practices in data governance, DevOps, and software quality.

Key Responsibilities

ML Ops & Data Engineering
- Build scalable ML data pipelines in the cloud to process large volumes of connected vehicle data
- Optimize ML solutions for performance, security, reliability, and cost
- Support continual learning approaches to improve production model performance
- Develop analytical data products using streaming and batch ingestion patterns on GCP
- Monitor data quality and ML model performance across pipelines and platforms

Platform & Infrastructure
- Maintain and enhance data platform infrastructure using Terraform
- Design and maintain CI/CD pipelines for data and ML workloads
- Enhance DevOps capabilities across the data platform
- Monitor production pipelines and provide operational support according to SLAs

Architecture, Governance & Quality
- Implement and promote enterprise data governance models, including data protection, quality, lineage, and standards
- Perform data mapping, lineage documentation, and information flow analysis
- Address code quality and security findings using tools such as SonarQube, Checkmarx, Fossa, and Cycode
- Continuously optimize existing pipelines, platforms, and infrastructure

Collaboration & Communication
- Collaborate with analytics, data science, and business stakeholders to streamline data acquisition and delivery
- Provide analysis of connected vehicle data to support new product development and vehicle improvements
- Communicate complex technical concepts clearly to both technical and non-technical audiences
- Work in an Agile product team using TDD, CI, and CD practices

Required Skills
- Strong technical communication and stakeholder collaboration skills
- Machine learning and ML Ops experience
- Google Cloud Platform (GCP): deep hands-on experience required
- Python, SQL, and Java
- Data engineering and data architecture
- Streaming and batch data pipelines
- Apache Kafka or GCP Pub/Sub
- Spark
- REST APIs and microservices
- CI/CD, GitHub, Docker, Tekton
- Agile software development (Scrum)
- Data governance concepts and implementation

Preferred Skills
- TensorFlow
- Telematics or connected vehicle data
- Data modeling, data mining, and database design
- Cloud infrastructure architecture
- Troubleshooting and problem solving
- Experience mentoring junior engineers

Experience Requirements
- PhD
- 4+ years in data engineering, data products, or software product development
- Experience with at least three of the following: Java, Python, Spark/Scala, SQL
- 3+ years building production batch and streaming pipelines using:
  - BigQuery, Redshift, or Azure Synapse
  - Airflow or similar orchestration tools
  - Relational databases (PostgreSQL, MySQL, SQL Server)
  - Kafka or Pub/Sub
  - Microservices architectures
  - Terraform, GitHub Actions, Docker
  - Jira or similar project management tools

Nice to Have
- ML model development or ML Ops experience
- GCP certifications
- Experience with cloud migrations or platform modernization
- Open-source contributions
- Automotive or connected services domain experience
- Passion for modern data engineering and ML platform design
Benefits:
80 hours paid time off, medical insurance contributions, dental and vision coverage, and a 401k retirement savings plan
