Sr. Python Data Engineer
Robert Half
Remote
Full-Time
Job Description
Data Engineer (Python / AWS)
Location: Remote (Northeast / Greater Boston area preferred)
Type: Full-Time
Level: Mid-to-Senior Individual Contributor

About the Role
We are looking for a strong individual contributor who excels in the Python data ecosystem and enjoys building reliable, scalable data pipelines. This role sits within a data engineering group responsible for integrating large volumes of data from external partners and transforming it into usable datasets for internal teams. You'll work with modern cloud tools while also helping our team gradually transition away from a legacy platform.

This position is ideal for someone who wants to stay hands-on, focus on technical execution, and remain in an IC role for the next several years. We're not looking for someone who is aiming to move immediately into architecture or leadership.
This team is fully distributed. Candidates in the Boston area can work from the office, while the rest of the group is remote; anyone local may occasionally sit with other teams when on site.
What You'll Do
Build and maintain ETL pipelines that ingest, clean, and aggregate data received from external vendors and large enterprise partners.
Develop Python‑based data processing workflows deployed on AWS cloud services.
Work with tools such as AWS Glue, Airflow, dbt, and PySpark to support data transformations and pipeline orchestration.
Help modernize existing workflows and assist in the gradual migration away from a legacy data system.
Collaborate with internal stakeholders to understand data needs, define requirements, and ensure reliable integration of partner data feeds.
Troubleshoot pipeline issues, optimize performance, and improve overall system stability.
Contribute to best practices around code quality, testing, documentation, and data governance.
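To give a concrete sense of the first responsibility above (ingest, clean, and aggregate partner data), here is a minimal plain-Python sketch. The vendor feed, field names, and schema are hypothetical illustrations only; in this role the equivalent logic would run as Python workflows on AWS services such as Glue or Airflow, as described above.

```python
import csv
import io
from collections import defaultdict

def ingest(raw_csv: str) -> list[dict]:
    """Parse a raw vendor CSV feed into row dicts (hypothetical 'vendor,amount' schema)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def clean(rows: list[dict]) -> list[dict]:
    """Drop rows whose amount is missing or non-numeric, coercing the rest to float."""
    cleaned = []
    for row in rows:
        try:
            row["amount"] = float(row["amount"])
        except (TypeError, ValueError):
            continue  # skip bad records from the external feed
        cleaned.append(row)
    return cleaned

def aggregate(rows: list[dict]) -> dict[str, float]:
    """Sum amounts per vendor to produce a usable internal dataset."""
    totals: dict[str, float] = defaultdict(float)
    for row in rows:
        totals[row["vendor"]] += row["amount"]
    return dict(totals)

feed = "vendor,amount\nacme,10.5\nacme,4.5\nglobex,7\nglobex,oops\n"
print(aggregate(clean(ingest(feed))))  # {'acme': 15.0, 'globex': 7.0}
```

A production pipeline would add orchestration, schema validation, and tests around each stage, but the ingest → clean → aggregate shape is the same.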