
Sr Data Engineer

Fast Switch, Ltd.

Remote

$130,000 Salary, Full-Time

Posted 1 week ago (Updated 1 week ago) • Actively hiring

Expires 5/30/2026


Job Description

Sr Data Engineer
Location: Windsor, Connecticut (Remote)
Type: Contract, Job #61678
Salary: $60.00 - $65.00 per hour (target rate: $65/hr W2)
Contract Length: 6 months
Schedule: Remote, EST hours; local candidates strongly encouraged

• Candidates must work on our W2 without needing sponsorship at any time, now or in the future.
• We do not work with Corp-to-Corp in any manner, including any form of referral bonus.

If after reading the description you would like to be submitted, please complete the questions below. Answer in implied first person, each as a stand-alone sentence; your answers will accompany your resume to the client.

1. How many years of senior-level data engineering experience do you have designing, building, and scaling batch and real-time data pipelines, and in what environments?
2. What hands-on experience do you have using SQL and Python to develop, optimize, and debug scalable data pipelines, and what types of data volume or pipeline complexity have you supported?
3. What production experience do you have with Azure and Databricks, including Delta Live Tables and Unity Catalog, and what kinds of solutions did you build with them?
4. Which orchestration and integration tools have you used, such as SnapLogic, Azure Data Factory, and Jenkins, and what pipeline automation or scheduling work did you lead?
5. What experience do you have using Terraform for infrastructure as code and deployment pipeline management, and what specifically did you provision, maintain, or automate?
6. What experience do you have with data quality, observability, or monitoring tools such as Soda, and how did you define, enforce, or improve data quality standards or SLAs?
Profile Summary:
The Senior Data Engineer leads the design and implementation of robust data solutions across multiple domains, driving technical excellence and scalability. This role involves mentoring others, shaping best practices, and influencing data architecture. The Senior Data Engineer is expected to proactively identify opportunities to improve systems, drive reliability, and collaborate with product and business stakeholders to align data strategy with company goals.
Profile Description:
• Design, build, and scale robust, high-performing batch and real-time data pipelines; drive architectural decisions for transformation logic, storage formats, and schema design.
• Lead complex data ingestion efforts and mentor peers on performance optimization and scalability.
• Lead the design and optimization of complex data models and storage architecture, balancing performance, scalability, and usability; partner with stakeholders to translate business requirements into robust data structures.
• Contribute significantly to delivery planning and execution, mentor junior engineers on agile approaches, and ensure timely completion of tasks by managing dependencies and escalating delivery challenges.
• Design and standardize advanced data validation frameworks and testing strategies across platforms; lead root cause analysis for data quality issues and mentor others on quality best practices; partner with stakeholders to define SLAs and quality metrics.
• Lead efforts to automate, monitor, and scale deployment of production-grade data pipelines; design resilient workflows with retry logic, failure handling, and resource optimization; proactively address performance and reliability issues and contribute to runbooks and on-call documentation.
• Lead the creation and maintenance of detailed technical documentation for complex pipelines, data models, and system integrations; establish and enforce documentation and development standards across projects; mentor junior engineers on clear, consistent coding and documentation habits.
• Act as a key technical partner to product, analytics, and data science teams; lead design discussions, communicate complex data trade-offs clearly, and proactively surface risks and blockers; support collaborative planning and mentor junior team members in effective communication and partnership.
Knowledge & Experience:
• 5-9 years of experience in data engineering, data modeling, and pipeline development.
• Expert-level SQL and Python skills for developing and debugging scalable data pipelines.
• Deep hands-on experience with Azure and Databricks, including Delta Live Tables and Unity Catalog.
• Skilled with data integration and orchestration tools such as SnapLogic, Azure Data Factory, and Jenkins.
• Strong experience using infrastructure-as-code tools such as Terraform to manage deployment pipelines.
• Experience designing and optimizing API integrations in data pipelines.
• Familiarity with data quality and observability tools such as Soda or similar platforms.
• Proficiency with version control and CI/CD workflows using GitHub.
• Advanced understanding of dimensional modeling and data warehousing concepts.
• Comfortable leading efforts in agile environments with strong ownership and collaboration.
