
Data Engineer — ETL & Data Integration

Naxxa

Draper, UT (In Person)

$107,500 Salary, Full-Time

Posted 1 week ago (Updated 1 day ago) • Actively hiring

Expires 6/13/2026


Job Description

Data Engineer — ETL & Data Integration
Draper, UT 84020
$75,000 – $140,000 a year • Full-time, Contract

About this position

We are looking for a Data Engineer who specializes in ETL pipeline development, web scraping, and data integration.
This role is responsible for collecting data from diverse sources (APIs, databases, web scraping, ArcGIS), cleaning, normalizing, merging, and loading it into our warehouse in a format that dbt models can reliably consume. You will own data consistency across sources and serve as the primary point of accountability for data quality at the ingestion layer.

Core Responsibilities

ETL Pipeline Development
  • Design, build, and maintain Python-based ETL pipelines that extract data from APIs, databases, flat files, and web sources
  • Implement incremental loading strategies, deduplication logic, and idempotent pipeline runs (a sketch of this pattern follows the responsibilities list)
  • Build robust error handling, retry logic, and alerting for pipeline failures
  • Document data lineage from source through transformation to warehouse landing tables

Web Scraping & Data Collection
  • Develop and maintain production-grade web scrapers using tools such as Scrapy, Selenium, Playwright, BeautifulSoup, or Claude Code
  • Handle anti-scraping measures (rate limiting, rotating proxies, CAPTCHA mitigation, dynamic rendering)
  • Monitor scraping targets for schema changes and adapt collectors accordingly
  • Ensure compliance with robots.txt, terms of service, and legal data collection requirements

Data Quality & Consistency
  • Validate data consistency across disparate sources, identifying and reconciling conflicts
  • Define and enforce data contracts (schemas, types, expected ranges, null policies) at the ingestion boundary (a sketch also follows the responsibilities list)
  • Build automated data quality checks and anomaly detection at the loading stage
  • Collaborate with analytics engineers to ensure warehouse tables conform to expectations of downstream dbt models

SQL & dbt Integration
  • Write and optimize SQL for data transformations, staging table definitions, and ad-hoc investigation
  • Structure landing/staging tables so dbt models can reference them cleanly with minimal rework
  • Work with the dbt team to define source YAML configurations, freshness checks, and testing at the source layer
  • Troubleshoot data issues surfaced by dbt tests and trace them back to the ingestion layer
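
The posting names neither the API nor the warehouse client, so the following is only a minimal sketch of the retry and idempotency patterns above, assuming a hypothetical landing table (landing.parcels_raw) with a unique key on parcel_id and a psycopg2-style connection:

```python
import json
import time
import logging

import requests

log = logging.getLogger("etl.ingest")

def fetch_page(url: str, retries: int = 3, backoff: float = 2.0) -> dict:
    """Fetch one API page, retrying transient failures with exponential backoff."""
    for attempt in range(1, retries + 1):
        try:
            resp = requests.get(url, timeout=30)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException as exc:
            if attempt == retries:
                log.error("giving up on %s after %d attempts", url, retries)
                raise
            wait = backoff ** attempt
            log.warning("attempt %d failed (%s); retrying in %.0fs", attempt, exc, wait)
            time.sleep(wait)

# Upserting on the natural key makes re-runs idempotent: replaying the same
# page updates rows in place instead of duplicating them. Assumes a unique
# constraint on parcel_id.
UPSERT = """
    INSERT INTO landing.parcels_raw (parcel_id, payload, _loaded_at)
    VALUES (%s, %s, now())
    ON CONFLICT (parcel_id) DO UPDATE
        SET payload = EXCLUDED.payload, _loaded_at = now()
"""

def load_records(conn, records: list[dict]) -> None:
    with conn.cursor() as cur:
        for rec in records:
            cur.execute(UPSERT, (rec["id"], json.dumps(rec)))
    conn.commit()
```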
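
Likewise, the posting doesn't name a validation framework; a data contract at the ingestion boundary could be enforced with pydantic or Great Expectations, or hand-rolled as in this sketch (field names and rules are hypothetical):

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class FieldContract:
    name: str
    type_: type
    nullable: bool = False
    check: Callable[[Any], bool] | None = None  # e.g. a range or enum rule

CONTRACT = [
    FieldContract("parcel_id", str),
    FieldContract("acreage", float, check=lambda v: 0 < v < 1_000_000),
    FieldContract("county", str, nullable=True),
]

def validate(record: dict) -> list[str]:
    """Return the list of contract violations for one ingested record."""
    errors = []
    for field in CONTRACT:
        value = record.get(field.name)
        if value is None:
            if not field.nullable:
                errors.append(f"{field.name}: unexpected null")
            continue
        if not isinstance(value, field.type_):
            errors.append(
                f"{field.name}: expected {field.type_.__name__}, got {type(value).__name__}"
            )
        elif field.check and not field.check(value):
            errors.append(f"{field.name}: value {value!r} outside expected range")
    return errors
```

Records failing the contract would typically be quarantined and alerted on rather than loaded, which is what "anomaly detection at the loading stage" implies.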
Required Qualifications

Technical Skills

Python:
Proficient in writing production-quality data pipelines. Comfortable with requests, asyncio/aiohttp, pandas or polars, and standard data processing libraries.
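
By way of illustration, a bounded-concurrency extraction loop with asyncio/aiohttp of the kind this implies (the endpoint is hypothetical):

```python
import asyncio

import aiohttp

async def fetch(session: aiohttp.ClientSession, url: str) -> dict:
    async with session.get(url, timeout=aiohttp.ClientTimeout(total=30)) as resp:
        resp.raise_for_status()
        return await resp.json()

async def fetch_all(urls: list[str], concurrency: int = 10) -> list[dict]:
    sem = asyncio.Semaphore(concurrency)  # cap in-flight requests
    async with aiohttp.ClientSession() as session:
        async def bounded(url: str) -> dict:
            async with sem:
                return await fetch(session, url)
        return await asyncio.gather(*(bounded(u) for u in urls))

# Hypothetical paginated API
urls = [f"https://api.example.com/parcels?page={i}" for i in range(1, 6)]
pages = asyncio.run(fetch_all(urls))
```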
SQL:
Strong command of analytical SQL (CTEs, window functions, joins across heterogeneous schemas, performance tuning).
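
As one example of the analytical SQL in question, a CTE-plus-window-function dedup that keeps the most recent row per key (table and column names hypothetical):

```sql
-- Keep only the latest ingested row per parcel_id
WITH ranked AS (
    SELECT
        *,
        ROW_NUMBER() OVER (
            PARTITION BY parcel_id
            ORDER BY _loaded_at DESC
        ) AS rn
    FROM landing.parcels_raw
)
SELECT * FROM ranked
WHERE rn = 1;
```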
dbt:
Working knowledge of dbt (sources, staging models, tests, freshness checks). Able to structure ingestion outputs to integrate cleanly with an existing dbt project.
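
A representative sources file of the kind this refers to; the schema, table, and column names are hypothetical, but sources, freshness checks, and column tests are standard dbt constructs:

```yaml
# models/staging/sources.yml
version: 2

sources:
  - name: ingestion
    schema: landing
    loaded_at_field: _loaded_at
    freshness:
      warn_after: {count: 24, period: hour}
      error_after: {count: 48, period: hour}
    tables:
      - name: parcels_raw
        columns:
          - name: parcel_id
            tests:
              - not_null
              - unique
```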
Web Scraping:
Demonstrated experience building and maintaining scrapers in production. Familiarity with at least two of: Scrapy, Selenium, Playwright, BeautifulSoup, or equivalent tools.
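
A minimal sketch of a compliant requests + BeautifulSoup collector (the target site, user agent, and CSS selector are all hypothetical); a production scraper would layer on the proxy rotation, dynamic rendering, and schema-change monitoring described above:

```python
import time

import requests
from bs4 import BeautifulSoup
from urllib.robotparser import RobotFileParser

USER_AGENT = "naxxa-data-bot/1.0 (contact: data@example.com)"  # hypothetical UA

robots = RobotFileParser()
robots.set_url("https://example.com/robots.txt")
robots.read()

def scrape_listing_links(url: str, delay: float = 2.0) -> list[str]:
    """Fetch one listing page politely: honor robots.txt and rate-limit."""
    if not robots.can_fetch(USER_AGENT, url):
        raise PermissionError(f"robots.txt disallows {url}")
    time.sleep(delay)  # crude rate limit between requests
    resp = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    return [a["href"] for a in soup.select("a.listing-link")]  # hypothetical selector
```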
Data Warehousing:
Experience loading data into warehouses and Postgres. Understanding of partitioning, clustering, and storage optimization (a partitioning sketch follows these qualifications).

Experience
  • 3+ years of professional experience in data engineering, ETL development, or a closely related role
  • Track record of building and operating data collection pipelines from external and heterogeneous sources
  • Demonstrated history of owning data quality outcomes, not just pipeline uptime
  • Experience working alongside or within analytics engineering teams using dbt

Other Requirements
  • Strong version control habits (Git) and comfort with code review workflows
  • Familiarity with orchestration tools (Airflow preferred)
  • Solid understanding of data normalization principles, schema design, and relational modeling
  • Effective communicator who can articulate data issues to both technical and non-technical stakeholders
  • Highly skilled in AI-assisted ("vibe") coding; understands when AI is helpful and when it is harmful, and knows how to validate its output rather than rely on it

Preferred Qualifications
  • Familiarity with containerized deployments (Docker) and CI/CD for data pipelines
  • Enjoys optimizing code
  • Familiarity with Airflow
  • Enjoys statistics
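
To illustrate the partitioning point above, a range-partitioned Postgres landing table (all names hypothetical; a distinct table is used here since a partitioned table's unique keys must include the partition column):

```sql
-- Hypothetical append-only landing table, range-partitioned by load timestamp
CREATE TABLE landing.events_raw (
    event_id    text        NOT NULL,
    payload     jsonb       NOT NULL,
    _loaded_at  timestamptz NOT NULL DEFAULT now()
) PARTITION BY RANGE (_loaded_at);

-- One partition per month; the pipeline creates new partitions ahead of time
CREATE TABLE landing.events_raw_2026_01
    PARTITION OF landing.events_raw
    FOR VALUES FROM ('2026-01-01') TO ('2026-02-01');

CREATE INDEX ON landing.events_raw (event_id, _loaded_at DESC);
```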
Pay:
$75,000.00 – $140,000.00 per year
Work Location:
In person
