Tallo

Senior Data Engineer

Photon

Dallas, TX (In Person)

Full-Time

Posted 2 days ago (Updated 14 hours ago) • Actively hiring

Expires 6/8/2026


Job Description

Job Title:
Senior Data Engineer (Data Federation & Lakehouse)
Location:
Dallas, NJ, NY, Chicago (onsite five days a week)
Employment Type:
Full-time on W2
Interview Process:
HRT -> internal interview -> client interview
Role Overview:
As a Senior Data Engineer, you will be responsible for breaking down data silos. This role focuses on building a unified, high-performance data layer using Data Federation techniques. You won't just move data; you will architect a Data Lakehouse environment where disparate sources feel like a single, cohesive database for our analytics and AI teams.
Core Responsibilities:
Data Federation Architecture:
Design and implement federated query layers (e.g., Starburst/Trino) to allow high-speed analytics across distributed data sources without unnecessary data movement.
ETL/ELT Pipeline Development:
Build scalable, distributed data processing pipelines using Python and Apache Spark (PySpark).
Lakehouse Implementation:
Manage and optimize modern table formats like Delta Lake, Apache Iceberg, or Hudi to bring ACID transactions to our data lake.
Performance Tuning:
Optimize Spark jobs and SQL queries across the federation layer to minimize latency and manage compute costs.
Governance & Security:
Implement fine-grained access control and data masking within the federation engine to ensure data privacy across all connected platforms.
Technical Requirements
Python & Spark:
5+ years of experience with Python and deep expertise in Apache Spark tuning (partitioning, shuffling, caching).
Data Federation Tools:
Hands-on experience with Starburst Enterprise, Trino (Presto), or Dremio.
Lakehouse Ecosystem:
Proven track record working with Delta Lake or Iceberg architectures.
Cloud Platforms:
Extensive experience with AWS (EMR, S3, Glue), Azure (Databricks, ADLS), or Google Cloud Platform.
SQL Mastery:
Expert-level SQL skills for complex analytical queries and query plan analysis.
Data Modeling:
Proficiency in designing Star/Snowflake schemas and understanding "Medallion Architecture" (Bronze, Silver, Gold layers).
