
AWS Data Engineer :: Onsite :: W2 Position


Trebecon LLC

Denver, CO (In Person)

Full-Time

Posted 3 days ago (Updated 9 hours ago) • Actively hiring

Expires 6/8/2026


Job Description

Job Title: Senior Data Engineer
Location: Denver, CO (Onsite)

Job Summary

We are seeking a highly skilled Senior Data Engineer to design, build, and optimize scalable cloud-based data platforms and pipelines. The ideal candidate will have strong expertise in AWS data services, Databricks, Snowflake, and modern data engineering practices for enterprise-scale analytics, data warehousing, and real-time processing environments. This role requires hands-on experience developing robust ETL/ELT pipelines, implementing data lake and data warehouse architectures, and ensuring high standards for data quality, testing, and operational excellence.

Key Responsibilities

- Design, develop, and maintain scalable batch and real-time data pipelines on AWS and Databricks platforms.
- Build and optimize enterprise data lake and data warehouse solutions using Redshift, Snowflake, Delta Lake, and Apache Iceberg.
- Develop ETL/ELT workflows using Python, SQL, Spark, and cloud-native technologies.
- Work with AWS services including S3, Step Functions, EventBridge, CloudWatch, Glue, Lambda, Kinesis, and EMR.
- Implement and manage data governance and metadata solutions using Unity Catalog and Glue Catalog.
- Create performant data models and dimensional schemas to support analytics and reporting needs.
- Integrate streaming and event-driven architectures using Kafka and AWS streaming services.
- Collaborate with cross-functional teams including Data Analysts, Architects, DevOps, and Business Stakeholders.
- Ensure high code quality through unit testing, integration testing, and detailed testing documentation.
- Build and maintain CI/CD pipelines and version control processes using Git and automation tools.
- Support Infrastructure as Code (IaC) practices using Terraform or similar technologies.
- Monitor, troubleshoot, and optimize data workflows for reliability, scalability, and performance.
- Participate in architecture discussions and recommend best practices for modern data engineering solutions.

Required Skills & Qualifications

Technical Skills

- Strong hands-on experience with the AWS data ecosystem: Redshift, S3, Step Functions, EventBridge, CloudWatch, AWS Glue, Lambda, Kinesis, and EMR.
- Expertise in Databricks technologies: Apache Spark, Delta Lake, Apache Iceberg, and Unity Catalog.
- Strong experience with the Snowflake data platform.
- Advanced SQL and Python programming skills.
- Experience building batch and real-time data processing pipelines.
- Strong understanding of data warehousing concepts and dimensional modeling.
- Experience with CI/CD implementation and Git-based development workflows.
- Familiarity with Infrastructure as Code tools such as Terraform.
- Experience with orchestration and open-source tools such as Apache Airflow and dbt is a plus.
- Knowledge of streaming technologies including Kafka/MSK is preferred.

Soft Skills

- Strong analytical and troubleshooting skills.
- Ability to manage priorities across multiple projects simultaneously.
- Excellent organizational and communication skills.
- Strong focus on code quality, testing, and documentation.
- Ability to work effectively in a collaborative onsite environment.
