
AWS Data Engineer


E-Solutions Inc.

Woodbridge Township, NJ (In Person)

Full-Time

Posted 2 weeks ago (Updated 1 day ago) • Actively hiring

Expires 6/7/2026



Job Description

AWS Data Engineer (Iselin, NJ, 08830) | 04/24/26
Job Title: AWS Data Engineer with Databricks and dbt
Job Location: Iselin, NJ
Skill Cluster: Data - AWS Glue with EMR, Spark, Redshift, Kinesis, S3, AWS native data services
Primary Skill: CICS

Responsibilities

Design and Development of Data Pipelines:
Design, build, and optimize robust ETL/ELT pipelines using AWS services (S3, Glue, Lambda) and the Databricks platform (Spark, Delta Lake, DLT). Ingest and process large volumes of structured and semi-structured data from various sources (APIs, databases, streaming platforms like Kafka or Kinesis) into a centralized data lake or lakehouse.
Data Transformation and Modeling:
Develop and maintain data models (e.g., star/snowflake schemas, medallion architecture) optimized for analytics and BI tools using dbt (Data Build Tool). Write complex and efficient SQL queries and Python/PySpark code for data manipulation, transformation, and validation within the Databricks environment. Implement data quality checks, tests, and documentation as part of the dbt workflow, enforcing data governance and security standards.
Orchestration and Automation:
Orchestrate and monitor data workflows using Databricks Jobs or external tools like AWS MWAA (Managed Workflows for Apache Airflow). Implement CI/CD pipelines and version control (Git) for all data engineering artifacts (code, configurations, dbt models) to ensure reliable and consistent deployments.
Performance Optimization and Operations:
Monitor, troubleshoot, and resolve issues in production data pipelines and environments to ensure high performance, reliability, and cost-efficiency. Tune Spark jobs and optimize Delta Lake features (Z-Order, partitioning) to handle growing data volumes and complexity.
Collaboration and Support:
Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights. Provide expertise and guidance on data best practices, promoting a culture of data quality and governance.

Must-Have Skills:
- SQL
- dbt Core and dbt Cloud
- AWS (Redshift)
- Databricks with AWS
- SQL Server DB
- Stonebranch scheduling tool
- Understanding of CI/CD and Git
- Experience working in an Agile environment with JIRA

Other Skills Required / Good to Have:
- Tableau experience
- Harness DevOps
- Proficiency in Linux/Unix environments

Contact: Shubham Saxena
w: www.e-solutionsinc.com
e: Shubham.s@e-solutionsinc.com
