Data Engineer II - AWS/PySpark/ETL

JPMorgan Chase Bank, N.A.

Columbus, OH (In Person)

Full-Time

Posted 2 weeks ago (Updated 5 days ago) • Actively hiring

Expires 6/3/2026


Job Description

Be part of a dynamic team where your distinctive skills will contribute to a winning culture and team. As a Data Engineer II at JPMorgan Chase within Consumer and Community Banking Data Technology, you serve as a seasoned member of an agile team to design and deliver trusted data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As a core technical contributor, you are responsible for delivering critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

Job responsibilities
  • Develops secure, high-quality production code, and reviews and debugs code written by others
  • Executes creative software solutions, design, development, and technical troubleshooting, with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
  • Identifies opportunities to eliminate or automate remediation of recurring issues to improve the overall operational stability of software applications and systems
  • Collaborates closely with cross-functional teams to develop efficient data pipelines that support various data-driven initiatives
  • Implements best practices for data engineering, ensuring data quality, reliability, and performance
  • Contributes to data modernization efforts by leveraging cloud solutions and optimizing data processing workflows
  • Performs data extraction and implements complex data transformation logic to meet business requirements
  • Leverages advanced analytical skills to improve data pipelines and ensure data delivery is consistent across projects
  • Monitors and executes data quality checks to proactively identify and address anomalies
  • Ensures data availability and accuracy for analytical purposes
  • Communicates technical concepts to both technical and non-technical stakeholders

Required qualifications, capabilities, and skills
  • Formal training or certification on data engineering concepts and 2+ years of applied experience
  • Experience with ETL tools like Ab Initio, Informatica, or Data Pipeline, and with workflow management tools (Airflow, etc.)
  • Strong hands-on coding experience with PySpark, Python, and AWS
  • Experience working with modern data lakes (Snowflake, Databricks, etc.)
  • Hands-on practical experience delivering system design, application development, testing, and operational stability
  • Very strong problem-solving skills
  • Proficiency in automation and continuous delivery methods

Preferred qualifications, capabilities, and skills
  • Advanced in one or more programming languages such as SQL, Java, etc.
  • Proficient in all aspects of the Software Development Life Cycle
  • Advanced understanding of agile methodologies such as CI/CD, Application Resiliency, and Security
  • Demonstrated proficiency in software applications and technical processes within a...
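The transformation and data-quality responsibilities described above can be sketched in plain Python (a production pipeline at this scale would use PySpark DataFrames instead of lists of dicts; all field names and quality rules here are hypothetical, not from the posting):

```python
# Toy sketch of a transform step followed by a data-quality check,
# the pattern the responsibilities above describe. Field names
# ("account_id", "amount") and the rules are hypothetical.

def transform(rows):
    """Normalize raw records: strip whitespace, cast amount to float."""
    out = []
    for r in rows:
        out.append({
            "account_id": r["account_id"].strip(),
            "amount": float(r["amount"]),
        })
    return out

def quality_check(rows):
    """Flag anomalies: a missing account id or a negative amount."""
    return [r for r in rows if not r["account_id"] or r["amount"] < 0]

raw = [
    {"account_id": " A1 ", "amount": "10.5"},
    {"account_id": "",     "amount": "-3"},
]
clean = transform(raw)
bad = quality_check(clean)
print(len(bad))  # the second record fails both checks
```

In PySpark the same shape would be a `withColumn` normalization followed by a `filter` that isolates anomalous rows for review.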
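Workflow management tools like the Airflow named in the qualifications schedule tasks so that each runs only after its upstream dependencies finish. A toy sketch of that core idea using only the standard library (the task names and DAG shape are made up for illustration):

```python
# Minimal illustration of the idea behind workflow managers such as
# Airflow: tasks execute in dependency order. The task names and the
# DAG shape below are hypothetical.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"quality_check"},
}

# static_order() yields tasks so every dependency comes first.
run_order = list(TopologicalSorter(dag).static_order())
print(run_order)  # ['extract', 'transform', 'quality_check', 'load']
```

A real Airflow DAG expresses the same dependencies with operators and `>>` chaining, plus scheduling, retries, and monitoring on top.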
