Data Engineer

Berkley

Greenwich, CT (In Person)

Full-Time

Posted 4 weeks ago (Updated 3 weeks ago) • Actively hiring

Expires 5/28/2026


Job Description

Company Details

"Our Company provides a state of predictability which allows brokers and agents to act with confidence."

Founded in 1967, W. R. Berkley Corporation has grown from a small investment management firm into one of the largest commercial lines property and casualty insurers in the United States. Along the way, we've been listed on the New York Stock Exchange, become a Fortune 500 company, joined the S&P 500, and seen our gross written premiums exceed $10 billion. Today the Berkley brand comprises more than 60 businesses worldwide, divided into two segments: Insurance, and Reinsurance & Monoline Excess. Led by our Executive Chairman, founder, and largest shareholder, William R. Berkley, and our President and Chief Executive Officer, W. Robert Berkley, Jr., W. R. Berkley Corporation is well positioned to respond to opportunities for future growth. The Company is an equal employment opportunity employer.

Overview

We are seeking a Data Engineer with strong engineering, coding, and problem-solving skills to design, build, and operate data platforms that support actuaries, analytics, modeling, and AI-enabled workflows. The role suits someone who is technically strong, comfortable working independently, and able to translate complexity into robust, well-designed systems that others can rely on. The position emphasizes engineering rigor, high-quality code, system reliability, and sound judgment over one-off solutions or purely mechanical implementations. We seek someone who will challenge the status quo, find better ways to build and operate data systems, and advocate for the thoughtful application of modern data engineering, data science, and AI approaches.

Responsibilities

  • Write production-quality code for data ingestion, transformation, orchestration, and monitoring.
  • Design, build, and maintain reliable, scalable data pipelines and data platforms, including batch or distributed processing workloads (e.g., Spark-based pipelines).
  • Partner with actuaries, analytics, data science, and business teams to enable modeling and AI use cases.
  • Apply AI-assisted engineering approaches, including LLM-enabled tools or agents, to improve data quality, observability, documentation, and productivity.
  • Identify data quality issues, bottlenecks, and failure modes; design systems that are resilient and observable.
  • Stay current with data engineering and AI platform advancements, evaluate new tools, and recommend adoption where appropriate.
  • Apply professional skepticism and alternate approaches to validate data correctness, lineage, and assumptions.
  • Communicate system design, trade-offs, and limitations clearly to technical and non-technical stakeholders.
  • Provide support and guidance to others at earlier stages in their data engineering or AI journey.

Qualifications

  • 4–7 years of relevant data engineering, software engineering, or technical experience.
  • A Master's degree in Data Engineering or Computer Science.
  • Familiarity with cloud data platforms and distributed processing frameworks (e.g., Databricks, Snowflake, Spark, or similar) and modern data engineering tooling.
  • Strong programming skills, particularly in Python and SQL (including experience with distributed or batch processing frameworks such as PySpark or equivalent), with an emphasis on maintainable, testable code.
  • Experience designing and operating data pipelines, data lakes/warehouses, or distributed data systems.
  • Experience applying AI, machine learning, or LLM-based tools to real engineering problems (e.g., building agents, calling model APIs, integrating AI into engineering workflows).
  • Experience working with large or complex data flows and creating defensible system designs and implementation plans.
  • Strong professional judgment, curiosity, and attention to detail.

Sponsorship Details

Sponsorship is not offered for this role.
