Data Engineer II, MI Data Position Available In New York, New York

Company:
Point72
Job Type:
Full-time, Onsite

Job Description

Design and build solutions that enable investment professionals to effortlessly extract insights from large, complex, compliance-approved structured and unstructured data sets. Work on Named Entity Recognition and theme-extraction methods. Work with internal stakeholders and the internal UI/UX team to add the results to research analysts' day-to-day workflows using technologies such as FastAPI, embedding models, etc. Work on Retrieval-Augmented Generation applications to allow retrieval and content generation on proprietary data using technologies such as vector databases and Streamlit. Work closely with investment professionals, researchers, and data scientists to design, build, and launch robust end-to-end data pipelines and tools that help extract the most value out of data assets in the Energy and Industrials sectors. Develop and support big data processing pipelines, including but not limited to ingestion, transformation, load, and end-customer delivery. Mentor new joiners and interns in best practices and ways of supporting infrastructure and sector work. Serve as the person on support, keeping all of our data pipelines running smoothly and answering any platform-related questions. Coordinate between different internal teams (Data Science, Data Engineering, Data Research, Data Sourcing, Product) to plan and execute different steps of product development. This job is fully in person and cannot be done remotely.

Requires a Bachelor’s or Master’s degree in Computer Science, Computer Engineering, or a related field. Position requires experience (3 years with a Bachelor’s or 1 year with a Master’s). Must have some experience in each of the following skills: Python development. Pandas, NumPy, Matplotlib. Writing advanced SQL queries. Design and implementation of end-to-end data pipelines. Transforming large unstructured datasets into valuable investment research inputs. Big data technologies: Spark, Databricks, Delta, and Polars. Spark or Scala. Graph databases: Neo4j, Cypher, graph theory; GQL and SPARQL are a plus. Understanding of statistics and advanced modeling techniques. Understanding of Natural Language Processing models. AWS services. Writing code that pulls data from APIs, SFTP, S3. Track record of delivering novel and innovative solutions to challenges. Ability to work cross-functionally and communicate problems and solutions effectively.

Reference job ID #63.

Minimum Salary:

190000

Maximum Salary:

280000

Salary Unit:

Yearly
