
Applied Scientist - Perception (SLAM/VIO), Fauna


Amazon.com, Inc.

New York, NY (In Person)

Full-Time

Posted 2 days ago (Updated 4 hours ago) • Actively hiring

Expires 6/9/2026



Job Description

We are seeking an Applied Scientist to develop and optimize Visual Inertial Odometry (VIO) and sensor fusion systems for our intelligent robots. In this role, you will design, implement, and deploy state estimation and tracking algorithms that enable robots to understand their position and motion in real time, even in challenging and dynamic environments. You will own the full pipeline from algorithm development through embedded deployment, ensuring that perception systems run efficiently on resource-constrained robotic hardware. You will also leverage modern machine learning approaches to push the boundaries of classical perception methods, combining learned representations with geometric techniques to achieve robust, real-time performance.

This is a deeply hands-on role. You will work directly with sensors, hardware, and real-world data, while prototyping, testing, and iterating in physical environments. The ideal candidate has strong foundations in VIO and sensor fusion, practical experience optimizing algorithms for embedded platforms, and familiarity with how modern deep learning is transforming perception.

Key job responsibilities
  • Design and implement Visual Inertial Odometry algorithms for robust real-time state estimation on robotic platforms like Sprout
  • Develop multi-sensor fusion pipelines integrating cameras, IMUs, and other sensing modalities for accurate pose tracking
  • Optimize perception and tracking algorithms for deployment on embedded hardware (e.g., ARM, GPU-accelerated edge devices) under strict latency and power constraints
  • Apply modern ML-based perception techniques (learned features, depth estimation, neural odometry) to complement and improve classical geometric approaches
  • Build and maintain calibration, evaluation, and benchmarking infrastructure for perception systems
  • Collaborate with hardware, controls, and navigation teams to integrate perception outputs into the robot's autonomy stack
  • Lead technical projects from research prototyping through production deployment

Basic Qualifications
  • PhD, or Master's degree with 3+ years of applied research experience
  • Experience with a programming language such as Python, Java, or C++
  • Hands-on experience developing and deploying Visual Inertial Odometry or visual-inertial SLAM systems
  • Strong understanding of multi-sensor fusion (cameras, IMUs, odometry) and state estimation (EKF, factor graphs)
  • Experience optimizing perception algorithms for embedded or resource-constrained hardware
  • Demonstrated hands-on experience with real sensor data, calibration, and physical robot platforms
  • Familiarity with modern ML approaches to perception (learned feature extraction, depth prediction, end-to-end odometry)

Preferred Qualifications
  • Experience leading technical initiatives and key deliverables
  • Publication record at major robotics or computer vision conferences (e.g., ICRA, IROS, RSS, CVPR, ECCV)
  • Experience with real-time systems programming and performance profiling on ARM/GPU platforms
  • Experience with state estimation on legged robots
  • Experience with stereo vision systems, camera-IMU calibration, time synchronization, and sensor characterization
  • Track record of shipping VIO or SLAM systems to production on physical robots at scale
  • Experience with NVIDIA Jetson, Qualcomm RB5, or similar embedded AI platforms
  • Familiarity with ROS/ROS2
  • Experience integrating learned perception modules (e.g., neural depth, feature matching networks) into geometric estimation pipelines
  • History of technical leadership and cross-functional collaboration

Amazon is an equal opportunity employer and does not discriminate on the basis of protected veteran status, disability, or other legally protected status. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

The base salary range for this position is listed below. Your Amazon package will include sign-on payments and restricted stock units (RSUs). Final compensation will be determined based on factors including experience, qualifications, and location. Amazon also offers comprehensive benefits including health insurance (medical, dental, vision, prescription, Basic Life & AD&D insurance and the option for Supplemental Life plans, EAP, Mental Health Support, Medical Advice Line, Flexible Spending Accounts, and Adoption and Surrogacy Reimbursement coverage), 401(k) matching, paid time off, and parental leave. Learn more about our benefits at https://amazon.jobs/en/benefits.

USA, NY, New York
  • $172,400.00 to $223,400.00 USD annually
