
System Solutions Architect for Numems AI Memory Engine & MRAM Architecture

Job

Numem

Mesa, AZ (In Person)

$187,500 Salary, Full-Time

Posted 8 weeks ago (Updated 7 weeks ago) • Actively hiring

Expires 5/27/2026


Job Description

About Numem

Numem is redefining memory for AI. Our patented AI Memory Engine and MRAM-based architecture deliver SRAM-class performance, ultra-low power, and up to 2.5× density improvements, unlocking the next generation of memory solutions for edge AI, data center, automotive, and wearable devices. With major design engagements across TSMC, Samsung, and TI, and collaborations with leading tier-1 AI customers, we are pushing the frontier of high-performance memory at scale. Numem is a memory startup looking to address the AI Memory Wall and unleash AI system-level performance. We work with leading foundries to build products and plan to ramp revenue starting in 2027.

Role Overview

We are looking for an experienced, hands-on System Solutions Architect to help shape the next generation of Numem's MRAM-based AI memory systems. This role is central to defining and evolving the architecture of our AI Memory Engine, discrete MRAM products (up to 1GB+), and embedded memory subsystems across advanced process nodes (12nm to 5nm). You will lead system solutions architectural innovation, guide design implementation, and collaborate with customer applications teams to solve real-world memory bottlenecks in AI-centric systems.

Key Responsibilities

- Document AI Memory Engine architecture with Numem staff for development with leading foundries.
- Own all architecture specifications related to roadmap IPs and products.
- Work with Numem staff on the software stack/API pertaining to Numem products.
- Help define and manage memory interface definitions to support MRAM and other memory technologies (e.g., LPDDR5X, PCIe).
- Lead modeling and performance analysis of power, latency, bandwidth, endurance, and area trade-offs.
- Collaborate with the internal engineering team on product design and test.
- Engage directly with customers and partners to align architecture with product needs (automotive, edge, data center, medical, etc.).
- Develop the SoC model environment, including model integration, use-case simulation, and debugging.
- Develop scripts to speed up debugging and visualization of the metrics collected from the SoC model environment.
- Develop and deliver a system-level modeling and simulation environment for next-generation AI memory solutions.
- Optimize memory system architecture for performance, power efficiency, and scalability.
- Develop and maintain hardware specifications, technical documentation, and test plans.
- Participate in technical discussions and provide input on hardware system design and architecture with industry ecosystem partners.

Required Qualifications

- BS/MS in Electrical Engineering or Computer Engineering.
- 6+ years of experience in memory architecture and memory I/O interfaces at advanced nodes.
- Demonstrated proficiency in creating and driving architecture specifications.
- Exposure to Linux system drivers for LPDDR, PCIe, and CXL.
- Strong system-level perspective, including interaction with compute cores and SoC interconnects.
- Prior experience bringing memory subsystems from concept through silicon validation.
- Excellent communication skills and ability to work cross-functionally in a fast-paced startup environment.
- Detailed knowledge of cache subsystems, including caching policies and the trade-offs among latency, bandwidth, and hierarchy.
- Detailed knowledge of memory subsystem design, including existing and emerging JEDEC memory standards.
- Understanding of the roles of the CPU, memory hierarchy, and accelerators in optimizing system-level power and performance for complex workloads.
- Understanding of various levels of modeling (e.g., analytical, TLM, cycle-accurate) and their trade-offs.
- Working knowledge of or experience with simulation frameworks such as QEMU, gem5, DRAMPower, DRAMSim, and other relevant open-source simulators is a plus.
- Familiarity with memory and interface technologies such as DRAM, HBM, NVMe, CXL, and PCIe is a plus.

Why Join Us

- Help build the next-generation memory architecture for AI based on MRAM and other memory technologies.
- Work with leading semiconductor and AI companies on real-world products.
- Collaborate with top-tier talent backed by investors including Cambium Capital and industry luminaries.
- Influence the direction of an AI-first memory architecture from the ground up with a system-level perspective.
- Learn from leading memory architects and technologists to understand memory device and system-level development.
Location: Mesa, AZ

Job Type: Full-time

Pay: $175,000.00 - $200,000.00 per year

Benefits: 401(k), dental insurance, flexible schedule, health insurance, paid time off, vision insurance

Work Location: In person
