
Machine Learning Engineer | Python | PyTorch | Distributed Training | Optimisation | GPU | Hybrid, San Jose, CA


Enigma

Campbell, CA (In Person)

Full-Time

Posted 1 week ago (Updated 1 day ago) • Actively hiring

Expires 6/8/2026


Job Description

Machine Learning Engineer | Python | PyTorch | Distributed Training | Optimisation | GPU | Hybrid, San Jose, CA at Enigma in Campbell, California. Posted about 3 hours ago.
Type: full-time
Title: Machine Learning Engineer
Location: San Jose, CA

Responsibilities:
- Productize and optimize models from Research into reliable, performant, and cost-efficient services with clear SLOs (latency, availability, cost).
- Scale training across nodes/GPUs (DDP/FSDP/ZeRO, pipeline/tensor parallelism) and own throughput/time-to-train using profiling and optimization.
- Implement model-efficiency techniques (quantization, distillation, pruning, KV-cache, Flash Attention) for training and inference without materially degrading quality.
- Build and maintain model-serving systems (vLLM/Triton/TGI/ONNX/TensorRT/AITemplate) with batching, streaming, caching, and memory management.
- Integrate with vector/feature stores and data pipelines (FAISS/Milvus/Pinecone/pgvector; Parquet/Delta) as needed for production.
- Define and track performance and cost KPIs; run continuous improvement loops and capacity planning.
- Partner with ML Ops on CI/CD, telemetry/observability, and model registries; partner with Scientists on reproducible handoffs and evaluations.
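For candidates unfamiliar with the distributed-training side of this role: the core of data-parallel training (what PyTorch DDP automates with an all-reduce after every backward pass) is averaging each parameter's gradient across workers so every replica applies an identical update. A minimal pure-Python sketch of that averaging step, with illustrative values not taken from the posting:

```python
# Conceptual sketch of the gradient averaging behind data-parallel training.
# In real DDP this is done by torch.distributed.all_reduce over NCCL; the
# function and sample gradients below are illustrative only.

def all_reduce_mean(per_worker_grads):
    """Average each parameter's gradient across workers.

    per_worker_grads[w][p] is worker w's local gradient for parameter p;
    locals differ because each worker saw a different data shard.
    """
    num_workers = len(per_worker_grads)
    num_params = len(per_worker_grads[0])
    return [
        sum(worker[p] for worker in per_worker_grads) / num_workers
        for p in range(num_params)
    ]

# Two workers, two parameters each.
grads = [[1.0, -2.0], [3.0, 0.0]]
print(all_reduce_mean(grads))  # -> [2.0, -1.0], applied identically on all workers
```

After the averaged gradients are applied, all replicas hold the same weights, which is what keeps data-parallel training mathematically equivalent to a single large batch.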
Educational Qualifications:
- Bachelor's in Computer Science, Electrical/Computer Engineering, or a related field required; Master's preferred (or equivalent industry experience).
- Strong systems/ML engineering background with exposure to distributed training and inference optimization.
Industry Experience:
- 3-5 years in ML/AI engineering roles owning training and/or serving in production at scale.
- Demonstrated success delivering high-throughput, low-latency ML services with reliability and cost improvements.
- Experience collaborating across Research, Platform/Infra, Data, and Product functions.
Technical Skills:
- Familiarity with deep learning frameworks: PyTorch (primary), TensorFlow.
- Exposure to large-model training techniques (DDP, FSDP, ZeRO, pipeline/tensor parallelism); distributed training experience a plus.
- Optimization: experience profiling and optimizing code execution and model inference (PTQ/QAT/AWQ/GPTQ, pruning, distillation, KV-cache optimization, Flash Attention).
- Scalable serving: autoscaling, load balancing, streaming, batching, caching; collaboration with platform engineers.
- Data & storage: SQL/NoSQL, vector stores (FAISS/Milvus/Pinecone/pgvector), Parquet/Delta, object stores.
- Writes performant, maintainable code.
- Understanding of the full ML lifecycle: data collection, model training, deployment, inference, optimization, and evaluation.
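Of the optimization techniques listed, post-training quantization (PTQ) is the most self-contained to illustrate. A minimal pure-Python sketch of symmetric int8 weight quantization, a simplified stand-in for what libraries such as PyTorch's quantization tooling automate; the function names and sample weights are illustrative:

```python
# Symmetric int8 post-training quantization (PTQ) of a weight vector.
# One scale maps the float range [-max|w|, +max|w|] onto [-127, 127];
# real PTQ pipelines add per-channel scales and activation calibration.

def quantize_int8(weights):
    """Return (int8 values, scale) for a list of float weights."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid zero scale
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats; error per element is at most scale / 2."""
    return [v * scale for v in q]

w = [0.5, -1.0, 0.25]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)  # close to w; memory per weight drops 4x vs float32
```

The quality/cost trade-off the posting alludes to is visible here: the int8 form is 4x smaller than float32, and the reconstruction error is bounded by half the scale, which is why quantization can be applied "without materially degrading quality" when the weight range is well behaved.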
