Rust Jobs for Rustaceans 
The hottest Rust jobs in one place. Bookmark this page and tell a friend :)
Latest jobs

Software Engineer
Oculus VR
Active - posted 28 days ago

Software Engineer, Robot Software Platform
Wayve
Active - posted 28 days ago

Rust Software Engineer - Dragonfly Portfolio
Dragonfly
Active - posted 28 days ago

Software Engineer, App Runtime
Docker, Inc
Active - posted 28 days ago

GPU Systems (NVIDIA) Software Engineer
Edera
Active - posted 28 days ago

Senior / Staff Software Engineer (Database)
Materialize
Active - posted 28 days ago

Senior Backend Engineer - Autonomous Agents, Starfleet
xAI
Active - posted 28 days ago

Senior Software Engineer, Profiling
Sentry
Active - posted 29 days ago

Sr Software Engineer Embedded
NRG Energy
Stale - posted 31 days ago

Senior Software Engineer, Simulation Tooling
Zipline
Stale - posted 31 days ago

Senior Staff Engineer
MongoDB
Active - posted 3 days ago
Job Description
We’re looking for a Staff Engineer to join our team building the inference platform for embedding models that power semantic search, retrieval, and AI-native features across MongoDB Atlas.
This role is part of the broader Search and AI Platform team and involves close collaboration with AI engineers and researchers from our Voyage.ai acquisition, who are developing industry-leading embedding models. Together, we’re building the infrastructure that enables real-time, high-scale, and low-latency inference, all deeply integrated into Atlas and optimized for developer experience.
As a Staff Engineer, you’ll be hands-on with design and implementation, while working with engineers across experience levels to build a robust, scalable system. The focus is on latency, availability, observability, and scalability in a multi-tenant, cloud-native environment.
Key Responsibilities:
- Partner with Search Platform and Voyage.ai AI engineers and researchers to productionize state-of-the-art embedding models and rerankers, supporting both batch and real-time inference
- Lead key projects around performance optimization, GPU utilization, autoscaling, and observability for the inference platform
- Design and build components of a multi-tenant inference service that integrates with Atlas Vector Search, driving capabilities for semantic search and hybrid retrieval
- Contribute to platform features like model versioning, safe deployment pipelines, latency-aware routing, and model health monitoring (see the routing sketch after this list)
- Collaborate with peers across ML, infra, and product teams to define architectural patterns and operational practices that support high availability and low latency at scale
- Guide decisions on model serving architecture using tools like vLLM, ONNX Runtime, and container orchestration in Kubernetes
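To make the routing and autoscaling work above a little more concrete, here is a minimal, hypothetical Rust sketch of latency-aware routing across serving replicas: each replica tracks an exponentially weighted moving average (EWMA) of observed request latency, and the next request goes to the replica with the lowest estimate. The names (Replica, pick_replica) and the EWMA policy are illustrative assumptions, not MongoDB's actual design.

```rust
// Hypothetical sketch: latency-aware routing across inference replicas.
// Each replica keeps an exponentially weighted moving average (EWMA) of
// observed request latency; the router picks the lowest estimate.
struct Replica {
    name: String,
    ewma_latency_ms: f64,
}

impl Replica {
    fn new(name: &str) -> Self {
        Replica { name: name.to_string(), ewma_latency_ms: 0.0 }
    }

    // Fold a new latency observation into the moving average.
    fn record_latency(&mut self, observed_ms: f64, alpha: f64) {
        self.ewma_latency_ms = if self.ewma_latency_ms == 0.0 {
            observed_ms
        } else {
            alpha * observed_ms + (1.0 - alpha) * self.ewma_latency_ms
        };
    }
}

// Route the next request to the replica with the lowest latency estimate.
fn pick_replica(replicas: &[Replica]) -> Option<&Replica> {
    replicas
        .iter()
        .min_by(|a, b| a.ewma_latency_ms.total_cmp(&b.ewma_latency_ms))
}

fn main() {
    let mut replicas = vec![Replica::new("gpu-0"), Replica::new("gpu-1")];
    replicas[0].record_latency(12.0, 0.2);
    replicas[1].record_latency(48.0, 0.2);
    if let Some(best) = pick_replica(&replicas) {
        println!("route next request to {} ({:.1} ms)", best.name, best.ewma_latency_ms);
    }
}
```

In a real multi-tenant service, a decision like this would also weigh queue depth, tenant quotas, and GPU batch occupancy, which is the kind of trade-off this role owns.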
Who You Are:
- 8+ years of engineering experience in backend systems, ML infrastructure, or scalable platform development
- Expertise in serving embedding models in production environments
- Strong systems skills in languages like Go, Rust, C++, or Python, and experience profiling and optimizing performance
- Comfortable working on cloud-native distributed systems, with a focus on latency, availability, and observability
- Familiarity with inference runtimes and vector search systems (e.g., Faiss, HNSW, ScaNN); a toy scoring example follows this list
- Proven ability to collaborate across disciplines and experience levels, from ML researchers to junior engineers
- Experience with high-scale SaaS infrastructure, particularly in multi-tenant environments
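For candidates less familiar with vector search, the scoring behind systems like Faiss, HNSW, and ScaNN comes down to nearest-neighbor ranking over embeddings. The sketch below is a brute-force cosine-similarity baseline in Rust, purely illustrative; production systems such as Atlas Vector Search use approximate indexes rather than scanning every vector.

```rust
// Illustrative baseline only: brute-force cosine-similarity search over
// embeddings. Real vector search engines build approximate indexes, but
// the underlying similarity scoring is the same idea.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 { 0.0 } else { dot / (norm_a * norm_b) }
}

// Return the indices of the `k` corpus vectors most similar to `query`.
fn top_k(query: &[f32], corpus: &[Vec<f32>], k: usize) -> Vec<usize> {
    let mut scored: Vec<(usize, f32)> = corpus
        .iter()
        .enumerate()
        .map(|(i, v)| (i, cosine_similarity(query, v)))
        .collect();
    scored.sort_by(|a, b| b.1.total_cmp(&a.1));
    scored.into_iter().take(k).map(|(i, _)| i).collect()
}

fn main() {
    let corpus = vec![vec![1.0, 0.0], vec![0.7, 0.7], vec![0.0, 1.0]];
    let query = vec![0.9, 0.1];
    println!("top matches: {:?}", top_k(&query, &corpus, 2));
}
```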
Nice to Have:
- Prior experience working with model teams on inference-optimized architectures
- Background in hybrid retrieval, prompt-based pipelines, or retrieval-augmented generation (RAG)
- Contributions to relevant open-source ML serving or vector search infrastructure
Why Join Us:
- Be part of shaping the future of AI-native developer experiences on the world’s most popular developer data platform
- Collaborate with ML experts from Voyage.ai to bring cutting-edge research into production at scale
- Solve hard problems in real-time inference, model serving, and semantic retrieval — in a system used by thousands of customers worldwide
- Work in a culture that values mentorship, autonomy, and strong technical craft
- Competitive compensation, equity, and career growth in a hands-on technical leadership role