MLOps Engineer: Perception & Foundation Models

Zendar

Paris, France (Île-de-France)
Posted on Mar 15, 2026
Zendar is looking for an MLOps Engineer to join our Paris office. We are currently deploying one of the world's most advanced 360-degree radar-based perception systems and are expanding our capabilities through the early fusion of camera and radar. As we scale our foundation model training efforts, we need a seasoned MLOps engineer to build and own the infrastructure that makes it all possible.

This is a unique opportunity to shape the ML platform of a team unburdened by legacy tooling. You will define, own, and build the operational backbone that enables our research engineers to train, iterate, and ship perception models at scale.

About Zendar

Zendar is building perception for physical AI — giving engineers a strong foundation for creating world-class robotics applications. At Zendar, you’ll work on perception foundation models that enable robots to understand and interact with their environments across a wide range of industries.

Zendar pioneered RF perception that delivers a vision-like, semantically segmented understanding of the environment — running on embedded automotive systems using only radar data. This RF perception forms the backbone of Zendar’s next-generation foundation models, which are built around early fusion of RF and vision data.

This architecture inverts the traditional perception stack. Instead of treating RF signals as secondary, Zendar’s models combine vision’s high angular resolution with RF’s strong temporal and spatial understanding at the earliest stages of perception. The result is a system that sees farther, remains robust to occlusion and adverse weather, and operates far more efficiently than vision-only or lidar-based approaches.

See a demo of Zendar’s foundational RF perception

At Zendar, you’ll work at the cutting edge of autonomous mobility and robotics — advancing foundation models that will power the next generation of physical AI systems. You’ll work with large-scale, real-world, multi-modal datasets composed of synchronized and calibrated radar, camera, and lidar data collected across multiple continents.

Our team brings together deep expertise across hardware, signal processing, machine learning, and software engineering, with decades of experience in sensing and perception. We are a global team with offices in Berkeley, Lindau (Germany), and Paris (France). Zendar is well-funded by leading Tier-1 venture capital firms and has established strong industry partnerships.

Although AI is central to what we build, our hiring process is intentionally human: every résumé is reviewed by a real person.

Your Role

As an MLOps Engineer on our MLOps & Data Engineering team, your goal is to build and operate the infrastructure that powers the training and validation of our multi-modal foundation models. You will work in close partnership with the Perception & ML team, acting as the platform layer that eliminates friction from the research-to-production loop so that research engineers can move faster and with greater confidence.

Why This Role Is Exciting

  • Ownership: Working closely with the MLOps Lead, you will help define and shape the MLOps strategy from the ground up.
  • Scale: You will operate infrastructure handling real-world datasets spanning tens of thousands of kilometers across multiple continents.
  • Impact: Your work directly accelerates the delivery of perception models validated on real vehicles/devices.

What You Will Do

  • Maintain & Improve Training Infrastructure: Contribute to and maintain scalable training pipelines on GPU clusters, optimizing container images (Docker/NVIDIA) for training workloads — minimizing build times, image sizes, and cold-start latency. Act as the team's primary reference for ML coding best practices, guiding research engineers toward production-grade, maintainable ML training code.
  • Drive Experiment Tracking: Build and standardize the team's use of Weights & Biases (WandB) for experiment tracking, hyperparameter sweeps, and model registry. Ensure every training run is fully reproducible and traceable.
  • Manage Dataset Versioning & Lineage: Implement robust dataset versioning and tracking workflows using WandB Artifacts or equivalent tooling. Maintain full lineage from raw sensor data to training-ready splits across our multi-continental dataset.
  • Accelerate the Iteration Loop: Reduce time-to-first-result for new experiments by streamlining job scheduling, data loading pipelines, and environment management.
  • Ensure Reliability at Scale: Monitor training jobs, proactively detect failures, and build alerting and recovery mechanisms so large training runs complete without costly interruptions.
  • Collaborate on Deployment Pipelines: Partner with embedded and platform teams to package and export trained models (ONNX, TensorRT), ensuring smooth handoff from training to on-vehicle inference.
  • Document, Standardize & Upskill: Establish and document best practices for experiment management, and actively support the Perception & ML team in adopting them — through code reviews, pair-working sessions, and hands-on guidance — so the whole team follows consistent, auditable workflows.

What We Look For

  • Experience: 5+ years in MLOps, ML infrastructure, or a closely related role, with demonstrated ownership of production-grade ML platforms. Must have hands-on experience training ML models end-to-end, from data preparation through to evaluation and deployment.
  • Engineering Proficiency: Proficient in Python; comfortable reading and debugging PyTorch training code. Familiarity with CI/CD pipelines (GitHub Actions, GitLab CI, or similar).
  • Container Expertise: Deep experience with Docker and container optimization for GPU workloads; familiarity with NVIDIA container toolkit and multi-stage build strategies.
  • Orchestration & Scheduling: Experience with distributed training orchestration tools (Kubernetes, Slurm, Ray, or equivalent) in a GPU cluster environment.
  • Data Pipeline Fluency: Experience building and maintaining large-scale data pipelines (e.g., with DVC, Delta Lake, or custom tooling).
  • Reliability Mindset: Strong instincts for observability, monitoring, and failure recovery in long-running distributed jobs.

Bonus Points

  • Autonomous Driving / Robotics: Prior exposure to sensor data (camera, radar, lidar) and the storage and preprocessing challenges that come with it.
  • Performance Optimization: Experience profiling and optimizing data loading bottlenecks and mixed-precision training.
  • Model Export & Serving: Familiarity with ONNX, TensorRT, and the constraints of real-time embedded inference.
  • Experiment & Data Tracking: Hands-on expertise with Weights & Biases, including experiment tracking, WandB Sweeps, Artifacts, and the Model Registry.
  • Multi-cloud / HPC: Experience managing training infrastructure across cloud providers (AWS, GCP, Azure) or HPC environments.
  • Foundation Model Training: Understanding of scaling laws, checkpoint management, and the operational challenges of training large models from scratch.

What We Offer

  • Opportunity to make an impact at a young, venture-backed company in an emerging market
  • Competitive salary ranging from €75,000 to €95,000 annually depending on experience, plus equity
  • Hybrid work model: in office 3 days per week (Monday, Tuesday, Thursday), the rest… work from wherever!
  • Modern Workspace: Fully equipped, modern office in the heart of Paris
  • Transportation/Commute: Commuter benefits (e.g., partial reimbursement for public transport or cycling programs, where applicable)
  • Subsidized meal vouchers (tickets restaurant)
  • Wellness Pass (formerly Gymlib)

Zendar is committed to creating a diverse environment where talented people come to do their best work. We are proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.