Rapid and Safe Trajectory Planning over Diverse Scenes through Diffusion Composition

Anonymous Authors

Teaser


Abstract

Safe trajectory planning remains a significant challenge in complex, heterogeneous environments. Traditional approaches typically face a trade-off between computational efficiency and safety: comprehensive obstacle modeling enhances safety but incurs high computational overhead, whereas approximate approaches improve efficiency at the expense of potentially reduced safety. To address this trade-off, this paper introduces a rapid and safe trajectory planning framework based on a state-based diffusion model. By operating only on low-dimensional vehicle states, the diffusion approach achieves notable inference efficiency. Additionally, by composing diffusion models, the proposed framework generalizes safely across various scenarios, effectively navigating scenes not encountered during training. To further guarantee the safety of the generated trajectories, an efficient, rule-based safety filter is proposed that selects, from a batch of candidates, the optimal trajectory satisfying stringent safety and consistency criteria. In a single scenario, the proposed method achieves a mean inference time of only 0.21 seconds while maintaining high stability and safety. Evaluations on the F1TENTH real-world platform demonstrate that the composed model generalizes to previously unseen scenarios and that the resulting trajectories can be reliably followed by simple controllers to accomplish navigation tasks.
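The rule-based safety filter mentioned in the abstract can be read as a simple rejection-and-ranking step over the sampled batch. The sketch below is only an illustration of that idea under assumed shapes and thresholds (point obstacles, a fixed clearance, and consistency measured against the previously executed plan); it is not the paper's exact rule set.

```python
import numpy as np

def safety_filter(candidates, obstacles, prev_trajectory,
                  min_clearance=0.2, max_start_jump=0.5):
    """Keep candidates that clear all obstacles, then pick the one most
    consistent with the previously executed plan (names and thresholds
    are illustrative, not the paper's exact rules)."""
    feasible = []
    for traj in candidates:                       # traj: (T, 2) array of x-y waypoints
        # Safety rule: every waypoint keeps at least min_clearance from every obstacle.
        dists = np.linalg.norm(traj[:, None, :] - obstacles[None, :, :], axis=-1)
        if dists.min() < min_clearance:
            continue
        # Consistency rule: the new plan must start near the previous plan.
        if np.linalg.norm(traj[0] - prev_trajectory[0]) > max_start_jump:
            continue
        feasible.append(traj)
    if not feasible:
        return None                               # caller re-samples or brakes
    # Among feasible plans, prefer the one closest to the previous trajectory.
    return min(feasible, key=lambda t: float(np.linalg.norm(t - prev_trajectory)))
```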


The Performance of Model Composition in Simulation

Compositional weights [Dynamic Model, Static Model]; video panels: Dynamic Model, Static Model, Composed Result.
Effectiveness of Model Composition and Sensitivity of Compositional Weights. The composed model can safely generalize to unseen scenes when appropriate compositional weights are selected, achieving collision-free trajectories. However, increasing the weight assigned to the conditional models raises the failure rate in both test scenes. These results show that the generalization capability is sensitive to the compositional weights, and keeping the weights in an appropriate range is critical for minimizing failures.
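As a rough sketch of what composition with these weights could look like, one common recipe is to mix the two trained models' noise predictions before each reverse-diffusion update. The interface below is an assumption for illustration only and is not the authors' implementation.

```python
import torch

@torch.no_grad()
def composed_noise_prediction(x_t, t, dynamic_model, static_model,
                              w_dynamic=0.5, w_static=0.5):
    """One way to compose two trained state-based diffusion models:
    mix their noise (score) predictions with scalar weights before the
    usual reverse-diffusion update. The call signatures are assumptions."""
    eps_dynamic = dynamic_model(x_t, t)   # noise predicted by the dynamic-scene model
    eps_static = static_model(x_t, t)     # noise predicted by the static-scene model
    # The weights [w_dynamic, w_static] play the role of the compositional
    # weights above; the composed prediction then drives a standard sampler.
    return w_dynamic * eps_dynamic + w_static * eps_static
```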

Real-World Validation of Individual Static Model

This paper proposes a rapid trajectory generation approach that integrates SLAM-based perception with a diffusion model. Unlike end-to-end methods, it uses perception modules to extract low-dimensional vehicle states, reducing the computational cost of iterative denoising while retaining the diffusion model's generative power to ensure trajectory feasibility and safety.
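To make this pipeline concrete, a minimal sketch of one planning step is given below; the module interfaces (get_vehicle_state, sample, safety_filter) are hypothetical placeholders for the SLAM perception, diffusion sampler, and safety filter described above.

```python
import numpy as np

def plan_once(slam, diffusion_model, safety_filter, goal,
              num_candidates=16, horizon=32):
    """Illustrative planning step: read a low-dimensional vehicle state from the
    SLAM module, sample a batch of candidate trajectories with the state-based
    diffusion model, then keep one plan that passes the safety filter."""
    state = slam.get_vehicle_state()                     # e.g., [x, y, yaw, v]
    noise = np.random.randn(num_candidates, horizon, 2)  # x-y waypoints over the horizon
    candidates = diffusion_model.sample(noise, condition=(state, goal))
    return safety_filter(candidates)                     # result is tracked by a simple controller
```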

Real-World Validation of Composition for UNSEEN Scenes

Static Diffusion Model

Dynamic Diffusion Model
Composed Result
This model composition approach ensures collision-free trajectory planning in unseen scenes. The proposed method makes optimal test-time decisions to generate safe behaviors, such as accelerating to bypass or decelerating to avoid obstacles, allowing trajectory planning across diverse scenes without retraining.

Scene 1 - view 1:

Initial Pose: [0.86, 0.03]
Initial Pose: [2.73, 0.32]
Initial Pose: [3.03, -0.07]

Scene 1 - view 2:

Initial Pose: [0.86, 0.03]
Initial Pose: [2.73, 0.32]
Initial Pose: [3.03, -0.07]

Scene 2 - view 1:

Initial Pose: [0.82, 0.07]
Initial Pose: [2.76, 0.47]
Initial Pose: [2.79, -0.11]

Scene 2 - view 2:

Initial Pose: [0.82, 0.07]
Initial Pose: [2.76, 0.47]
Initial Pose: [2.79, -0.11]


BibTeX


      To be updated soon.