SPREAD: Sampling-based Pareto front Refinement via Efficient Adaptive Diffusion

Published in ICLR 2026, Rio de Janeiro, Brazil, 2026

Recommended citation: Hotegni, S.S., Peitz, S. (2026). SPREAD: Sampling-based Pareto front Refinement via Efficient Adaptive Diffusion. In: 14th International Conference on Learning Representations, ICLR 2026. International Conference on Learning Representations, ICLR.

Developing efficient multi-objective optimization methods to compute the Pareto set of optimal compromises between conflicting objectives remains a key challenge, especially for large-scale and expensive problems. To bridge this gap, we introduce SPREAD, a generative framework based on Denoising Diffusion Probabilistic Models (DDPMs). SPREAD first learns a conditional diffusion process over points sampled from the decision space and then, at each reverse diffusion step, refines candidates via a sampling scheme that uses an adaptive multiple gradient descent-inspired update for fast convergence alongside a Gaussian RBF-based repulsion term for diversity. Empirical results on multi-objective optimization benchmarks, including offline and Bayesian surrogate-based settings, show that SPREAD matches or exceeds leading baselines in efficiency, scalability, and Pareto front coverage. Code is available at https://github.com/safe-autonomous-systems/moo-spread.
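The refinement step described above combines a common-descent move with a diversity-promoting repulsion. The sketch below is a minimal illustration of that idea, not the authors' implementation: it uses the closed-form min-norm combination of two objective gradients (Desideri's MGDA, restricted here to two objectives for simplicity) together with a Gaussian RBF repulsion term. All function names, the bandwidth `h`, and the step sizes `lr` and `gamma` are illustrative assumptions.

```python
import numpy as np

def mgd_direction(grads):
    """Min-norm convex combination of two objective gradients
    (closed-form MGDA solution, two-objective case only)."""
    g1, g2 = grads
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom < 1e-12:                      # gradients (nearly) identical
        return g1
    alpha = np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0)
    return alpha * g1 + (1.0 - alpha) * g2

def rbf_repulsion(X, h=0.5):
    """Gaussian RBF repulsion: each candidate is pushed away from its
    neighbours, encouraging spread along the Pareto front."""
    diffs = X[:, None, :] - X[None, :, :]  # (n, n, d) pairwise x_i - x_j
    sq = (diffs ** 2).sum(-1)              # squared pairwise distances
    k = np.exp(-sq / (2.0 * h ** 2))       # RBF kernel weights
    return (k[:, :, None] * diffs).sum(axis=1) / h ** 2

def refine_step(X, grad_fns, lr=0.05, gamma=0.01):
    """One refinement step: descend along the common direction,
    then nudge candidates apart via the repulsion term."""
    rep = rbf_repulsion(X)
    out = np.empty_like(X)
    for i, x in enumerate(X):
        grads = np.stack([g(x) for g in grad_fns])
        out[i] = x - lr * mgd_direction(grads) + gamma * rep[i]
    return out
```

In SPREAD this kind of update is applied inside each reverse diffusion step; the sketch omits the diffusion model itself and only shows the gradient-plus-repulsion refinement on a standalone candidate set.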

Read paper here