Video generative models trained on expert demonstrations have been utilized as performant text-conditioned visual planners for solving robotic tasks. However, generalization to unseen tasks remains a challenge. While generalization may be improved by leveraging prior knowledge learned from additional pre-collected offline data sources, such as web-scale video datasets, in the era of experience we aim to design agents that continuously improve in an online manner from self-collected behaviors. In this work we thus propose Self-Improving Loops for Visual Robotic Planning (SILVR), in which an in-domain video model iteratively updates itself on self-produced trajectories and steadily improves its performance on a specified task of interest. We apply SILVR to a diverse suite of MetaWorld tasks, as well as two manipulation tasks on a real robot arm, and find that performance improvements continuously emerge over multiple iterations for novel tasks unseen during initial in-domain video model training. We demonstrate that SILVR is robust in the absence of human-provided ground-truth reward functions or expert-quality demonstrations, and that it outperforms alternative approaches that utilize online experience in both performance and sample efficiency.
SILVR has access to two pretrained video generative models: one pretrained on internet-scale data and another pretrained on a general set of in-domain demonstrations. SILVR utilizes these components for visual planning (e.g., by combining them through adaptation), and improves its decision-making performance by updating the in-domain video model on its own self-collected experience. In this way, SILVR effectively combines offline data with online experience to iteratively bootstrap an in-domain video model into a strong visual planner, even for novel tasks of interest.
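The overall shape of this loop can be illustrated with a minimal, heavily simplified sketch. Everything below is hypothetical and not from the paper: the video model is reduced to a single planning-quality scalar, and planning, filtering, and fine-tuning are stubbed, so only the plan-filter-update structure of the self-improving loop is shown.

```python
import random

class ToyVideoModel:
    """Stand-in for the in-domain video generative model (hypothetical)."""

    def __init__(self, quality: float):
        self.quality = quality  # scalar proxy for planning competence

    def plan(self, rng: random.Random) -> float:
        # Simulate executing one visual plan; return a scalar success score.
        return min(1.0, max(0.0, self.quality + rng.uniform(-0.2, 0.2)))

    def finetune(self, scores: list) -> None:
        # Stand-in for fine-tuning on self-collected trajectories:
        # move quality toward the mean score of the kept rollouts.
        target = sum(scores) / len(scores)
        if target > self.quality:
            self.quality += 0.5 * (target - self.quality)

def silvr_loop(model, iterations=5, rollouts=32, keep_frac=0.25, seed=0):
    """One possible shape of the self-improving loop: plan, filter, fine-tune."""
    rng = random.Random(seed)
    history = [model.quality]
    for _ in range(iterations):
        # Collect rollouts with the current model and keep only the best ones.
        scores = sorted((model.plan(rng) for _ in range(rollouts)), reverse=True)
        kept = scores[: max(1, int(keep_frac * rollouts))]
        # Update the in-domain model on its own filtered experience.
        model.finetune(kept)
        history.append(model.quality)
    return history

history = silvr_loop(ToyVideoModel(quality=0.3))
```

Under these toy dynamics, filtering for the best self-collected rollouts and fine-tuning on them yields a non-decreasing competence curve across iterations, mirroring the bootstrapping behavior described above.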
We evaluate SILVR in two main robot settings: a real-world Franka Emika Panda robot arm and the simulated MetaWorld-v2 environment. On the Panda arm we consider two distinct tasks, pushing a colored cup and opening a colored drawer, where generalization is evaluated over unseen colors and combinations; in MetaWorld, generalization is across novel tasks with their own visual settings. We note that all tasks visualized below are novel: the video model used for visual planning never accessed any demonstrations for these tasks during initial pretraining, so all iterative performance gains arise from self-collected experience through SILVR.
A natural question is to what extent these performance trends hold over a large number of iterations. We demonstrate quantitatively that overall performance still increases monotonically, but with saturation and diminishing returns starting from Iteration 5. Furthermore, as visual planning can be slow at inference time due to its requirement of simulating visual futures, we investigate distilling the final visual planning components into a behavior cloning policy. We find that distillation preserves, and can even slightly exceed, the final performance achieved by the visual planning components through self-improvement. SILVR thus supports a balance between visual planning, which is slower but superior in its self-improvement capabilities, and distillation into a lightweight policy that provides both fast inference and high task performance.
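The distillation step above amounts to behavior cloning: supervised regression from observations onto the actions the (slow) planner would take. The following is a minimal sketch under toy assumptions of our own, not the paper's implementation: the "planner" is a stand-in scalar function, and the student is a 1-D linear policy fit in closed form.

```python
import random

def planner_action(obs: float) -> float:
    # Hypothetical stand-in for querying the self-improved visual planner
    # (slow at inference time); here it is just a fixed linear map.
    return 2.0 * obs - 0.5

def collect_dataset(n=200, seed=0):
    # Roll out the planner to build a supervised (observation, action) dataset.
    rng = random.Random(seed)
    obs = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    return obs, [planner_action(o) for o in obs]

def fit_linear_policy(obs, acts):
    # Ordinary least squares for action = w * obs + b: the distilled student.
    n = len(obs)
    mo, ma = sum(obs) / n, sum(acts) / n
    cov = sum((o - mo) * (a - ma) for o, a in zip(obs, acts))
    var = sum((o - mo) ** 2 for o in obs)
    w = cov / var
    b = ma - w * mo
    return lambda o: w * o + b

obs, acts = collect_dataset()
policy = fit_linear_policy(obs, acts)  # fast feedforward "student" policy
```

At deployment, only the lightweight `policy` is queried, which is the source of the inference-speed advantage: no visual futures need to be simulated per action.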
@inproceedings{
luo2026selfimproving,
title={Self-Improving Loops for Visual Robotic Planning},
author={Calvin Luo and Zilai Zeng and Mingxi Jia and Yilun Du and Chen Sun},
booktitle={The Fourteenth International Conference on Learning Representations},
year={2026}
}