Self-Improving Loops for Visual Robotic Planning

1Brown University, 2Harvard University

Abstract

Video generative models trained on expert demonstrations have been utilized as performant text-conditioned visual planners for solving robotic tasks. However, generalization to unseen tasks remains a challenge. While improved generalization may be facilitated by leveraging prior knowledge learned from additional pre-collected offline data sources, such as web-scale video datasets, in the era of experience we aim to design agents that continuously improve online from self-collected behaviors. In this work we thus propose Self-Improving Loops for Visual Robotic Planning (SILVR), in which an in-domain video model iteratively updates itself on self-produced trajectories and steadily improves its performance on a specified task of interest. We apply SILVR to a diverse suite of MetaWorld tasks, as well as two manipulation tasks on a real robot arm, and find that performance improvements emerge continuously over multiple iterations for novel tasks unseen during initial in-domain video model training. We demonstrate that SILVR remains robust in the absence of human-provided ground-truth reward functions or expert-quality demonstrations, and that it outperforms alternative approaches that utilize online experience in both performance and sample efficiency.

SILVR Framework

SILVR has access to two pretrained video generative models: one pretrained generally on internet-scale data and another pretrained on a general set of in-domain demonstrations. SILVR utilizes these components for visual planning (such as combining them through adaptation), and improves its decision-making performance by updating the in-domain video model on its own self-collected experience. In this way, SILVR effectively combines offline data with online experience to iteratively bootstrap an in-domain video model into a strong visual planner, even for novel tasks of interest.
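The outer loop described above can be sketched as follows. Everything here (the toy video model, the rollout function, the skill-based update rule) is a simplified stand-in for illustration, not the actual SILVR implementation:

```python
import random

class ToyVideoModel:
    """Stand-in for the in-domain video generator: its 'skill' is a scalar
    success probability that improves when finetuned on kept experience."""
    def __init__(self, skill=0.2):
        self.skill = skill

    def finetune(self, trajectories):
        # Each kept trajectory nudges the model toward the task.
        self.skill = min(1.0, self.skill + 0.05 * len(trajectories))

def rollout(model, task, rng):
    """Plan with the video model and execute in the environment;
    here success probability simply tracks the model's skill."""
    return {"task": task, "success": rng.random() < model.skill}

def silvr_loop(model, task, n_iters=4, rollouts_per_iter=10, seed=0):
    """Plan -> execute -> filter -> finetune, repeated for n_iters."""
    rng = random.Random(seed)
    success_rates = []
    for _ in range(n_iters):
        trajs = [rollout(model, task, rng) for _ in range(rollouts_per_iter)]
        kept = [t for t in trajs if t["success"]]   # experience filtering
        if kept:
            model.finetune(kept)                    # self-improvement update
        success_rates.append(len(kept) / rollouts_per_iter)
    return success_rates
```

The key structural point is that the only learning signal entering the loop is the model's own filtered experience; offline data enters solely through the pretrained initialization.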

Experiments

We evaluate SILVR in two main robot settings: a real-world Franka Emika Panda robot arm and the MetaWorld-v2 simulated environment. We utilize the Panda arm for two distinct tasks, pushing a colored cup and opening a colored drawer, where generalization is evaluated over unseen colors and combinations; for MetaWorld, generalization is across novel tasks with their own visual settings. We note that all tasks visualized below are novel, in that the video model used for visual planning never accessed any demonstrations for these tasks during initial pretraining; all iterative performance gains arise from self-collected experience gathered through SILVR.

Visual Planning with SILVR

SILVR is able to iteratively improve its success rate on novel robotic tasks, specified in natural language, for which it has never seen demonstrations. This is facilitated by effective utilization of not only offline data (in-domain demonstrations on other tasks and potentially internet video datasets as well) but also online self-collected experience through the SILVR framework.
[Videos: visual plans and corresponding environment executions]

SILVR without Experience Filtering

In real-world settings, requiring feedback annotations on online experience can be expensive. We therefore explore the performance of SILVR without experience filtering on real-world robot tasks, and discover that SILVR still produces iterative improvements. This highlights how SILVR can leverage self-collected experience robustly with respect to behavior quality and can enable scalable self-improvement, as filtering often requires some level of human intervention or carefully designed heuristics, particularly in the real world.
[Videos: visual plans and corresponding environment executions]

SILVR with VLM Experience Filtering

Another approach to avoiding human intervention in SILVR is to provide trajectory feedback and filtering through large-scale VLMs. We observe that both GPT-5 and Gemini-2.5-Pro can still enable self-improving behavior across SILVR iterations when serving as a task success judge, with Gemini achieving the best performance among all VLM filters. This is an encouraging finding, as it suggests that even in settings where manual curation of experience is expensive, current state-of-the-art VLMs can be leveraged to provide useful task success evaluations and serve as a robust alternative to ground-truth signals. Below, we showcase results from using Gemini-2.5-Pro on novel MetaWorld tasks:
[Videos: visual plans and corresponding environment executions]
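One way to realize such a VLM filter is sketched below, with a stub standing in for a real VLM client; the `query_vlm` callable and its signature are assumptions for illustration, not an actual Gemini or GPT API:

```python
def make_vlm_filter(query_vlm):
    """Wrap an arbitrary VLM query function into a binary trajectory
    filter. `query_vlm(prompt, frames) -> str` is a hypothetical
    interface; a real client (Gemini, GPT) would be adapted to it."""
    def keep(frames, instruction):
        prompt = (
            f"Task: {instruction}\n"
            "Given these trajectory frames, answer exactly YES if the "
            "task was completed successfully, otherwise NO."
        )
        return query_vlm(prompt, frames).strip().upper().startswith("YES")
    return keep

# Usage with a stub judge standing in for a real VLM:
stub = lambda prompt, frames: "YES" if len(frames) > 3 else "NO"
vlm_filter = make_vlm_filter(stub)
```

Trajectories the judge marks unsuccessful are simply dropped before the finetuning update; no scalar reward is required.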

SILVR with Suboptimal Data Initialization

Collecting high-quality initial datasets can also be costly in human time and effort. We therefore investigate the performance of SILVR when only suboptimal demonstration data, heavily featuring random actions, is available, and discover that for certain tasks performance indeed increases over iterations without any initial expert demonstrations. This highlights how SILVR can potentially enable cheaper task-specific learning, as the visual planner learns primarily from its own experience rather than relying on potentially expensive expert data collection.
[Videos: visual plans and corresponding environment executions]

Long-Horizon Task Evaluation

We evaluate our visual planners on long-horizon tasks that require pushing three cups of different colors forward in a text-specified order. Without being finetuned on any long-horizon demonstrations, the self-improved visual planners robustly solve the task by consecutively executing the three atomic cup-push subtasks decomposed from the original complex instruction.
[Videos: visual plans and corresponding environment executions]
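The decomposition step can be illustrated with a minimal parser; the color vocabulary and subtask phrasing below are illustrative assumptions, not the actual SILVR instruction pipeline:

```python
import re

# Illustrative color vocabulary; the real task set may differ.
COLORS = {"red", "green", "blue", "yellow", "pink"}

def decompose_cup_order(instruction):
    """Extract the text-specified color order and emit one atomic
    cup-push subtask per color, in order of mention."""
    words = re.findall(r"[a-z]+", instruction.lower())
    return [f"push the {w} cup forward" for w in words if w in COLORS]
```

Executing the returned subtasks consecutively with the single-cup visual planner then realizes the long-horizon behavior.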

Failure Cases of Final-Iteration Visual Planners

Below we show failure cases from the final-iteration visual planners in both real-world and simulated setups. We observe that most failed attempts in real-world settings are caused by semantically incorrect visual plans (e.g., pushing the cup or opening the drawer of the wrong color). We acknowledge that large-scale video models, while powerful, have limitations in generating instruction-aligned visual plans for every task configuration, especially when constrained to a specific scene. In the simulated setup, we observe a mixture of execution and semantic errors. For example, the "plate-slide" task fails because the plate is obstructed by the net during execution, even though the visual plan depicts the correct motion, whereas for the "button-press-wall" task the video model fails to provide an optimal visual plan.
[Videos: visual plans and corresponding environment executions]

SILVR Improvement Trends and Distillation

A natural question is to what extent performance trends hold over a large number of iterations. We demonstrate quantitatively that overall performance continues to increase monotonically, but with saturation and diminishing returns starting from Iteration 5. Furthermore, as visual planning can be slow at inference time because it requires simulating visual futures, we investigate distilling the final visual planning components into a behavior cloning policy. We discover that distillation can preserve, and even slightly exceed, the final performance achieved by the visual planning components through self-improvement. Thus, SILVR supports a balance between visual planning, which is slower but superior in self-improvement capabilities, and distillation into a lightweight policy that provides fast inference along with high task performance.
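The distillation step can be sketched as plain behavior cloning on planner rollouts; here a linear ridge-regression policy stands in for the distilled network, purely for illustration:

```python
import numpy as np

def distill_bc_policy(observations, actions, reg=1e-3):
    """Fit a linear behavior-cloning policy to (observation, action)
    pairs collected from self-improved planner rollouts, via ridge
    regression. A real distilled policy would be a neural network."""
    X = np.asarray(observations, dtype=float)   # (N, d_obs)
    Y = np.asarray(actions, dtype=float)        # (N, d_act)
    W = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Y)
    return lambda obs: np.asarray(obs, dtype=float) @ W

# Usage on synthetic rollouts where the planner's action is 2 * obs:
rng = np.random.default_rng(0)
X = rng.normal(size=(128, 4))
policy = distill_bc_policy(X, 2.0 * X)
```

Once fit, the policy maps observations to actions in a single forward pass, avoiding the cost of simulating visual futures at every step.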

[Figure: SILVR improvement trends and distillation comparison]

BibTeX

@inproceedings{luo2026selfimproving,
  title={Self-Improving Loops for Visual Robotic Planning},
  author={Calvin Luo and Zilai Zeng and Mingxi Jia and Yilun Du and Chen Sun},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026}
}