V0: A Generalist Value Model for Any Policy at State Zero

Yi-Kai Zhang¹,³ Zhiyuan Yao²,³ Hongyan Hao³ Yueqing Sun³
Qi Gu³ Hui Su³ Xunliang Cai³ De-Chuan Zhan¹ Han-Jia Ye¹
¹Nanjing University ²Zhejiang University ³Meituan, China
Function: V0 uses a model's history of instruction-performance pairs to predict how it will perform on unseen instructions, without running the model itself.

Abstract

Policy gradient methods rely on a baseline to measure the relative advantage of an action, ensuring the model reinforces behaviors that outperform its current average capability. In the training of Large Language Models (LLMs) with Actor-Critic methods (e.g., PPO), this baseline is typically estimated by a Value Model (Critic), often as large as the policy model itself. However, as the policy continuously evolves, the value model requires expensive, synchronous incremental training to accurately track the policy's shifting capabilities. To avoid this overhead, Group Relative Policy Optimization (GRPO) eliminates the coupled value model by using the average reward of a group of rollouts as the baseline; yet this approach necessitates extensive sampling to keep the estimate stable. In this paper, we propose V0, a Generalist Value Model that estimates the expected performance of any model on unseen prompts without requiring parameter updates. We reframe value estimation by treating the policy's dynamic capability as an explicit context input: we leverage a history of instruction-performance pairs to dynamically profile the model, departing from the traditional paradigm that relies on parameter fitting to perceive capability shifts. Focusing on value estimation at State Zero (i.e., the initial prompt, hence V0), our model serves as a resource scheduler. During GRPO training, V0 predicts success rates prior to rollout, enabling efficient allocation of the sampling budget; during deployment, it functions as a router, dispatching each instruction to the most cost-effective and suitable model. Empirical results demonstrate that V0 significantly outperforms heuristic budget allocation and achieves a Pareto-optimal trade-off between performance and cost in LLM routing.
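To make the scheduler role concrete, the minimal Python sketch below shows how predicted success rates could drive rollout-budget allocation before GRPO sampling. The function allocate_rollouts and its variance-proportional heuristic are illustrative assumptions for exposition, not the paper's exact allocation rule.

def allocate_rollouts(pred_success, total_budget, n_min=2):
    """Split a fixed rollout budget across prompts using predicted success
    rates. Heuristic sketch: after a floor of n_min rollouts per prompt,
    spend the remainder in proportion to the Bernoulli variance p*(1-p),
    since prompts the policy always solves (p ~ 1) or always fails
    (p ~ 0) yield near-zero group-relative advantages under GRPO."""
    n = len(pred_success)
    remaining = total_budget - n_min * n
    assert remaining >= 0, "budget too small for the per-prompt floor"
    weights = [max(p * (1.0 - p), 1e-6) for p in pred_success]
    total = sum(weights)
    alloc = [n_min + int(remaining * w / total) for w in weights]
    # Redistribute rollouts lost to integer truncation, highest variance first.
    for i in sorted(range(n), key=lambda j: weights[j], reverse=True):
        if sum(alloc) >= total_budget:
            break
        alloc[i] += 1
    return alloc

# Hypothetical V0 predictions for four prompts.
print(allocate_rollouts([0.05, 0.50, 0.90, 0.99], total_budget=32))
# -> [4, 18, 8, 2]: the uncertain prompt gets the most samples,
#    while the near-certain one stays at the floor.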

Method Overview

Method Pipeline

The V0 Architecture. A semantic backbone extracts an embedding h, which the Residual Query Adapter projects into structured features using static queries Q_static and a dynamic residual ΔQ. Once the policy context C_π and the query x are obtained, both are fed into the TabPFN inference head.
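As a concrete reading of the figure, the PyTorch sketch below implements the adapter stage. The mean-pooling step, the dimensions (d_model, n_queries, d_feat), and the fit/predict interface shown for the TabPFN head are our assumptions for illustration, not the released implementation.

import torch
import torch.nn as nn

class ResidualQueryAdapter(nn.Module):
    """Sketch: learned static queries Q_static plus an input-conditioned
    residual ΔQ cross-attend to the backbone states, yielding one
    fixed-size tabular feature row per instruction."""
    def __init__(self, d_model=768, n_queries=16, d_feat=8, n_heads=8):
        super().__init__()
        self.q_static = nn.Parameter(torch.randn(n_queries, d_model))
        self.delta_q = nn.Linear(d_model, n_queries * d_model)  # produces ΔQ
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.proj = nn.Linear(d_model, d_feat)

    def forward(self, h):                      # h: (B, T, d_model) backbone states
        B, _, D = h.shape
        pooled = h.mean(dim=1)                 # (B, D) instruction summary
        dq = self.delta_q(pooled).view(B, -1, D)
        q = self.q_static.unsqueeze(0) + dq    # Q = Q_static + ΔQ
        feats, _ = self.attn(q, h, h)          # cross-attention read-out
        return self.proj(feats).flatten(1)     # (B, n_queries * d_feat) row

# Each instruction becomes one tabular row; the policy's history of
# (instruction, success-rate) pairs forms the in-context set C_π for a
# TabPFN-style head, sketched here with an sklearn-like interface:
#   head.fit(rows_history, success_history)   # C_π as context, no gradient updates
#   v0_hat = head.predict(rows_new)           # predicted value at state zero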

Citation

@article{generalist_value_model_v0,
  author       = {Yi{-}Kai Zhang and
                  Zhiyuan Yao and
                  Hongyan Hao and
                  Yueqing Sun and
                  Qi Gu and
                  Hui Su and
                  Xunliang Cai and
                  De{-}Chuan Zhan and
                  Han{-}Jia Ye},
  title        = {$V_0$: A Generalist Value Model for Any Policy at State Zero},
  journal      = {CoRR},
  volume       = {abs/2602.03584},
  year         = {2026}
}