Policy gradient methods rely on a baseline to measure the relative advantage of an action, ensuring the model reinforces behaviors that outperform its current average capability. When training Large Language Models (LLMs) with Actor-Critic methods (e.g., PPO), this baseline is typically estimated by a Value Model (Critic) that is often as large as the policy model itself. However, as the policy continuously evolves, the value model requires expensive, synchronous incremental training to accurately track the policy's shifting capabilities. To avoid this overhead, Group Relative Policy Optimization (GRPO) eliminates the coupled value model and instead uses the average reward of a group of rollouts as the baseline; this approach, however, requires extensive sampling to keep the estimate stable. In this paper, we propose V0, a Generalist Value Model that estimates the expected performance of any model on unseen prompts without requiring parameter updates. We reframe value estimation by treating the policy's dynamic capability as an explicit context input: rather than relying on parameter fitting to perceive capability shifts, as in the traditional paradigm, we profile the policy dynamically from a history of instruction-performance pairs. Focusing on value estimation at State Zero (i.e., the initial prompt, hence V0), our model serves as a resource scheduler. During GRPO training, V0 predicts success rates prior to rollout, enabling efficient allocation of the sampling budget; during deployment, it acts as a router, dispatching each instruction to the most cost-effective and suitable model. Empirical results show that V0 significantly outperforms heuristic budget allocation and achieves a Pareto-optimal trade-off between performance and cost in LLM routing tasks.
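To make the scheduling role concrete, below is a minimal Python sketch of how a V0-style success-rate prediction could steer the GRPO sampling budget before any rollout is spent. The allocation heuristic, function names, and numbers are illustrative assumptions, not the paper's actual rule; only the group-relative advantage (reward minus the group mean, scaled by the group standard deviation) follows standard GRPO practice.

import numpy as np

def allocate_rollouts(pred_success, total_budget, min_per_prompt=2):
    """Split a fixed rollout budget across prompts using V0-style predictions.

    Illustrative heuristic (an assumption, not the paper's rule): give more
    rollouts to prompts whose predicted success rate is near 0.5, where the
    group-mean baseline is noisiest, and fewer to prompts the policy almost
    always solves or always fails.
    """
    p = np.asarray(pred_success, dtype=float)
    variance = p * (1.0 - p)                      # Bernoulli variance of a 0/1 reward
    weights = variance / max(variance.sum(), 1e-8)
    extra = total_budget - min_per_prompt * len(p)
    return min_per_prompt + np.floor(weights * extra).astype(int)

def grpo_advantages(rewards):
    """Group-relative advantage: reward minus the group mean, scaled by the group std."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

# Toy usage: three prompts with predicted success rates from a V0-like model.
pred = [0.95, 0.50, 0.10]
print(allocate_rollouts(pred, total_budget=24))   # most rollouts go to the uncertain prompt
print(grpo_advantages([1, 0, 1, 1]))              # rewards from one prompt's rollout group

The variance-based weighting is only one plausible instantiation of "efficient sampling budget allocation"; the point is that predictions made before any rollout decide where rollouts are spent.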
The V0 Architecture. A Semantic Backbone extracts an embedding h, which the Residual Query Adapter projects into structured features using static queries Q_static and a dynamic residual ΔQ. The resulting policy context C_π and query x are then fed into the TabPFN inference head.
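The caption above can be read as the following forward pass. This is a hedged sketch under assumed dimensions and module choices (a cross-attention adapter with mean-pooled ΔQ conditioning); the backbone and the TabPFN inference head are passed in as stand-in callables rather than the paper's actual components.

import torch
import torch.nn as nn

class ResidualQueryAdapter(nn.Module):
    """Sketch of the adapter in the figure: static learned queries plus a dynamic
    residual correction attend over the backbone embedding to produce structured
    (tabular) features. Dimensions and attention setup are assumptions."""
    def __init__(self, d_model=768, n_queries=16, d_feat=32):
        super().__init__()
        self.q_static = nn.Parameter(torch.randn(n_queries, d_model))   # Q_static
        self.delta_q = nn.Linear(d_model, n_queries * d_model)          # produces ΔQ from h
        self.attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.proj = nn.Linear(d_model, d_feat)
        self.n_queries, self.d_model = n_queries, d_model

    def forward(self, h):                        # h: (batch, seq, d_model) backbone embedding
        b = h.size(0)
        dq = self.delta_q(h.mean(dim=1)).view(b, self.n_queries, self.d_model)
        q = self.q_static.unsqueeze(0) + dq      # Q_static + ΔQ
        feats, _ = self.attn(q, h, h)            # cross-attend queries to the embedding
        return self.proj(feats).flatten(1)       # (batch, n_queries * d_feat) tabular features

def v0_predict(backbone, adapter, tabpfn_head, ctx_instr, ctx_scores, query_instr):
    """End-to-end flow from the caption: encode instructions, build the policy
    context C_π from (instruction, performance) pairs, and let an in-context
    tabular head (e.g., TabPFN) predict the success rate of the query x."""
    with torch.no_grad():
        ctx_feats = adapter(backbone(ctx_instr))      # features for context instructions
        query_feats = adapter(backbone(query_instr))  # features for the query prompt
    # tabpfn_head is a stand-in for the TabPFN inference head: it conditions
    # in-context on (C_π features, observed scores) and predicts for the query.
    return tabpfn_head(ctx_feats, ctx_scores, query_feats)

In this reading, the policy profile C_π is just a small table of adapter features paired with observed scores, so a new or shifted policy is handled by swapping the table rather than updating any parameters, which matches the abstract's training-free framing.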
@article{generalist_value_model_v0,
  author  = {Yi{-}Kai Zhang and
             Zhiyuan Yao and
             Hongyan Hao and
             Yueqing Sun and
             Qi Gu and
             Hui Su and
             Xunliang Cai and
             De{-}Chuan Zhan and
             Han{-}Jia Ye},
  title   = {$V_0$: A Generalist Value Model for Any Policy at State Zero},
  journal = {CoRR},
  volume  = {abs/2602.03584},
  year    = {2026}
}