Romina Abachi

Alumni

Publications

Calibrated Value-Aware Model Learning with Probabilistic Environment Models
Claas Voelcker
Anastasiia Pedan
Arash Ahmadian
Igor Gilitschenski
The idea of value-aware model learning, that models should produce accurate value estimates, has gained prominence in model-based reinforcement learning. The MuZero loss, which penalizes a model's value function prediction compared to the ground-truth value function, has been utilized in several prominent empirical works in the literature. However, theoretical investigation into its strengths and weaknesses is limited. In this paper, we analyze the family of value-aware model learning losses, which includes the popular MuZero loss. We show that these losses, as normally used, are uncalibrated surrogate losses, which means that they do not always recover the correct model and value function. Building on this insight, we propose corrections to solve this issue. Furthermore, we investigate the interplay between the loss calibration, latent model architectures, and auxiliary losses that are commonly employed when training MuZero-style agents. We show that while deterministic models can be sufficient to predict accurate values, learning calibrated stochastic models is still advantageous.
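The abstract describes the value-aware family of losses only informally. As a rough illustrative sketch (not drawn from the paper itself; the symbols V for a value function, \hat{P} for the learned model, and \mathcal{D} for the transition data are chosen here for exposition), a loss of this kind compares the value assigned to the model's predicted next state with the value of the next state actually observed:

\[
\mathcal{L}_{\text{value-aware}}(\hat{P}) \;=\; \mathbb{E}_{(s,a,s') \sim \mathcal{D}} \left[ \left( \mathbb{E}_{\hat{s}' \sim \hat{P}(\cdot \mid s,a)} \big[ V(\hat{s}') \big] - V(s') \right)^{2} \right].
\]

The paper's calibration analysis asks whether minimizing surrogates of this form actually recovers the correct model and value function.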