Thanks for your great work!
I am wondering whether the confidence-based reward could cause models to prematurely narrow their exploration,
perhaps resulting in confident but short answers in some domains.
Have you explored methods to mitigate this exploration/exploitation trade-off and encourage deeper reasoning?