
Question about the DDPG algorithm in the book #146

Open
yxz777 opened this issue Nov 7, 2023 · 1 comment

Comments

@yxz777

yxz777 commented Nov 7, 2023

When running the book's reference DDPG code, I observed that as training progresses, the actor loss keeps rising, and the critic loss also fluctuates erratically.
Isn't the actor loss defined as the negative Q-value? If that value keeps growing, doesn't that mean the Q-value of the chosen actions keeps shrinking? Isn't that the opposite of our goal of maximizing the action's Q-value?
So in the end, is the way to judge whether this algorithm works simply to check whether the reward is rising?
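For reference, here is a minimal sketch of the two losses in question, assuming a standard PyTorch-style DDPG update (the function and network names are illustrative, not the book's exact code):

```python
import torch
import torch.nn.functional as F

# Illustrative sketch of the two DDPG losses being discussed
# (generic pseudonames, not the book's exact code).
def ddpg_losses(actor, critic, target_actor, target_critic,
                states, actions, rewards, next_states, dones, gamma=0.99):
    # Critic loss: MSE between Q(s, a) and the bootstrapped TD target.
    with torch.no_grad():
        next_q = target_critic(next_states, target_actor(next_states))
        td_target = rewards + gamma * (1.0 - dones) * next_q
    critic_loss = F.mse_loss(critic(states, actions), td_target)

    # Actor loss: -mean Q(s, pi(s)). Minimizing it pushes the policy toward
    # actions the *current* critic scores highly; its absolute value is tied
    # to the critic's (changing) scale, so it can drift upward even while
    # the policy improves.
    actor_loss = -critic(states, actor(states)).mean()
    return actor_loss, critic_loss
```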

@severus98

Look at the reward. The absolute value of the actor's loss probably doesn't mean much: that loss is just the critic's output, and the critic network itself keeps changing, so the Q-value estimates may have large errors at first and only get corrected gradually. Overall, training in reinforcement learning is quite different from other deep learning tasks; the final criterion is whether the reward converges and stabilizes.
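In that spirit, a simple way to track progress is a moving average of episode returns rather than the raw losses (a hypothetical helper, not from the book's code):

```python
from collections import deque
import numpy as np

# Illustrative helper: judge training by the trend of episode returns,
# not by the raw actor/critic losses.
recent_returns = deque(maxlen=100)  # sliding window of recent episode returns

def log_episode(episode_return):
    recent_returns.append(episode_return)
    avg = np.mean(recent_returns)
    # A rising, then plateauing moving average suggests the policy is
    # improving / has converged, even if actor_loss keeps drifting.
    print(f"return={episode_return:.1f}  avg100={avg:.1f}")
```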
