This article introduces log_prob (a custom implementation used in RL). It should be a useful reference for anyone facing a similar programming problem, so let's walk through it together!
def log_prob(self, value, pre_tanh_value=None):
    """
    :param value: an action a = tanh(u), where u is drawn from the base Normal
    :param pre_tanh_value: the pre-squash value u = arctanh(a), if already known
    :return: log-probability of `value` under the tanh-squashed Normal
    """
    if pre_tanh_value is None:
        pre_tanh_value = self.atanh(value)
    # change of variables for a = tanh(u): log p(a) = log N(u) - log(1 - a^2)
    return self.normal.log_prob(pre_tanh_value) - torch.log(
        1 - value * value + self.epsilon
    )
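For context, here is a minimal sketch of the tanh-squashed Normal class that a log_prob like this typically belongs to. The change of variables a = tanh(u), with u ~ Normal(mean, std), gives log p(a) = log N(u) - log(1 - a^2), and self.epsilon keeps the log finite when |a| is close to 1. The class body below is an assumption for illustration only, not the original offlinerl/neorl code; just the log_prob formula and the return_pretanh_value convention are taken from the snippets in this post.

import torch
from torch.distributions import Normal

class TanhNormal:
    """Distribution of a = tanh(u), where u ~ Normal(mean, std). (Illustrative sketch.)"""
    def __init__(self, mean, std, epsilon=1e-6):
        self.normal = Normal(mean, std)   # base Gaussian over the pre-tanh value u
        self.epsilon = epsilon            # keeps log(1 - a^2) finite near |a| = 1

    @staticmethod
    def atanh(x):
        # inverse of tanh, clamped away from +/-1 for numerical safety
        x = x.clamp(-1.0 + 1e-6, 1.0 - 1e-6)
        return 0.5 * torch.log((1 + x) / (1 - x))

    def log_prob(self, value, pre_tanh_value=None):
        # change of variables: a = tanh(u)  =>  log p(a) = log N(u) - log(1 - a^2)
        if pre_tanh_value is None:
            pre_tanh_value = self.atanh(value)
        return self.normal.log_prob(pre_tanh_value) - torch.log(
            1 - value * value + self.epsilon
        )

    def rsample(self, return_pretanh_value=False):
        # reparameterized sample: gradients flow back through mean and std
        u = self.normal.rsample()
        a = torch.tanh(u)
        return (a, u) if return_pretanh_value else a

    def sample(self, return_pretanh_value=False):
        # plain sample: no gradient through the sampling noise
        u = self.normal.sample()
        a = torch.tanh(u)
        return (a, u) if return_pretanh_value else a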
###################################################################
def forward(self, obs, reparameterize=True, return_log_prob=True):
    log_prob = None
    tanh_normal = self.actor(obs, reparameterize=reparameterize)
    if return_log_prob:
        if reparameterize is True:
            action, pre_tanh_value = tanh_normal.rsample(
                return_pretanh_value=True
            )
        else:
            action, pre_tanh_value = tanh_normal.sample(
                return_pretanh_value=True
            )
        log_prob = tanh_normal.log_prob(
            action,
            pre_tanh_value=pre_tanh_value
        )
        # sum the per-dimension log-probs into one log-probability per action
        log_prob = log_prob.sum(dim=1, keepdim=True)
    else:
        if reparameterize is True:
            action = tanh_normal.rsample()
        else:
            action = tanh_normal.sample()
    return action, log_prob
Source: offlinerl/neorl
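As a usage example, the sketch below wires forward()-style sampling to the TanhNormal sketch above: an actor network produces mean and std, the distribution is sampled with rsample (so gradients can flow, as when reparameterize=True), and the per-dimension log-probabilities are summed exactly as in forward() above. The two-layer network, its sizes, and the variable names are illustrative assumptions, not the actual offlinerl/neorl architecture.

import torch
import torch.nn as nn

class GaussianActor(nn.Module):
    """Toy actor head mapping observations to a TanhNormal over actions (assumed architecture)."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, act_dim)
        self.log_std_head = nn.Linear(hidden, act_dim)

    def forward(self, obs):
        h = self.body(obs)
        std = self.log_std_head(h).clamp(-20, 2).exp()  # keep std in a sane range
        return TanhNormal(self.mean_head(h), std)       # TanhNormal from the sketch above

actor = GaussianActor(obs_dim=17, act_dim=6)
obs = torch.randn(32, 17)                               # a batch of 32 observations
dist = actor(obs)
action, pre_tanh = dist.rsample(return_pretanh_value=True)
log_prob = dist.log_prob(action, pre_tanh_value=pre_tanh).sum(dim=1, keepdim=True)
print(action.shape, log_prob.shape)                     # torch.Size([32, 6]) torch.Size([32, 1])

In an SAC-style actor update, this summed log_prob is the term that gets scaled by the temperature coefficient in the policy loss.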
That wraps up this article on log_prob (custom used in RL). We hope the articles we recommend are helpful to you, and we appreciate your continued support of 为之网!