Commit 365191cf authored by Frisinghelli Daniel

Use sigmoid activation for p; optimize logarithms of scale, shape.

parent 71eb0978
@@ -70,11 +70,11 @@ class BernoulliGammaLoss(NaNLoss):
         # parameters: ensure numerical stability
         # clip probabilities to (0, 1)
-        p_pred = F.sigmoid(y_pred[:, 0, ...])
+        p_pred = torch.sigmoid(y_pred[:, 0, ...])
         # clip shape and scale to (0, +infinity)
-        gshape = torch.exp(y_pred[:, 1, ...].clamp(min=self.epsilon))
-        gscale = torch.exp(y_pred[:, 2, ...].clamp(min=self.epsilon))
+        gshape = torch.exp(y_pred[:, 1, ...])
+        gscale = torch.exp(y_pred[:, 2, ...])
         # negative log-likelihood function of Bernoulli-Gamma distribution
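A minimal sketch of what the changed lines do, assuming `y_pred` is a network output of shape `(batch, 3, height, width)` with one channel per distribution parameter (the function name `bernoulli_gamma_params` is illustrative, not part of the repository): channel 0 is squashed to a probability with `torch.sigmoid`, while channels 1 and 2 are treated as the logarithms of the Gamma shape and scale, so a plain `torch.exp` maps them into (0, +infinity) and the previous `clamp(min=self.epsilon)` is no longer needed.

```python
import torch

def bernoulli_gamma_params(y_pred):
    """Map raw network outputs to Bernoulli-Gamma parameters.

    Channel 0: precipitation probability via sigmoid, in (0, 1).
    Channels 1-2: interpreted as log-shape and log-scale; exponentiating
    yields strictly positive parameters without any clamping.
    """
    p_pred = torch.sigmoid(y_pred[:, 0, ...])
    gshape = torch.exp(y_pred[:, 1, ...])
    gscale = torch.exp(y_pred[:, 2, ...])
    return p_pred, gshape, gscale

# tiny example: batch of 2, 3 parameter channels, 4x4 grid
y = torch.randn(2, 3, 4, 4)
p, shape, scale = bernoulli_gamma_params(y)
print(((p > 0) & (p < 1)).all().item())                    # True
print((shape > 0).all().item(), (scale > 0).all().item())  # True True
```

Optimizing the logarithms this way keeps the loss differentiable everywhere, whereas clamping zeroes out gradients for inputs below `self.epsilon`.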