fix rixwew#21: add the training argument to the F.dropout calls
rixwew committed Jul 27, 2020
1 parent acc6997, commit 8ac848d

torchfm/layer.py: 2 additions and 2 deletions
@@ -194,9 +194,9 @@ def forward(self, x):
         inner_product = p * q
         attn_scores = F.relu(self.attention(inner_product))
         attn_scores = F.softmax(self.projection(attn_scores), dim=1)
-        attn_scores = F.dropout(attn_scores, p=self.dropouts[0])
+        attn_scores = F.dropout(attn_scores, p=self.dropouts[0], training=self.training)
         attn_output = torch.sum(attn_scores * inner_product, dim=1)
-        attn_output = F.dropout(attn_output, p=self.dropouts[1])
+        attn_output = F.dropout(attn_output, p=self.dropouts[1], training=self.training)
         return self.fc(attn_output)


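Why the fix matters: torch.nn.functional.dropout has the signature F.dropout(input, p=0.5, training=True, inplace=False), so its training argument defaults to True. Without forwarding the module's own self.training flag, the functional call keeps zeroing (and rescaling) activations even after model.eval(), corrupting inference outputs. Below is a minimal sketch of the bug and the fix; the module names WithoutFlag and WithFlag are hypothetical, not part of torchfm.

import torch
import torch.nn.functional as F

class WithoutFlag(torch.nn.Module):
    # Bug pattern: F.dropout's training argument defaults to True,
    # so dropout stays active even when the module is in eval mode.
    def forward(self, x):
        return F.dropout(x, p=0.5)

class WithFlag(torch.nn.Module):
    # Fixed pattern: forward the module's own training flag so that
    # model.eval() actually disables dropout.
    def forward(self, x):
        return F.dropout(x, p=0.5, training=self.training)

x = torch.ones(2, 4)
for model in (WithoutFlag(), WithFlag()):
    model.eval()
    print(type(model).__name__, model(x))
# WithoutFlag still zeroes roughly half the entries and scales the
# survivors by 1/(1 - p); WithFlag returns x unchanged in eval mode.

An alternative fix is to hold an nn.Dropout submodule, which tracks train/eval state automatically; this commit keeps the functional calls and forwards the flag instead.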
