This repository has been archived by the owner on Nov 1, 2021. It is now read-only.

Autograd UPDATES weight and bias that DON'T CONTRIBUTE to the output of the module! #151

Open
dlmacedo opened this issue Aug 28, 2016 · 1 comment

Comments

@dlmacedo

Dear Friends,

I am using the following code:

function units.agSReLU(input, weight, bias)
  -- weight and bias are multiplied by zero, so they do not affect the output
  local y = 0 * weight + 0 * bias
  -- standard ReLU: 0.5 * (|x| + x) == max(x, 0)
  local output = torch.mul(torch.abs(input) + input, 0.5)
  return output
end

I am calling the above function from:

local autogradFunc = autograd.nn.AutoModule('AutoGradSReLU')
    (units.agSReLU, initialWeight:clone(), initialBias:clone())
model:add(autogradFunc)

And autograd is UPDATING the weight and bias!

Could anybody please explain to me what is going on?

David

@fmassa

fmassa commented Oct 23, 2016

This is probably due to the optimizer settings you are using. If you use `weightDecay` (also known as L2 regularization) in SGD, it adds a contribution to the gradients that is a function of the weights, so even parameters that don't contribute to the output still get updated.
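
To illustrate (a minimal sketch, assuming the standard torch and optim packages; the learningRate and weightDecay values here are made up): with weightDecay > 0, optim.sgd adds weightDecay * w to the gradient before applying the update, so a parameter whose gradient is identically zero still changes on every step.

require 'torch'
require 'optim'

-- a parameter that contributes nothing to the loss
local w = torch.ones(3)

-- loss and gradient are both identically zero
local feval = function(x)
  return 0, torch.zeros(3)
end

local config = { learningRate = 0.1, weightDecay = 0.01 }
optim.sgd(feval, w, config)

-- the effective gradient was 0 + weightDecay * w, so each entry of w
-- becomes 1 - 0.1 * (0.01 * 1) = 0.999
print(w)

Setting weightDecay = 0 (or excluding these parameters from regularization) should stop the updates.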
