for i in range(train_iter):
    out = model(*inputs)
    loss = criterion(out, h_train)  # loss between prediction and ground truth
    optimizerADAM.zero_grad()
    loss.backward()
    optimizerADAM.step()
    model.apply(constraints)  # keep weights positive after gradient descent
    loss_values.append(loss.item())
I would guess that you have something in your constraints function that is causing it to print. That’s the most likely explanation to me at least; everything else in the loop is pretty standard.
For example, this will print on apply:

import torch
import torch.nn as nn

@torch.no_grad()
def init_weights(m):
    print(m)
    if type(m) == nn.Linear:
        m.weight.fill_(1.0)
        # print(m.weight)

net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
net.apply(init_weights)
which is similar to what you are seeing. You would just need to get rid of the print statement and you should be good. Of course, if anything in those other methods has been modified, they could be the culprit as well.
It’s not the apply function itself; it’s probably the constraints function you are passing into model.apply(), but without seeing it I can’t know for sure. In my code above, the init_weights function is passed into apply; you should have a similar function called constraints that is being passed into apply. You can find more information on how this works at https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.apply
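To make that concrete, here is a hypothetical constraints function with a stray print, just to illustrate the mechanism: apply() calls the function on every submodule and then on the module itself, so a single print line fires once per module. (The function name and clamping behavior here are assumptions for the sketch, not your actual code.)

```python
import torch.nn as nn

# Hypothetical constraints function: the stray print below is the kind
# of line that would produce output every time model.apply() runs.
def constraints(module):
    print(module)  # apply() invokes this on each submodule, then the container
    if hasattr(module, "weight"):
        module.weight.data.clamp_(min=0.0)  # e.g. keep weights non-negative

net = nn.Sequential(nn.Linear(4, 3), nn.ReLU(), nn.Linear(3, 1))
net.apply(constraints)  # prints each Linear/ReLU, then the Sequential itself
```

Deleting the print leaves the clamping behavior intact and silences the output.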
This is my constraints func, simply a weight clipper:
from math import inf

class WeightClipper(object):
    def __init__(self, lower=-inf, upper=inf):
        self.lower = lower
        self.upper = upper

    def __call__(self, module):
        # filter the variables to get the ones you want
        if hasattr(module, "weight"):
            w = module.weight.data
            w = w.clamp(min=self.lower, max=self.upper)
            module.weight.data = w
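For reference, here is a self-contained sketch of how this clipper would be used as the constraints in the training loop above (the class is repeated so the snippet runs on its own; the model shape is made up). Note that it contains no print, so it shouldn’t be the source of the output:

```python
import torch.nn as nn
from math import inf

# Repeating the clipper from above so this snippet is standalone.
class WeightClipper(object):
    def __init__(self, lower=-inf, upper=inf):
        self.lower = lower
        self.upper = upper

    def __call__(self, module):
        if hasattr(module, "weight"):
            w = module.weight.data
            module.weight.data = w.clamp(min=self.lower, max=self.upper)

# Keep weights non-negative, matching model.apply(constraints) in the loop.
constraints = WeightClipper(lower=0.0)
model = nn.Sequential(nn.Linear(8, 4), nn.Linear(4, 1))
model.apply(constraints)  # clamps weights in place; produces no output
```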
If it’s not in there, then I’m not sure where it’s coming from. You can troubleshoot by commenting out all the lines and then uncommenting them one at a time to see which one causes the output.