Network architecture keeps getting printed during training

I’m using a Jupyter notebook, and the model architecture keeps getting printed during training. How can I stop it?

example output:

(0): FrNMFLayer(
  (fc1): Linear(in_features=39, out_features=39, bias=False)
  (fc2): Linear(in_features=96, out_features=39, bias=False)
)
(1): FrNMFLayer(
  (fc1): Linear(in_features=39, out_features=39, bias=False)
  (fc2): Linear(in_features=96, out_features=39, bias=False)
)
(2): FrNMFLayer(
  (fc1): Linear(in_features=39, out_features=39, bias=False)
  (fc2): Linear(in_features=96, out_features=39, bias=False)
)
(3): FrNMFLayer(
  (fc1): Linear(in_features=39, out_features=39, bias=False)
  (fc2): Linear(in_features=96, out_features=39, bias=False)
)
(4): FrNMFLayer(
  (fc1): Linear(in_features=39, out_features=39, bias=False)
  (fc2): Linear(in_features=96, out_features=39, bias=False)
)
(5): FrNMFLayer(
  (fc1): Linear(in_features=39, out_features=39, bias=False)
  (fc2): Linear(in_features=96, out_features=39, bias=False)

and so on …

Can you please post your training loop?

for i in range(train_iter):
    out = model(*inputs)
    loss = criterion(out, h_train)  # loss between prediction and ground truth

    optimizerADAM.zero_grad()
    loss.backward()
    optimizerADAM.step()

    model.apply(constraints)  # keep weights positive after gradient descent
    loss_values.append(loss.item())

I would guess that something in your constraints function is causing it to print. That seems the most likely cause to me, at least; everything else is pretty standard.
For example, this will print out on apply:

import torch
import torch.nn as nn

@torch.no_grad()
def init_weights(m):
    print(m)
    if type(m) == nn.Linear:
        m.weight.fill_(1.0)
        # print(m.weight)

net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
n = net.apply(init_weights)

will output

Linear(in_features=2, out_features=2, bias=True)
Linear(in_features=2, out_features=2, bias=True)
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)

which is similar to what you are seeing. You would just need to get rid of the print statement and you should be good. Of course, if anything in those other methods has been modified, they could be the culprit as well.
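
For reference, here is the same function with the print removed, so apply runs silently (the assignment at the end also keeps Jupyter from auto-displaying the returned module):

import torch
import torch.nn as nn

@torch.no_grad()
def init_weights(m):
    # no print here, so apply produces no output
    if type(m) == nn.Linear:
        m.weight.fill_(1.0)

net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
n = net.apply(init_weights)  # nothing is printed now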


OK, so the apply function is what causes the printing. How can this be turned off?

It’s not the apply function itself; it’s probably the constraints function which you are passing into model.apply(), but without seeing it I can’t know for sure. For example, in my code above the init_weights function is being passed into apply; you should have a similar function called constraints which is getting passed into apply. You can find more information on how this works at https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.apply
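
To illustrate those semantics with a minimal sketch (the function and variable names here are made up for the example): apply calls the given function once on every submodule, recursively, and then returns the module it was called on.

import torch.nn as nn

def count_linear(m):
    # called once per submodule, including the top-level container
    if isinstance(m, nn.Linear):
        count_linear.n += 1

count_linear.n = 0
net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
returned = net.apply(count_linear)
print(count_linear.n)   # 2 -- both Linear layers were visited
print(returned is net)  # True -- apply returns the module itself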

This is my constraints function, simply a weight clipper:

from math import inf

class WeightClipper(object):
    def __init__(self, lower=-inf, upper=inf):
        self.lower = lower
        self.upper = upper

    def __call__(self, module):
        # filter the variables to get the ones you want
        if hasattr(module, "weight"):
            w = module.weight.data
            w = w.clamp(min=self.lower, max=self.upper)
            module.weight.data = w
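
For completeness, a minimal sketch of how the clipper would be wired into the loop above (the instantiation line is an assumption, since the thread doesn’t show it; lower=0 matches the goal of keeping the weights positive):

import torch.nn as nn

# assumed instantiation -- not shown in the thread
constraints = WeightClipper(lower=0.0)  # clamp weights to be non-negative

# stand-in model for illustration; __call__ contains no print,
# so applying it is silent
model = nn.Sequential(nn.Linear(96, 39, bias=False), nn.Linear(39, 39, bias=False))
model.apply(constraints)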

If it’s not in there, then I’m not sure where it’s coming from. You can troubleshoot it by commenting out all the lines and then uncommenting them one at a time to see which one causes it.
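
Another way to track down a stray print (a general Python debugging trick, not anything PyTorch-specific): temporarily wrap the built-in print so that every call also dumps a stack trace showing where it was made.

import builtins
import traceback

_original_print = builtins.print

def tracing_print(*args, **kwargs):
    _original_print(*args, **kwargs)
    traceback.print_stack()  # shows the call site of this print

builtins.print = tracing_print
# ... run a single training iteration here and inspect the trace ...
builtins.print = _original_print  # restore normal printing afterwards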