Different User Warnings

I am running some code and I get these warnings:

/utils.py:269: UserWarning: nn.init.kaiming_normal is now deprecated in favor of nn.init.kaiming_normal_.
  init.kaiming_normal(m.weight.data, a=0, mode='fan_in')
/utils.py:277: UserWarning: nn.init.constant is now deprecated in favor of nn.init.constant_.
  init.constant(m.bias.data, 0.0)
/utils.py:265: UserWarning: nn.init.normal is now deprecated in favor of nn.init.normal_.
  init.normal(m.weight.data, 0.0, 0.02)

Will anything change if I don’t switch to the new function names?

Second question:
UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.

If I don’t change this either, will it have any effect?

I am asking because I ignored these warnings before, and my network already finished training with some results. I only just noticed the warnings, so my question is: if I change these things and run my network again, will the results differ?

If you change those to the new ones, the results shouldn’t change.

The functions in the first set of warnings still work, but they won’t in future releases, so you’d better use their in-place equivalents (the ones with the trailing underscore).
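
Based on the lines in your warnings, the fixes in utils.py would look like this (the surrounding weights_init function and its class checks are my guess at a typical DCGAN-style init routine, not your actual code; only the three init calls come from your warnings):

from torch.nn import init

def weights_init(m):
    # hypothetical wrapper; only the three init calls below come from the warnings
    classname = m.__class__.__name__
    if classname.find('Conv') != -1:
        init.normal_(m.weight.data, 0.0, 0.02)                   # was init.normal
    elif classname.find('Linear') != -1:
        init.kaiming_normal_(m.weight.data, a=0, mode='fan_in')  # was init.kaiming_normal
        if m.bias is not None:
            init.constant_(m.bias.data, 0.0)                     # was init.constant

The underscore versions modify the tensor in place and return it, which is what the old functions already did, so the numerical behavior is identical.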

The second warning about volatile doesn’t affect the training phase. But in the validation and testing phases, the computational graphs are still being built and stored, because volatile doesn’t do anything anymore. If you use torch.no_grad() there, your model will use less memory in those phases.
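
For example, an evaluation loop could be written like this (a minimal sketch; model, val_loader, and criterion are placeholder names, not from your repo):

import torch

def evaluate(model, val_loader, criterion):
    model.eval()                 # also switches dropout/batch-norm to eval behavior
    total_loss = 0.0
    with torch.no_grad():        # no graph is built inside this block
        for inputs, targets in val_loader:
            outputs = model(inputs)
            total_loss += criterion(outputs, targets).item()
    return total_loss / len(val_loader)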

For the second warning, this was an example:

train_display_images_a = Variable(torch.stack([train_loader_a.dataset[i] for i in range(display_size)]).cuda(), volatile=True)
train_display_images_b = Variable(torch.stack([train_loader_b.dataset[i] for i in range(display_size)]).cuda(), volatile=True)
test_display_images_a = Variable(torch.stack([test_loader_a.dataset[i] for i in range(display_size)]).cuda(), volatile=True)
test_display_images_b = Variable(torch.stack([test_loader_b.dataset[i] for i in range(display_size)]).cuda(), volatile=True)

I changed it to:

with torch.no_grad():
    train_display_images_a = Variable(torch.stack([train_loader_a.dataset[i] for i in range(display_size)]).cuda())
    train_display_images_b = Variable(torch.stack([train_loader_b.dataset[i] for i in range(display_size)]).cuda())
    test_display_images_a = Variable(torch.stack([test_loader_a.dataset[i] for i in range(display_size)]).cuda())
    test_display_images_b = Variable(torch.stack([test_loader_b.dataset[i] for i in range(display_size)]).cuda())

However, I don’t know what to do here:

        x_a.volatile = True
        x_b.volatile = True

How do I use torch.no_grad() here?

When you use the with torch.no_grad(): statement, everything computed inside it will not require gradients, so you just don’t have to use volatile at all.
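
So in your case you can simply delete the two assignments and wrap the code that consumes x_a and x_b instead (the forward calls below are hypothetical stand-ins for whatever the repo actually computes from those tensors):

# delete these two lines:
#     x_a.volatile = True
#     x_b.volatile = True
# and wrap the computation that used them:
with torch.no_grad():
    out_a = gen_a(x_a)   # hypothetical forward passes; substitute
    out_b = gen_b(x_b)   # whatever the repo does with x_a and x_b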

There is a bunch of old stuff in that code that you should check. For example, you should not use Variable at all, and I think the .cuda() calls should be replaced by .to(device) calls.
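
For instance, the first display-image line from your snippet could be modernized like this (a sketch reusing your names; the device selection is the standard 0.4.0 idiom):

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

with torch.no_grad():
    # Variable() has been a no-op wrapper since 0.4, so drop it;
    # .to(device) replaces .cuda() and also works on CPU-only machines
    train_display_images_a = torch.stack(
        [train_loader_a.dataset[i] for i in range(display_size)]).to(device)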

Check out the migration guide for PyTorch 0.4.0.

Yes, because actually this is not my code, it is a repo cloned from GitHub.
But if I didn’t edit or replace anything in the code, there should still be no problem, right?