I am implementing a very basic autoencoder in PyTorch.
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim as optim
from torch.autograd import Variable
BATCH_SIZE = 16
criterion_mse = nn.MSELoss().cuda()
x = Variable(torch.FloatTensor(BATCH_SIZE, 10)).cuda()
l = nn.Linear(10, 10).cuda()
y = l(x)
loss = criterion_mse(x, y)
But this code raises the following error:
AssertionError Traceback (most recent call last)
<ipython-input-2-386981b1292e> in <module>()
14 l = nn.Linear( 10 , 10 ).cuda()
15 y = l(x)
---> 16 loss = criterion_mse( x , y )
/usr/local/lib/python2.7/dist-packages/torch/nn/modules/loss.pyc in _assert_no_grad(variable)
9 def _assert_no_grad(variable):
10 assert not variable.requires_grad, \
---> 11 "nn criterions don't compute the gradient w.r.t. targets - please " \
12 "mark these variables as volatile or not requiring gradients"
13
AssertionError: nn criterions don't compute the gradient w.r.t. targets - please mark these variables as volatile or not requiring gradients
The equivalent code works fine in TensorFlow.
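For reference, the variant below does run without tripping that assertion on my machine. It is only a sketch of what I think the fix looks like, assuming the intent is to compare the reconstruction y against the input x: the network output goes in as the prediction (first argument) and the input as the target (second argument), with the target detached so the criterion is never asked to backpropagate through it. I also replaced the uninitialized torch.FloatTensor with torch.randn and dropped the .cuda() calls so it runs on CPU — both of those are my own changes, not part of the original code.

```python
import torch
import torch.nn as nn

BATCH_SIZE = 16

criterion_mse = nn.MSELoss()  # add .cuda() / .to(device) if a GPU is available

# random input instead of uninitialized memory (my change, for reproducibility)
x = torch.randn(BATCH_SIZE, 10)

l = nn.Linear(10, 10)
y = l(x)

# prediction first, target second; detach() marks the target as not
# requiring gradients, which is what the assertion message asks for
loss = criterion_mse(y, x.detach())
loss.backward()
```

With the arguments in this order the gradient flows through y into the Linear layer's parameters, which is what an autoencoder's training step needs.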