Runtime error when converting PyTorch code to C++ (libtorch)

Hi, all

I ran into a runtime error while converting Python (PyTorch) code to C++ with libtorch.
The error is:

“terminate called after throwing an instance of ‘c10::Error’
what(): Expected a Tensor of type Variable but found an undefined Tensor for argument #0 ‘self’
Exception raised from checked_cast_variable at …/torch/csrc/autograd/VariableTypeManual.cpp:39 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x58 (0x7f120c808268 in /sas3rd/conan/dev/mva-vbviya/LAXNO/pytorch/lib/libc10.so)
frame #1: + 0x2e6ad19 (0x7f1207d6cd19 in /sas3rd/conan/dev/mva-vbviya/LAXNO/pytorch/lib/libtorch_cpu.so)
frame #2: + 0x2603b6a (0x7f1207505b6a in /sas3rd/conan/dev/mva-vbviya/LAXNO/pytorch/lib/libtorch_cpu.so)
frame #3: + 0x80927e (0x7f120570b27e in /sas3rd/conan/dev/mva-vbviya/LAXNO/pytorch/lib/libtorch_cpu.so)
frame #4: + 0x11c0b4c (0x7f12060c2b4c in /sas3rd/conan/dev/mva-vbviya/LAXNO/pytorch/lib/libtorch_cpu.so)
frame #5: at::mean(at::Tensor const&, c10::ArrayRef<long>, bool, c10::optional<c10::ScalarType>) + 0xf2 (0x7f1205fe3912 in /sas3rd/conan/dev/mva-vbviya/LAXNO/pytorch/lib/libtorch_cpu.so)”

It happens at the line:
auto mom_gen = discriminator->forward(fake, true, cuda)[0];
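
From the message, an undefined (i.e. default-constructed) torch::Tensor seems to reach at::mean somewhere downstream of this call. As a quick sanity check (just a sketch reusing the names above), torch::Tensor::defined() can be called on the returned values; it is false for an undefined tensor:

    auto outs = discriminator->forward(fake, true, cuda);
    for (size_t i = 0; i < outs.size(); ++i)
        std::cout << "output " << i << " defined: " << outs[i].defined() << std::endl;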

The C++ code for the discriminator is below. It works fine when the feature option is set to false, i.e. when it returns only one output (the working call is shown right after the code):
std::vector<torch::Tensor> forward(torch::Tensor x1, bool feature = false, bool cuda = false) {
    torch::Tensor x = x1.view({-1, input_dim});
    torch::Tensor noise = torch::randn({x.size(0), x.size(1)}) * 0.3;
    if (cuda)
        noise = noise.cuda();
    torch::Tensor x_f;
    // x = x + torch::autograd::Variable(noise); // get rid of Variable
    x = x + noise;
    x.options().requires_grad(false); // some more work to do here
    for (int64_t i = 0; i < 5; i++)
    {
        torch::nn::Module m;
        auto a = layers[0 + i]->as<LinearWeightNorm>()->forward(x);
        torch::Tensor x_f = torch::nn::functional::relu(a);
        noise = torch::randn({x_f.size(0), x_f.size(1)}) * 0.5;
        if (cuda)
            noise = noise.cuda();

        x = x_f + torch::autograd::Variable(noise);
        x.options().requires_grad(false);
    }

    if (feature) {
        return {x_f, final->as<LinearWeightNorm>()->forward(x)};
    } else {
        return {final->as<LinearWeightNorm>()->forward(x)};
    }
}

};
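
For reference, the call that does work (same names as in the failing line above) only asks for the single output:

    auto out = discriminator->forward(fake, false, cuda)[0]; // feature = false, runs without error

Only the feature = true variant quoted earlier fails.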

And this is the original Python code for the discriminator:

def forward(self, x, feature = False, cuda = False):
    x = x.view(-1, self.input_dim)
    noise = torch.randn(x.size()) * 0.3 if self.training else torch.Tensor([0])
    if cuda:
        noise = noise.cuda()
    x = x + Variable(noise, requires_grad = False)
    for i in range(len(self.layers)):
        m = self.layers[i]
        x_f = F.relu(m(x))
        noise = torch.randn(x_f.size()) * 0.5 if self.training else torch.Tensor([0])
        if cuda:
            noise = noise.cuda()
        x = (x_f + Variable(noise, requires_grad = False))
    if feature:
        return x_f, self.final(x)
    return self.final(x)

Hi, it seems to me that your sample code is pretty old; it still uses Variable. I believe you should use the Tensor class directly instead of Variable. Can you try that?
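
For example, just as a minimal sketch based on the snippet above: in the C++ frontend a torch::Tensor already carries autograd state, so the Variable wrapper can simply be dropped. torch::randn returns a tensor with requires_grad set to false by default, which already matches Variable(noise, requires_grad = False) on the Python side, and noise.detach() can be used to make that explicit:

    // instead of: x = x_f + torch::autograd::Variable(noise);
    noise = torch::randn({x_f.size(0), x_f.size(1)}) * 0.5; // requires_grad is false by default
    if (cuda)
        noise = noise.cuda();
    x = x_f + noise; // gradients still flow through x_f; noise is treated as a constant

Note also that x.options().requires_grad(false) by itself has no effect on x, because options() returns a copy of the tensor's options rather than modifying the tensor.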

Thanks a lot for the comments.

Much appreciated.