Train on GPU, test on CPU

How can I train my model on the GPU, but then test it on the CPU?

net.eval()
net.cuda()
X = Variable(torch.from_numpy(x).float()).cuda()
Y = Variable(torch.from_numpy(y).long()).cuda()

If I remove .cuda() I get an error: input is not contiguous.

I have also tried the following, and I still get the same error:

# net_new has the same architecture as net
net_new.load_state_dict(net.state_dict())
net_new.eval()
X = Variable(torch.from_numpy(x).float())
Y = Variable(torch.from_numpy(y).long())

Your problem is not related to testing on the CPU, but to the input not being contiguous. net.cpu() should be sufficient to pull the network onto the CPU. There are two possibilities:

  1. Your X or Y is not contiguous, yet the first operation of your net expects it to be. .cuda() makes a contiguous CUDA tensor and copies the data from the CPU, so this was fine during training (a quick way to check the layout is sketched after this list). Try using
X = Variable(torch.from_numpy(x).float().contiguous())
Y = Variable(torch.from_numpy(y).long().contiguous())
  2. Some CPU kernel requires a tensor to be contiguous while the corresponding GPU kernel does not. If this is the case, then it is a bug and you should file an issue.
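
For the first case, you can check the layout directly with .is_contiguous(). A small, self-contained illustration of how a tensor becomes non-contiguous and how .contiguous() fixes it (the shape here is arbitrary):

import torch

a = torch.randn(4, 8)
b = a.t()                  # transpose returns a view, so it is no longer contiguous
print(b.is_contiguous())   # False
c = b.contiguous()         # makes a contiguous copy with the same values
print(c.is_contiguous())   # True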

It works using .contiguous()! Thank you.

My solution now looks like this:

net.cpu()
net.eval()
X = Variable(torch.from_numpy(x).float().contiguous())
Y = Variable(torch.from_numpy(y).long().contiguous())
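In case it helps anyone else landing here: if the model is trained and saved on a GPU machine, the state dict can also be loaded directly onto the CPU via map_location when testing. A minimal sketch, assuming a Net class and a checkpoint.pth file as placeholders for your own model and path:

import torch
from torch.autograd import Variable

# on the GPU machine, after training
torch.save(net.state_dict(), 'checkpoint.pth')

# on the CPU-only machine
net_cpu = Net()  # same architecture as net
state = torch.load('checkpoint.pth', map_location=lambda storage, loc: storage)
net_cpu.load_state_dict(state)
net_cpu.eval()

X = Variable(torch.from_numpy(x).float().contiguous())
output = net_cpu(X)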