Can you create a CVAE on the MNIST dataset in PyTorch without using CUDA?

I have Windows 10 and an Intel GPU, so I can't use CUDA, which needs an NVIDIA GPU. I was trying to run a CVAE on the MNIST dataset with use_cuda=True, but when model.cuda() is called I get:

AssertionError: Torch not compiled with CUDA enabled

If I set use_cuda to False I get the following error instead:

optimizer = optim.Adam(model.parameters(), lr=1e-3)
TypeError: 'collections.OrderedDict' object is not callable

I wanted to know if there is a way to solve these errors, and if not, whether it is still possible to create a CVAE on the MNIST dataset without using CUDA. I would appreciate any help with this, and pointers to any tutorials that build a CVAE on MNIST in PyTorch without CUDA. Thanks!

Can you post a minimal reproducible example of this error? (It helps with debugging :) )

As you have an Intel GPU you can’t use CUDA. So, remove all references in your code to .cuda() etc.
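For example, instead of calling .cuda(), a device-agnostic version usually looks something like this (model and train_loader here stand in for whatever you have defined):

```python
import torch

# Use the GPU only if one is actually available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = model.to(device)           # move the model's parameters once
for data, labels in train_loader:  # move each batch as you use it
    data = data.to(device)
    labels = labels.to(device)
```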

The error you get, TypeError: 'collections.OrderedDict' object is not callable, comes from the line optim.Adam(model.parameters(), lr=1e-3).

Can you check what both model and model.parameters (no brackets here) are? model.parameters should be a bound method, and model.parameters() should return an iterator over the model's parameters (it's model.state_dict() that returns an OrderedDict). If model.parameters is itself an OrderedDict, something has gone wrong with how your model was set up.
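For example, you could print these right before creating the optimizer (model here being the CVAE instance from your script):

```python
print(type(model))             # expect your CVAE class, i.e. an nn.Module subclass
print(type(model.parameters))  # expect a bound method, not an OrderedDict
```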

TL;DR - share a minimal reproducible example of this error so we can debug it.

Thanks for the response. I am unsure what you mean by a minimal reproducible example of this error, but this is the code I found online that I am trying to run without CUDA. I hope this helps give more context. I also tried changing model.parameters() to model.parameters, but then I get the error ValueError: optimizer got an empty parameter list.

from __future__ import print_function
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import torch
import torch.utils.data
from torch import nn, optim
from collections import OrderedDict
from torch.autograd import Variable
from torch.nn import functional as F
from torchvision import datasets, transforms
from torchvision.utils import save_image
%matplotlib inline

use_cuda = False
batch_size = 32
latent_size = 20 # z dim

kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
                   transform=transforms.ToTensor()),
    batch_size=batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False,
                   transform=transforms.ToTensor()),
    batch_size=batch_size, shuffle=True, **kwargs)
def to_var(x):
    x = Variable(x)
    return x

def one_hot(labels, class_size):
    targets = torch.zeros(labels.size(0), class_size)
    for i, label in enumerate(labels):
        targets[i, label] = 1
    return to_var(targets)

def loss_function(recon_x, x, mu, logvar):
    BCE = F.binary_cross_entropy(recon_x, x.view(-1, 28*28))
    KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return BCE + KLD
class CVAE(nn.Module):
    def init(self, feature_size, latent_size, class_size):
        super(CVAE, self).init()
        self.feature_size = feature_size
        self.class_size = class_size

        self.fc1  = nn.Linear(feature_size + class_size, 400)
        self.fc21 = nn.Linear(400, latent_size)
        self.fc22 = nn.Linear(400, latent_size)

        self.fc3 = nn.Linear(latent_size + class_size, 400)
        self.fc4 = nn.Linear(400, feature_size)

        self.relu = nn.ReLU()
        self.sigmoid = nn.Sigmoid()

    def encode(self, x, c):
        '''
        x: (bs, feature_size)
        c: (bs, class_size)
        '''
        inputs = torch.cat([x, c], 1)  # (bs, feature_size+class_size)
        h1 = self.relu(self.fc1(inputs))
        z_mu = self.fc21(h1)
        z_var = self.fc22(h1)
        return z_mu, z_var

    def reparametrize(self, mu, logvar):
        if self.training:
            std = logvar.mul(0.5).exp_()
            eps = Variable(std.data.new(std.size()).normal_())
            return eps.mul(std) + mu
        else:
            return mu

    def decode(self, z, c):  # P(x|z, c)
        '''
        z: (bs, latent_size)
        c: (bs, class_size)
        '''
        inputs = torch.cat([z, c], 1)  # (bs, latent_size+class_size)
        h3 = self.relu(self.fc3(inputs))
        return self.sigmoid(self.fc4(h3))

    def forward(self, x, c):
        mu, logvar = self.encode(x.view(-1, 28*28), c)
        z = self.reparametrize(mu, logvar)
        return self.decode(z, c), mu, logvar

def train(epoch):
    model.train()
    train_loss = 0
    for batch_idx, (data, labels) in enumerate(train_loader):
        data = to_var(data)
        labels = one_hot(labels, 10)
        recon_batch, mu, logvar = model(data, labels)
        optimizer.zero_grad()
        loss = loss_function(recon_batch, data, mu, logvar)
        loss.backward()
        train_loss += loss.data[0]
        optimizer.step()
        if batch_idx % 500 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader),
                loss.data[0] / len(data)))

def test(epoch):
    model.eval()
    test_loss = 0
    for i, (data, labels) in enumerate(test_loader):
        data = to_var(data)
        labels = one_hot(labels, 10)
        recon_batch, mu, logvar = model(data, labels)
        test_loss += loss_function(recon_batch, data, mu, logvar).data[0]
        if i == 0:
            n = min(data.size(0), 8)
            comparison = torch.cat([data[:n],
                                    recon_batch.view(batch_size, 1, 28, 28)[:n]])
            save_image(comparison.data.cpu(),
                       'results/reconstruction_' + str(epoch) + '.png', nrow=n)

    test_loss /= len(test_loader.dataset)
    print('====> Test set loss: {:.4f}'.format(test_loss))

model = CVAE(28*28, latent_size, 10)
optimizer = optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(1, 11):
    train(epoch)

This should be super(CVAE, self).__init__()

Can you re-run it with this change and let me know what errors you get?

I get the same error, TypeError: 'collections.OrderedDict' object is not callable, or ValueError: optimizer got an empty parameter list if model.parameters is used instead of model.parameters().

Now replace model.parameters with model.parameters(), and keep super(CVAE, self).__init__() with the underscores.

I still get TypeError: 'collections.OrderedDict' object is not callable after these edits.

On what line does this error appear? I need a bit more context on which line is causing it.

Also, I see you're using Variable and .data. I'd advise against using them: Variable is deprecated, and .data bypasses autograd's tracking on your tensors.

Secondly, can you edit your initial comment with the code and make sure it's all indented properly? It's pretty hard to read without the indentation! Make sure to wrap the entire code block in three backticks (```) so it's highlighted properly.

Also, can you double-check that optimizer.zero_grad() and optimizer.step() are being called on the optimizer instance (the Adam object you created), not on the torch.optim module itself? Those methods only exist on the instance.
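i.e. in the training loop the update step should look something like:

```python
optimizer.zero_grad()   # zero_grad/step are methods of the Adam instance
loss.backward()
optimizer.step()        # torch.optim itself has no zero_grad()/step()
```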

optimizer = optim.Adam(model.parameters(), lr=1e-3)
This is the line that causes the 'collections.OrderedDict' object is not callable error.
I tried to fix the indentation in my initial comment, but while it shows up in the text box when I am typing, it is not preserved in the actual message once I save the edit. I also double-checked optimizer.zero_grad() and optimizer.step().
What would you use as an alternative to Variable and .data?

It's because you have other ```'s within your comment, so part of the code is rendered inside a code block and part outside. Make sure there's just one at the top and one at the bottom. Also, tabs don't seem to survive, so you'll have to use spaces rather than tabs to get the right indentation.

Variable has been deprecated since 0.4 (IIRC), so you don't need it at all. You don't need .data either: if you just want to keep track of a value without backpropagating through it, use .detach() (or .item() for a Python scalar) instead.
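As a rough sketch (keeping the rest of your code as-is, and letting one_hot return targets directly instead of to_var(targets)), the training loop could drop Variable and .data entirely:

```python
def train(epoch):
    model.train()
    train_loss = 0
    for batch_idx, (data, labels) in enumerate(train_loader):
        # plain tensors are fine here, no to_var()/Variable needed
        labels = one_hot(labels, 10)
        optimizer.zero_grad()
        recon_batch, mu, logvar = model(data, labels)
        loss = loss_function(recon_batch, data, mu, logvar)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()   # .item() instead of loss.data[0]
```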

When you updated init to __init__, did you do it in both places below (the def line and the super() call)? If not, that could be the problem: the model is never initialized as an nn.Module, and hence model.parameters() won't behave properly.

should be

def __init__(self, feature_size, latent_size, class_size):
  super(CVAE, self).__init__()
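Once both are renamed, a quick sanity check like this should confirm that the optimizer can actually see the parameters:

```python
model = CVAE(28*28, latent_size, 10)
print(sum(p.numel() for p in model.parameters()))     # should be > 0 now
optimizer = optim.Adam(model.parameters(), lr=1e-3)   # should no longer raise
```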