While I am training the network, getting TypeError: "'tuple' object is not callable" on the 'for' loop line of the training code

While I am training the network, I get TypeError: "'tuple' object is not callable" on the 'for' loop line of the training code. Attached are the concatenated train dataset, its trainloader, and the code for training the network. The same code worked for the non-concatenated train dataset. I am not sure what the issue is. Any help would be greatly appreciated.

```python
# Concatenate the 5 different training sets into 1 new training set
cifar_trainset_new = torch.utils.data.ConcatDataset([cifar_trainset, cifar_trainset2, cifar_trainset3, cifar_trainset4, cifar_trainset5])
cifar_trainloader_new = torch.utils.data.DataLoader(cifar_trainset_new, batch_size=4, shuffle=True)

cifar_trainset_new_size = len(cifar_trainset_new)
print(cifar_trainset_new_size)
```

Use the augmented training images to train the MLP with the best performance and report the new accuracy.

The MLP with the best performance is MLP2, which has an accuracy of 41%.

```python
mlp3 = MLP2().to(device)  # operate on GPU

# Define a loss function and optimizer
criterion3 = nn.CrossEntropyLoss()
optimizer3 = optim.SGD(mlp3.parameters(), lr=0.001, momentum=0.9)

# Training the network
n_epoch3 = 20
for epoch3 in range(n_epoch3):  # loop over the dataset multiple times
    running_loss3 = 0.0
    for i, data in enumerate(cifar_trainloader_new, 0):
        # get the inputs
        inputs, labels = data
        inputs = inputs.to(device)
        labels = labels.to(device)

        # zero the parameter gradients
        optimizer3.zero_grad()

        # forward + backward + optimize
        output3 = mlp3(inputs)
        loss3 = criterion3(output3, labels)
        loss3.backward()
        optimizer3.step()

        # print statistics
        running_loss3 += loss3.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss3: %.3f' % (epoch3 + 1, i + 1, running_loss3 / 2000))
            running_loss3 = 0.0

print('Finished Training3 with new augmented training images')

# Save the trained model
PATH = './mlp2_cifar10.pth'
torch.save(mlp3.state_dict(), PATH)

# Reload the model
mlp3 = MLP2().to(device)
mlp3.load_state_dict(torch.load(PATH))

# Evaluate the classification performance on the testing set
correct3 = 0
total3 = 0
with torch.no_grad():
    for data in cifar_testloader:
        images, labels = data
        images = images.to(device)
        labels = labels.to(device)

        output3 = mlp3(images)
        _, predicted3 = torch.max(output3.data, 1)
        total3 += labels.size(0)
        correct3 += (predicted3 == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct3 / total3))
```

====== Error =======

```
TypeError                                 Traceback (most recent call last)
in <module>()
     14 for epoch3 in range(n_epoch3): # loop over the dataset multiple times
     15     running_loss3 = 0.0
---> 16     for i, data in enumerate(cifar_trainloader_new, 0):
     17         # TODO: write training code
     18         # get the inputs

7 frames

in __call__(self, tensor)
     26
     27     def __call__(self, tensor):
---> 28         return tensor + torch.randn(tensor.size()) * self.std + self.mean
     29
     30     def __repr__(self):

TypeError: 'tuple' object is not callable
```

Could you post the transformation you used and how it's applied to the datasets? Based on the stack trace I assume you are using the noise addition from this post?

Sharing the transformations I applied to the datasets here for reference.

Data augmentation techniques applied to the training set:

```python
# original training set:
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

# Shifting up/down and left/right by within 10%:
transform2 = transforms.Compose([transforms.RandomAffine(degrees=0, translate=(0.1, 0.1), shear=0), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

# Rotating:
transform3 = transforms.Compose([transforms.RandomAffine(degrees=30, shear=0), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

# Horizontal flipping:
transform4 = transforms.Compose([transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

# Adding small Gaussian noise:
class AddGaussianNoise(object):
    def __init__(self, mean=0., std=1.):
        self.std = std
        self.mean = mean

    def __call__(self, tensor):
        return tensor + torch.randn(tensor.size()) * self.std + self.mean

    def __repr__(self):
        return self.__class__.__name__ + '(mean={0}, std={1})'.format(self.mean, self.std)

transform5 = transforms.Compose([AddGaussianNoise(0., 1.), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

#transform6 = transforms.Compose([transforms.RandomAffine(degrees=0, translate=(0.1,0.1), shear=0), transforms.RandomAffine(degrees=30, shear=0), transforms.RandomHorizontalFlip(), AddGaussianNoise(0.,1.), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

# TODO: load the CIFAR-10 dataset and build dataloaders
cifar_trainset = datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
cifar_trainset2 = datasets.CIFAR10(root='./data', train=True, download=True, transform=transform2)
cifar_trainset3 = datasets.CIFAR10(root='./data', train=True, download=True, transform=transform3)
cifar_trainset4 = datasets.CIFAR10(root='./data', train=True, download=True, transform=transform4)
cifar_trainset5 = datasets.CIFAR10(root='./data', train=True, download=True, transform=transform5)

# Concatenate the 5 different training sets into 1 new training set
cifar_trainset_new = torch.utils.data.ConcatDataset([cifar_trainset, cifar_trainset2, cifar_trainset3, cifar_trainset4, cifar_trainset5])
cifar_trainloader_new = torch.utils.data.DataLoader(cifar_trainset_new, batch_size=4, shuffle=True)

cifar_trainset_new_size = len(cifar_trainset_new)
print(cifar_trainset_new_size)

#cifar_trainloader = torch.utils.data.DataLoader(cifar_trainset6, batch_size=4, shuffle=True)
#cifar_trainset6_size = len(cifar_trainset6)
#print(cifar_trainset6_size)

cifar_testset = datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
cifar_testloader = torch.utils.data.DataLoader(cifar_testset, batch_size=1, shuffle=False)

cifar_testset_size = len(cifar_testset)
print(cifar_testset_size)

print("CIFAR10 new training dataset:\n ", cifar_trainset_new)
print("CIFAR10 testing dataset:\n ", cifar_testset)
```

AddGaussianNoise should be applied to a tensor, not a PIL.Image, so you would need to add this transformation after the ToTensor() transform:

```python
transform5 = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), AddGaussianNoise(0., 1.)])
```
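For background on why the original order fails: before ToTensor() runs, the dataset hands the transform a PIL image, and PIL.Image.size is a tuple attribute, whereas torch.Tensor.size() is a method. So tensor.size() inside AddGaussianNoise is really tuple(), which raises the error. A minimal sketch illustrating the difference (the dummy image below is only for demonstration, not from the thread):

```python
import torch
from PIL import Image

img = Image.new("RGB", (32, 32))   # dummy PIL image, standing in for a raw CIFAR-10 sample
print(type(img.size))              # <class 'tuple'> -> img.size() raises "'tuple' object is not callable"

t = torch.zeros(3, 32, 32)         # what ToTensor() would produce
print(t.size())                    # torch.Size([3, 32, 32]) -> callable, so the noise line works
```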

PS: you can post code snippets by wrapping them into three backticks ```, which makes debugging easier. I’ve formatted the code for you.


Thanks @ptrblck. This change really helps. Thanks for sharing the knowledge.