@harsha_g

I changed the class like this to pass each image into a transform function.

```
class DatasetProcessing(Dataset):
    # initialise the class variables - transform, data, target
    def __init__(self, data, target, transform=None):
        # self.transform = transform
        print("Before transformation")
        print(data.size())
        print(data[0].size())
        outputs = []
        datalen = data.size()[0]
        for i in range(datalen):
            tensor = transform(data[i, :, :, :])  # apply the transform to each image
            outputs.append(tensor)
        result = torch.cat(outputs, dim=1)
        print("After transformation")
        print(result.size())
        self.data = result.view(1, -1)
        # converting target to torch.LongTensor dtype
        self.target = target

    # retrieve the X and y values at index and return them
    def __getitem__(self, index):
        return (self.data[index], self.target[index])

    # returns the length of the data
    def __len__(self):
        return len(self.data)
```
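For comparison, here is a minimal sketch of what I believe the intended behaviour is (the class name `DatasetProcessingFixed` is mine, and I am assuming `data` is an `(N, 224, 224, 3)` tensor and `target` a sequence of labels): keep the batch dimension intact, convert each image from HWC to CHW, and apply the transform lazily in `__getitem__` instead of concatenating everything in `__init__`:

```python
import torch
from torch.utils.data import Dataset

class DatasetProcessingFixed(Dataset):
    # assumption: data has shape (N, 224, 224, 3); target is a sequence of labels
    def __init__(self, data, target, transform=None):
        self.data = data
        self.target = torch.as_tensor(target, dtype=torch.long)
        self.transform = transform

    def __getitem__(self, index):
        # HWC -> CHW so each sample has shape (3, 224, 224), which is what
        # most torchvision-style transforms and conv layers expect
        sample = self.data[index].permute(2, 0, 1)
        if self.transform is not None:
            sample = self.transform(sample)
        return sample, self.target[index]

    def __len__(self):
        # number of samples, not the length of a flattened view
        return self.data.size(0)
```

With this layout, a `DataLoader` with `batch_size=32` yields batches of shape `(32, 3, 224, 224)`, so flattening per sample gives the 3 * 224 * 224 = 150528 features the model seems to expect.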

Before transformation

torch.Size([1316, 224, 224, 3])

torch.Size([224, 224, 3])

After transformation

torch.Size([3, 294784, 3])

Error message:

RuntimeError: size mismatch, m1: [3 x 221760], m2: [150528 x 1536] at /opt/conda/conda-bld/pytorch_1587428190859/work/aten/src/TH/generic/THTensorMath.cpp:41

Output from the validation loop (the output should show a batch of 32 images with 32 labels, but I am not sure why it prints the output below):

**IMAGES**

torch.Size([3, 73920, 3])

**LABEL**

tensor([0., 0., 0.])