How to use batch size with CrossEntropyLoss()

I am using this code to load images and add a date stamp, which I then want my model to remove.

import datetime
import glob
import os
import time

import cv2
import numpy as np
import torchvision.transforms as transforms
from natsort import natsorted
from torch.utils.data import DataLoader, Dataset

transform = transforms.Compose([transforms.ToTensor()])

class ImageL(Dataset):
    def __init__(self, folder, width, height, transform):
        self.folder = folder
        self.transform = transform
        self.images = glob.glob(os.path.join(folder, 'jpg', '*', '*.jpg'))
        self.total_imgs = natsorted(self.images)

        # timestamp string used as the date stamp
        ts = time.time()
        self.ts = datetime.datetime.fromtimestamp(ts).strftime('%d-%m-%Y_%H-%M-%S')
        self.width = width
        self.height = height

    def __len__(self):
        return len(self.total_imgs)

    def __getitem__(self, i):
        st = self.ts
        width = self.width
        height = self.height
        font = cv2.FONT_HERSHEY_SIMPLEX

        img_loc = self.total_imgs[i]
        img = cv2.imread(img_loc)

        # draw the date stamp on a copy, otherwise cv2.putText modifies the clean image in place
        img_noise = cv2.putText(img.copy(), st, (10, 500), font, 1, (255, 255, 255), 2)
        img_noise = cv2.resize(img_noise, (width, height))

        img = cv2.resize(img, (width, height))

        img = np.asarray(img)
        img_noise = np.asarray(img_noise)

        # ToTensor already scales to [0, 1] and moves the channel dim to the front,
        # so no manual astype / moveaxis is needed
        img = self.transform(img)
        img_noise = self.transform(img_noise)
        return (img_noise, img)

batch_size = 32
width = 48
height = 48
my_dataset = ImageL('/Users/knutjorgenbjuland/PycharmProjects/autoencoder', width, height, transform)
trainset = DataLoader(my_dataset, batch_size=batch_size, shuffle=False,
                      num_workers=4, drop_last=True)
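Each batch from trainset should then be a pair of (noisy, clean) tensors of shape [32, 3, 48, 48], since ToTensor moves the channels to the front:

img_noise_batch, img_batch = next(iter(trainset))
print(img_noise_batch.shape)  # torch.Size([32, 3, 48, 48])
print(img_batch.shape)        # torch.Size([32, 3, 48, 48])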

I used this article as the source for my code: https://medium.com/@garimanishad/reconstruct-corrupted-data-using-denoising-autoencoder-python-code-aeaff4b0958e

However, when I call loss = criterion(outputs, labels)

I get this error:

RuntimeError: only batches of spatial targets supported (3D tensors) but got targets of dimension: 4

nn.CrossEntropyLoss expects the model output to contain a class dimension, i.e. to have the shape [batch_size, nb_classes, *additional_dims]. The target should not contain this class dimension; it should have the shape [batch_size, *additional_dims] and contain class indices in the range [0, nb_classes-1], as described in the docs.
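For a spatial use case this would look like the following; nb_classes = 10 and the 48x48 spatial size are just placeholder values:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

batch_size, nb_classes, height, width = 32, 10, 48, 48
output = torch.randn(batch_size, nb_classes, height, width)         # logits with a class dim
target = torch.randint(0, nb_classes, (batch_size, height, width))  # class indices, 3D, no class dim

loss = criterion(output, target)  # works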

You are not supposed to set a batch size for any layer or criterion in PyTorch, so I guess your target might be one-hot encoded and thus has the additional, unwanted dimension.
If that’s the case, use target = torch.argmax(target, dim=1) to create the target tensor in its expected shape.
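Here is a small sketch of that conversion, assuming the one-hot encoding sits in dim1:

import torch
import torch.nn.functional as F

indices = torch.randint(0, 10, (32, 48, 48))                      # expected target: class indices
one_hot = F.one_hot(indices, num_classes=10).permute(0, 3, 1, 2)  # [32, 10, 48, 48] -> 4D, raises your error

target = torch.argmax(one_hot, dim=1)  # back to [32, 48, 48]
print(torch.equal(target, indices))    # True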

PS: you can post code snippets by wrapping them in three backticks ```, which makes debugging easier. :wink: