How to use CrossEntropyLoss()

Good morning.
I’m using a Variational Autoencoder with this loss function:

def loss_function(self, x_hat, x, mu, logvar, β=1):
	# Reconstruction + β * KL divergence losses summed over all elements and batch
	loss = nn.CrossEntropyLoss()
	CE = loss(x_hat, x)
	KLD = 0.5 * torch.sum(logvar.exp() - logvar - 1 + mu.pow(2))
	return CE + β * KLD

x is my input image of size [batch_size, 1, 16, 16]. When I start the training phase, it returns this error:

RuntimeError: Expected object of scalar type Long but got scalar type Float for argument #2 'target' in call to _thnn_nll_loss2d_forward

So I edited it like this: CE = loss(x_hat, x.long()), but it returns this error:

RuntimeError: 1only batches of spatial targets supported (3D tensors) but got targets of size: : [256, 1, 16, 16]

How can I use cross entropy correctly?

What is x_hat? The second argument to CrossEntropyLoss should be the labels (integer class indices), not a float tensor.
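To illustrate, here is a minimal sketch (with made-up tensor sizes) of how CrossEntropyLoss is normally called for plain classification: the first argument is raw logits, the second is a Long tensor of class indices.

```python
import torch
import torch.nn as nn

ce = nn.CrossEntropyLoss()

logits = torch.randn(4, 10)           # [batch, num_classes], raw (unnormalized) scores
labels = torch.randint(0, 10, (4,))   # [batch], class indices in [0, num_classes), dtype long

loss = ce(logits, labels)             # scalar tensor
```

Note there is no softmax before the loss: CrossEntropyLoss applies log-softmax internally.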

CrossEntropyLoss expects arguments of specific shapes and value ranges. Search for “Shape:” in the documentation for this class for a precise description of what this criterion expects.