I’m trying to use MSE loss on a batch in the following way:
My CNN’s output is a vector of 32 samples, so with a batch size of 4 the output has shape 4x32. Each output vector needs to have its loss computed against a corresponding target vector. Then I want to take each per-vector loss, call backward on it, and so on.
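To illustrate what I mean by a per-vector loss, here is a minimal sketch with random tensors standing in for my network's output and targets (the names `output` and `target` are just placeholders):

```python
import torch
import torch.nn as nn

batch_size = 4
# Stand-ins for netS's output and the metadata targets
output = torch.randn(batch_size, 32, requires_grad=True)
target = torch.randn(batch_size, 32)

criterion = nn.MSELoss(reduction='none')
elementwise = criterion(output, target)  # shape (4, 32): one loss per element
per_vector = elementwise.mean(dim=1)     # one MSE value per 32-sample vector
print(per_vector.shape)                  # torch.Size([4])
```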
The code as it is right now:
loss_criterion = nn.MSELoss(reduction='none')
...
...
for batch_idx, data in enumerate(training_loader, 0):
optimizerS.zero_grad()
simulated_spec = netS(data['image_tensor'], batch_size)
    S_loss = loss_criterion(simulated_spec, data['metadata_tensor'])
S_loss.backward()
optimizerS.step()
S_loss is now a tensor of shape (batch_size, 32), and I get a runtime error:
raise RuntimeError("grad can be implicitly created only for scalar outputs")
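For reference, a minimal reproduction of the error outside my training loop (random tensors replace my real data):

```python
import torch
import torch.nn as nn

criterion = nn.MSELoss(reduction='none')
out = torch.randn(4, 32, requires_grad=True)
tgt = torch.randn(4, 32)
loss = criterion(out, tgt)  # shape (4, 32), not a scalar

msg = ""
try:
    loss.backward()  # fails: backward() needs a scalar or an explicit gradient
except RuntimeError as e:
    msg = str(e)
print(msg)
```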
How can this be solved, and how do I get the MSE loss for every 32-sample vector?