Measuring loss and error, what is torch.ne? (Solved; the discussion now covers decoder design)

I am experimenting with the [memory efficient DenseNet demo](https://github.com/gpleiss/efficient_densenet_pytorch/blob/master/demo.py).

When calculating the accuracy and error they use:

```python
# measure accuracy and record loss
batch_size = target.size(0)
_, pred = output.data.cpu().topk(1, dim=1)
error.update(torch.ne(pred.squeeze(), target.cpu()).float().sum() / batch_size, batch_size)
losses.update(loss.item(), batch_size)
```

`error` is an instance of a custom class defined earlier which keeps track of the errors:

```python
def update(self, val, n=1):
    self.val = val
    self.sum += val * n
    self.count += n
    self.avg = self.sum / self.count
```
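
For context, the surrounding class presumably follows the common `AverageMeter` pattern from the official PyTorch examples; a minimal sketch (the repo's exact definition may differ):

```python
class AverageMeter:
    """Tracks the most recent value and a running average."""
    def __init__(self):
        self.val = 0.0   # most recent value passed to update()
        self.sum = 0.0   # sum of values, weighted by sample count
        self.count = 0   # total number of samples seen
        self.avg = 0.0   # running average over all samples

    def update(self, val, n=1):
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count
```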

In particular, what is the `error.update` line doing? I cannot find `torch.ne` in the documentation, and I'm getting errors when I run this on my own data because my datatypes do not match up: `RuntimeError: Expected object of type torch.LongTensor but found type torch.FloatTensor for argument #2 'other'`

`error.update` updates the `AverageMeter` for the prediction errors, i.e. it keeps track of the average error, the sum of all falsely predicted samples, etc.
`torch.ne` performs the elementwise "not equal" operation. You can find the docs here: https://pytorch.org/docs/stable/generated/torch.ne.html
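
A quick illustration of what that error line computes, with made-up tensors:

```python
import torch

pred = torch.tensor([[2], [0], [1], [2]])  # output of topk(1, dim=1): shape [batch, 1]
target = torch.tensor([2, 1, 1, 0])        # ground-truth class indices

wrong = torch.ne(pred.squeeze(), target)   # elementwise "not equal": [False, True, False, True]
error_rate = wrong.float().sum() / target.size(0)
print(error_rate)                          # tensor(0.5000) -> half the batch misclassified
```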

The error probably points to the dtypes of the arguments to torch.ne.
Most likely `pred` is a `torch.LongTensor`, while your `target` is a `torch.FloatTensor` (note that the `.float()` in that line is applied to the result of `torch.ne`, not to `target`).
Try to cast `target` to long, or cast `pred` to float as well.
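
Concretely, either direction should resolve it (a sketch; recent PyTorch versions promote dtypes in comparisons automatically, so this mainly bites on older releases):

```python
# Option 1: make target a LongTensor to match pred
error.update(torch.ne(pred.squeeze(), target.cpu().long()).float().sum() / batch_size, batch_size)

# Option 2: compare in float instead
error.update(torch.ne(pred.squeeze().float(), target.cpu()).float().sum() / batch_size, batch_size)
```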

Thanks @ptrblck. I have a separate question for you.

For an EEG time series input I'm using a 1D CNN to look for N seizure events. Is there a built-in PyTorch function to output regressed event values (to indicate seizure intensity on a continuous spectrum [0, inf])? This seems a bit niche. My first hunch is to look into making a decoder which takes the CNN feature map and then decodes the N events until it reaches a stopping condition (like a seq2seq model).

Do you have a dataset containing the seizure intensities in [0, inf]?
If so, it seems like a regression use case to me, i.e. you could use a linear layer (maybe with a ReLU at the end) and something like nn.MSELoss as your criterion.
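
For example, a minimal sketch of such a regression head (the architecture, layer sizes, and window length are placeholders, not a recommendation):

```python
import torch
import torch.nn as nn

class SeizureIntensityNet(nn.Module):
    # Hypothetical model: 1D CNN feature extractor + linear regression head.
    def __init__(self, in_channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # collapse the time dimension
        )
        self.head = nn.Sequential(
            nn.Linear(16, 1),
            nn.ReLU(),                        # keeps the predicted intensity >= 0
        )

    def forward(self, x):                     # x: [batch, channels, time]
        feats = self.features(x).squeeze(-1)  # [batch, 16]
        return self.head(feats)               # [batch, 1] intensity in [0, inf)

model = SeizureIntensityNet()
criterion = nn.MSELoss()
x = torch.randn(8, 1, 1024)                   # dummy EEG windows
target = torch.rand(8, 1) * 10                # dummy intensities
loss = criterion(model(x), target)
loss.backward()
```

The final ReLU is one simple way to keep predictions non-negative, matching the [0, inf) intensity range.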

Could you explain the stopping condition a bit?

The intensities are continuous and have no upper bound. There are also no 0-intensity seizures (I guess that would mean no seizure, so it's not included).

I imagine the stopping condition working like in a seq2seq decoder.

  1. You feed the time series into the CNN, which produces the context/feature vector.
  2. A decoder (LSTM) then treats this like an NLP problem, e.g. Section 2.2 in the Pointer-generator network paper. You have a start token (e.g. `<SOS>`) which doesn't have any significance except to init the decoder. You then decode timestep by timestep, "event by event", using the feature/context vector and the previous decoder output until you reach a stopping condition. I'm not sure how the stopping condition is usually handled, but I assume it's similar in most seq2seq models. Instead of the decoder running a softmax over the probability distribution of a vocabulary, it would use an MSELoss criterion. This way it could produce N seizure events until it believes it has output all the seizure events from the EEG data (rough sketch below).
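
Roughly what I have in mind; this is just a sketch of the idea, and the explicit stop head with a 0.5 threshold is my own guess at one way to implement the stopping condition (all names are made up):

```python
import torch
import torch.nn as nn

class EventDecoder(nn.Module):
    """Hypothetical decoder: emits one (intensity, stop) pair per step,
    conditioned on the CNN context vector, until the stop signal fires."""
    def __init__(self, context_dim, hidden_dim=64):
        super().__init__()
        self.hidden_dim = hidden_dim
        # input at each step: context vector + previous intensity
        self.lstm = nn.LSTMCell(context_dim + 1, hidden_dim)
        self.intensity = nn.Linear(hidden_dim, 1)  # regressed event intensity
        self.stop = nn.Linear(hidden_dim, 1)       # "am I done?" logit

    def forward(self, context, max_events=20):
        batch = context.size(0)
        h = context.new_zeros(batch, self.hidden_dim)
        c = context.new_zeros(batch, self.hidden_dim)
        prev = context.new_zeros(batch, 1)  # plays the role of the <SOS> token
        events = []
        for _ in range(max_events):
            h, c = self.lstm(torch.cat([context, prev], dim=1), (h, c))
            intensity = torch.relu(self.intensity(h))  # keep intensities in [0, inf)
            events.append(intensity)
            prev = intensity
            # stopping condition (my guess): break once every sequence signals "done"
            if bool((torch.sigmoid(self.stop(h)) > 0.5).all()):
                break
        return torch.cat(events, dim=1)  # [batch, num_events_emitted]

decoder = EventDecoder(context_dim=16)
context = torch.randn(4, 16)  # dummy CNN feature vectors
print(decoder(context).shape)
```

During training the intensity outputs could presumably be supervised step by step with MSELoss, and the stop head with a binary loss.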

Again, I'm not sure what, if any, of this exists out of the box in PyTorch.