Result type Float can't be cast to the desired output type Byte

Hi, I am trying to build a model to classify knee MRIs. I am using the torchio library, and I have successfully trained a classifier on the raw data. Now I am adding masks to the MRI subjects and want to pass them to a model. I use the same code, but now I am getting this error:

RuntimeError                              Traceback (most recent call last)
<ipython-input-56-2a7d5ddf84fc> in <module>()
     26 for epoch in range(1, num_epochs+1):
     27 
---> 28     train_loss, train_accuracy = training(epoch,model,train_loader,optimizer,criterion)
     29     valid_loss, valid_accuracy = validation(epoch,model,valid_loader,criterion)
     30 

6 frames
<ipython-input-55-59a55a26f2b7> in training(epoch, model, train_loader, optimizer, criterion)
      8   model.train()
      9 
---> 10   for i, batch in tqdm(enumerate(train_loader,0)):
     11     images = batch['t1'][tio.DATA].cuda()
     12     labels = batch['label']

/usr/local/lib/python3.7/dist-packages/tqdm/notebook.py in __iter__(self)
    255     def __iter__(self):
    256         try:
--> 257             for obj in super(tqdm_notebook, self).__iter__():
    258                 # return super(tqdm...) will not catch exception
    259                 yield obj

/usr/local/lib/python3.7/dist-packages/tqdm/std.py in __iter__(self)
   1193 
   1194         try:
-> 1195             for obj in iterable:
   1196                 yield obj
   1197                 # Update and possibly print the progressbar.

/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in __next__(self)
    519             if self._sampler_iter is None:
    520                 self._reset()
--> 521             data = self._next_data()
    522             self._num_yielded += 1
    523             if self._dataset_kind == _DatasetKind.Iterable and \

/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in _next_data(self)
   1201             else:
   1202                 del self._task_info[idx]
-> 1203                 return self._process_data(data)
   1204 
   1205     def _try_put_index(self):

/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in _process_data(self, data)
   1227         self._try_put_index()
   1228         if isinstance(data, ExceptionWrapper):
-> 1229             data.reraise()
   1230         return data
   1231 

/usr/local/lib/python3.7/dist-packages/torch/_utils.py in reraise(self)
    432             # instantiate since we don't know how to
    433             raise RuntimeError(msg) from None
--> 434         raise exception
    435 
    436 

RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
    return self.collate_fn(data)
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/collate.py", line 74, in default_collate
    return {key: default_collate([d[key] for d in batch]) for key in elem}
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/collate.py", line 74, in <dictcomp>
    return {key: default_collate([d[key] for d in batch]) for key in elem}
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/collate.py", line 74, in default_collate
    return {key: default_collate([d[key] for d in batch]) for key in elem}
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/collate.py", line 74, in <dictcomp>
    return {key: default_collate([d[key] for d in batch]) for key in elem}
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/collate.py", line 56, in default_collate
    return torch.stack(batch, 0, out=out)
**RuntimeError: result type Float can't be cast to the desired output type Byte**

The training() function is this:

def training(epoch, model, train_loader, optimizer, criterion):
  "Training over an epoch"
  metric_monitor = MetricMonitor()
  model.train()

  for i, batch in tqdm(enumerate(train_loader, 0)):
    images = batch['t1'][tio.DATA].cuda()
    labels = batch['label'].cuda()
    if images.sum() != 0:
      output = F.softmax(model(images), dim=1)

      loss = criterion(output, labels)

      output = output.data.max(dim=1, keepdim=True)[1]
      output = output.view(-1)

      acc = calculate_acc(output, labels)

      metric_monitor.update("Loss", loss.item())
      metric_monitor.update("Accuracy", acc)

      optimizer.zero_grad()
      loss.backward()
      optimizer.step()

  print("[Epoch: {epoch:03d}] Train      | {metric_monitor}".format(epoch=epoch, metric_monitor=metric_monitor))
  return metric_monitor.metrics['Loss']['avg'], metric_monitor.metrics['Accuracy']['avg']

When I run the code, some batches pass, but it stops on certain batches and produces this error. I have checked my data, and all of the image tensors are torch.FloatTensor; the labels are integers.
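A quick way to double-check this is to scan every sample's dtypes before training. The sketch below is a hypothetical helper (`find_dtype_mismatches` is not part of any library) that uses a toy list of dicts standing in for the real torchio SubjectsDataset; with torchio you would inspect `sample['t1'][tio.DATA].dtype` instead:

```python
import torch

def find_dtype_mismatches(dataset, tensor_keys):
    """Record, per key, every dtype seen across the dataset and which
    sample indices carry it. A key mapped to more than one dtype is
    exactly what makes default_collate fail."""
    seen = {key: {} for key in tensor_keys}
    for idx in range(len(dataset)):
        sample = dataset[idx]
        for key in tensor_keys:
            dtype = sample[key].dtype
            seen[key].setdefault(dtype, []).append(idx)
    return seen

# Toy dataset: sample 2's mask is uint8 while the others are float32.
toy = [
    {"t1": torch.zeros(1, 4, 4, 4), "mask": torch.zeros(1, 4, 4, 4)},
    {"t1": torch.zeros(1, 4, 4, 4), "mask": torch.zeros(1, 4, 4, 4)},
    {"t1": torch.zeros(1, 4, 4, 4), "mask": torch.zeros(1, 4, 4, 4, dtype=torch.uint8)},
]

report = find_dtype_mismatches(toy, ["t1", "mask"])
for key, dtypes in report.items():
    if len(dtypes) > 1:
        print(f"{key}: mixed dtypes {list(dtypes)}")
```

Running this over the real dataset would pinpoint which key (image, mask, or label) and which subjects have the odd dtype.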

Hi

This is an old post, but I am answering in case others run into this issue as I just did.

Your issue may be in how you are adding the mask to the MRI object. The torch.stack(batch, 0, out=out) call raises this error because it cannot stack items from your dataloader whose dtypes differ.

I had this same issue: the torchio RandomAffine transform converted some tensors to torch.FloatTensor, while the non-transformed tensors remained torch.ByteTensor.
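The failure is easy to reproduce outside the DataLoader. default_collate pre-allocates an output buffer, and stacking a float tensor into a byte-typed `out` produces exactly this message; forcing every sample to a single dtype before batching avoids it. A minimal sketch with plain torch (no torchio; the variable names are illustrative):

```python
import torch

byte_mask = torch.zeros(2, 2, dtype=torch.uint8)  # e.g. an untransformed mask
float_mask = torch.rand(2, 2)                     # e.g. a mask after RandomAffine

# Stacking a mixed-dtype batch into a Byte out buffer fails: the batch
# promotes to Float, which cannot be cast back down to Byte.
out = torch.empty(2, 2, 2, dtype=torch.uint8)
try:
    torch.stack([byte_mask, float_mask], dim=0, out=out)
except RuntimeError as e:
    print(e)

# Fix: cast everything to one dtype before batching.
batch = torch.stack([byte_mask.float(), float_mask], dim=0)
print(batch.dtype)  # torch.float32
```

With torchio, one way to apply this fix is to cast the mask tensor to float (e.g. `mask_tensor.float()`) when building the Subject, so transformed and untransformed samples always share a dtype.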

How did you fix this issue? It is indeed the masks that are causing this error in my case, but I've checked that they are all of the same type. How do I proceed?