RuntimeError: expected scalar type Half but found Float

import torch
import torch.nn as nn
print(torch.__version__)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)

class Net(nn.Module):
  def __init__(self):
    super(Net, self).__init__()
    self.rnn = nn.LSTMCell(10, 20)

  @torch.cuda.amp.autocast()
  def forward(self, input):
    # Fails here: RuntimeError: expected scalar type Half but found Float
    hx, cx = self.rnn(input)
    return hx

model = Net().to(device)
input = torch.randn(3, 10).to(device)
scaler = torch.cuda.amp.GradScaler()
with torch.cuda.amp.autocast():
  output = model(input)
print(output.size())

The snippet above raises the error in the title. If I replace the LSTMCell with an nn.Linear layer, the otherwise identical code runs without any error:

import torch
import torch.nn as nn
print(torch.__version__)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)

class Net(nn.Module):
  def __init__(self):
    super(Net, self).__init__()
    self.rnn = nn.Linear(10, 20)  # only change: Linear instead of LSTMCell

  @torch.cuda.amp.autocast()
  def forward(self, input):
    hx = self.rnn(input)
    return hx

model = Net().to(device)
input = torch.randn(3, 10).to(device)
# scaler = torch.cuda.amp.GradScaler()
with torch.cuda.amp.autocast():
  output = model(input)
print(output.size())
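As a sanity check (assuming a CUDA device; torch.cuda.amp.autocast is a no-op on the CPU), the Linear output comes back as float16, which shows autocast is handling that path:

with torch.cuda.amp.autocast():
  output = model(input)
print(output.dtype)  # torch.float16 on CUDA: linear is on autocast's FP16 cast list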

How can I fix the first piece of code?

Is there an expert who could take a look at this problem?

CC @mcarilli to have a look at this, as it’s reproducible in 1.7.0.dev20200728.

Thanks, I'll keep an eye on it.

It appears LSTMCell and GRUCell have dedicated autograd operations that type-check inputs. I should add those to the FP16 cast list.
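For contrast, a plain GEMM is already covered, which you can verify directly (a quick check, assuming a CUDA device):

a = torch.randn(3, 10, device="cuda")
b = torch.randn(10, 20, device="cuda")
with torch.cuda.amp.autocast():
  c = a @ b
print(c.dtype)  # torch.float16: matmul is on autocast's FP16 cast list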

Filed issue https://github.com/pytorch/pytorch/issues/42605. Should be a straightforward fix, thanks for reporting this. I thought the cells were implemented in terms of autograd-exposed primitive ops (e.g., gemms) that autocast already covers.
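Until that fix lands, one possible workaround (a sketch of one option, not the eventual fix) is to locally disable autocast inside forward and run the cell in FP32:

class Net(nn.Module):
  def __init__(self):
    super(Net, self).__init__()
    self.rnn = nn.LSTMCell(10, 20)

  def forward(self, input):
    # Locally disable autocast and run the cell in full precision;
    # .float() is a no-op if the input is already float32.
    with torch.cuda.amp.autocast(enabled=False):
      hx, cx = self.rnn(input.float())
    return hx

The rest of the network can stay under autocast; only the cell runs in FP32.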


Thank you for your answer