AttributeError: 'tuple' object has no attribute 'log_softmax'

Hi,
Could you please help me find why the following error happens?
When I try to train my model I get the message AttributeError: 'tuple' object has no attribute 'log_softmax'
at this line:
loss = criterion(output, target)

And indeed, when I print output, I have a tuple like this:

(tensor([[ 0.3656, -0.2767],
         [ 0.3415, -0.4962]], grad_fn=)

My input is [16, 100], where 16 is the number of features and 100 is the batch size.

How can I get rid of the tuple?
Below is how my model is defined.
Sorry for my poor English.
Thank you very much in advance!

N_INPUTS = 16
N_NEURONS = 5
N_OUTPUTS = 2
n_layers = 2
drop_prob = 0.85

class ImageRNN(nn.Module):
    def __init__(self, batch_size, n_inputs, n_neurons, drop_prob, n_outputs, n_layers):
        super(ImageRNN, self).__init__()
        self.n_neurons = n_neurons  # hidden_size
        self.batch_size = batch_size
        self.n_inputs = n_inputs
        self.n_outputs = n_outputs
        self.n_layers = n_layers
        self.drop_prob = drop_prob
        self.lstm = nn.LSTM(self.n_inputs, self.n_neurons, self.n_layers,
                            dropout=drop_prob, batch_first=False)
        self.dropout = nn.Dropout(drop_prob)
        self.FC = nn.Linear(self.n_neurons, self.n_outputs)

    def init_hidden(self):
        # Create zero-initialized (h_0, c_0) states with the same dtype/device
        # as the model parameters.
        weight = next(self.parameters()).data
        hidden = (weight.new(self.n_layers, self.batch_size, self.n_neurons).zero_(),
                  weight.new(self.n_layers, self.batch_size, self.n_neurons).zero_())
        return hidden

    def forward(self, X):
        X = X.unsqueeze(dim=0)  # add a seq_len dimension of 1
        self.hidden = self.init_hidden()
        r_output, hidden = self.lstm(X, self.hidden)
        out = self.dropout(r_output)
        out = self.FC(out)
        out = out.view(-1, self.n_outputs)
        return out, self.hidden  # note: forward returns a tuple

model = ImageRNN(batch_size, N_INPUTS, N_NEURONS, drop_prob, N_OUTPUTS, n_layers)
print(model)


I guess you are passing your output directly to your criterion, which seems to be nn.CrossEntropyLoss.
Note that your model returns two outputs, i.e. out and self.hidden.
You could assign two output variables to your model’s output:

output, hidden = model(data)

or pass output[0] to your criterion.
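
Putting it together, a minimal sketch of the corrected training step (assuming the usual criterion and optimizer objects, which aren't shown in your post):

output, hidden = model(data)      # unpack the (out, hidden) tuple from forward
loss = criterion(output, target)  # pass only the logit tensor to the loss
optimizer.zero_grad()
loss.backward()
optimizer.step()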


Hi,
I have a similar problem as above, and I assigned output and hidden separately, but I still get the same log_softmax error. I'm using CrossEntropyLoss as well.
class RNN(nn.Module):
    def __init__(self, num_classes, input_size, hidden_size, num_layers, output_size):
        super(RNN, self).__init__()
        self.num_classes = num_classes
        self.num_layers = num_layers
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.out = torch.nn.Linear(hidden_size, input_size)
        self.rnn = nn.RNN(4, 5, 2, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # Initialize hidden state:
        # (num_layers * num_directions, batch, hidden_size) for batch_first=True
        batch_size = x.size(0)
        hidden = self.init_hidden(batch_size)
        out, hidden = self.rnn(x, hidden)
        out = out.contiguous().view(-1, self.hidden_size)
        out = self.fc(out)
        return out, hidden

    def init_hidden(self, batch_size):
        hidden = torch.zeros(self.num_layers, batch_size, self.hidden_size)
        return hidden

Could you post the code, which raises the error, please?
out and hidden should both be tensors, so I’m not sure where the tuple is coming from.

So the error occurred at the torch.max line during training. Could you please help me with it? Thank you very much.

X = torch.Tensor(train_input_list)

class RNN(nn.Module):
    def __init__(self, num_classes, input_size, hidden_size, num_layers, output_size):
        super(RNN, self).__init__()
        self.num_classes = num_classes
        self.num_layers = num_layers
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.out = torch.nn.Linear(hidden_size, input_size)
        self.rnn = nn.RNN(4, 5, 2, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        batch_size = x.size(0)
        hidden = self.init_hidden(batch_size)
        out, hidden = self.rnn(x, hidden)
        out = out.view(-1, self.hidden_size)
        out = self.fc(out)
        return out, hidden

    def init_hidden(self, batch_size):
        hidden = torch.zeros(self.num_layers, batch_size, self.hidden_size)
        return hidden

rnn = RNN(num_classes, input_size, hidden_size, num_layers, 1)
loss_func = nn.CrossEntropyLoss()
for epoch in range(num_epoch):
    Y_pred = rnn(X)
    out, predicted = torch.max(F.softmax(Y_pred, 1), 1)

Y_pred will be a tuple, since your model returns two tensors, out and hidden.
Assuming you would only like to use out to calculate the prediction, you could use:

out, predicted = torch.max(F.softmax(Y_pred[0], 1), 1)

Unrelated to this error, but note that nn.CrossEntropyLoss expects raw logits as the model output, so you should not apply softmax or max on the output to calculate the loss. :wink:
I assume you are using this line of code for debugging/printing only.
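
For reference, a sketch of the two separate paths, using the names from your snippet: the raw logits go to the loss, and argmax is used only to inspect predictions.

Y_pred, hidden = rnn(X)                  # unpack the tuple; Y_pred holds raw logits
loss = loss_func(Y_pred, Y)              # CrossEntropyLoss applies log_softmax internally
predicted = torch.argmax(Y_pred, dim=1)  # class indices, for printing/metrics only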

I calculate my loss like this
loss = loss_func(Y_pred, Y)
Yeah, the error did occur when calculating the loss. So would you please suggest how should I solve the problem?

Y_pred will be a tuple, already mentioned. :wink:
If you want to use the out tensor as the model output, you should use loss_func(Y_pred[0], Y).

Thank you very much. This error has been fixed now. But when I try running my code, it seems there is some problem with CrossEntropyLoss similar to what you solved previously: IndexError: Target 1 is out of bounds… I think I defined the data types right, but it still does not work somehow.

How did you specify output_size while creating the model?
output_size should be the number of classes you are working with, and the target should contain indices in the range [0, nb_classes-1].
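
A self-contained sketch of that requirement (the sizes here are hypothetical):

import torch
import torch.nn as nn

num_classes = 3
hidden_size = 5
fc = nn.Linear(hidden_size, num_classes)      # final layer: one logit per class
logits = fc(torch.randn(4, hidden_size))      # shape [batch, num_classes]
target = torch.tensor([0, 2, 1, 1])           # all indices in [0, num_classes - 1]
loss = nn.CrossEntropyLoss()(logits, target)  # works; a target of 3 would be out of bounds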

Yes, that is the problem. Thank you so much.

I am also getting the same problem during prediction. Could you please help:

test_dl = DataLoader(tst_data, batch_size=64, shuffle=False)
test = []
print('Predicting on test dataset')
for batch, _ in tst_data:  # test_dl:
    # batch = batch.permute(0, 2, 1)
    batch = batch.to(device)
    print(batch.shape)
    out = model.to(device)
    y_hat = F.log_softmax(out, dim=1).argmax(dim=1)  # <- at this line I am receiving an error
    test += y_hat.tolist()

class LSTMClassifier(nn.Module):
    def __init__(self, input_dim, hidden_dim, layer_dim, output_dim):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.layer_dim = layer_dim
        self.lstm = nn.LSTM(input_dim, hidden_dim, layer_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_dim)
        self.batch_size = None
        self.hidden = None

    def forward(self, x):
        h0, c0 = self.init_hidden(x)
        print(x.size())
        out, (hn, cn) = self.lstm(x, (h0, c0))
        out = self.fc(out[:, -1, :])
        return out

    def init_hidden(self, x):
        h0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim)
        c0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim)
        print(h0.shape)
        print(x.size(0))
        print(layer_dim)
        return [t.to(device) for t in (h0, c0)]

Thank you in advance!

Error: AttributeError: 'LSTMClassifier' object has no attribute 'log_softmax'

Based on the error message it seems that you are trying to call log_softmax on the LSTMClassifier directly:

model = LSTMClassifier(args)
model.log_softmax

which won’t work.
In your code snippet you are however using F.log_softmax, so could you check if you've replaced import torch.nn.functional as F with F = model?
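
For comparison, the intended pattern is to call the model first and apply F.log_softmax to the tensor it returns. A sketch, assuming the constructor arguments from the snippet above:

import torch.nn.functional as F

model = LSTMClassifier(input_dim, hidden_dim, layer_dim, output_dim).to(device)
out = model(batch)                               # out is a logit tensor
y_hat = F.log_softmax(out, dim=1).argmax(dim=1)  # now log_softmax receives a tensor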

I was not calling the model and was directly calculating y_hat. So I changed it to:

test_dl = DataLoader(tst_data, batch_size=64, shuffle=False)
test = []
print('Predicting on test dataset')
model = model.to(device)
for batch, _ in tst_data:
    batch = batch.to(device)
    out = model.train()(batch)
    y_hat = F.log_softmax(out, dim=1).argmax(dim=1)

However, I got another issue now:
<torch.utils.data.dataset.TensorDataset object at 0x7f0d61a87490>
Predicting on test dataset
torch.Size([1, 4000, 500])
4000
1
torch.Size([4000])

RuntimeError                              Traceback (most recent call last)
in ()
      6 for batch, _ in tst_data:
      7     batch = batch.to(device)
----> 8     out = model.train()(batch)
      9     y_hat = F.log_softmax(out, dim=1).argmax(dim=1)
     10

5 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/rnn.py in check_input(self, input, batch_sizes)
    201             raise RuntimeError(
    202                 'input must have {} dimensions, got {}'.format(
--> 203                     expected_input_dim, input.dim()))
    204         if self.input_size != input.size(-1):
    205             raise RuntimeError(

RuntimeError: input must have 3 dimensions, got 1

Could you please look.

The new error is raised by an RNN, as a 3-dimensional input is required as described in the docs, so you would need to pass it as [seq_len, batch_size, features] in the default setting, or [batch_size, seq_len, features] if batch_first is set to True.
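
A minimal sketch of the shape requirement with batch_first=True (the sizes here are hypothetical):

import torch
import torch.nn as nn

rnn = nn.RNN(input_size=500, hidden_size=5, num_layers=1, batch_first=True)
x = torch.randn(500)   # 1-D tensor -> raises "input must have 3 dimensions, got 1"
x = x.view(1, 1, 500)  # reshape to [batch_size, seq_len, features]
out, hidden = rnn(x)   # works now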

Thank you. It works :slight_smile:

Traceback (most recent call last):
  File "/home/pxg/DAN/DANet/experiments/segmentation/train.py", line 282, in <module>
    trainer.training(epoch)
  File "/home/pxg/DAN/DANet/experiments/segmentation/train.py", line 214, in training
    loss = self.criterion(outputs, target)
  File "/home/pxg/DAN/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/pxg/DAN/DANet/encoding/parallel.py", line 130, in forward
    return self.module(inputs, *targets[0], **kwargs[0])
  File "/home/pxg/DAN/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/pxg/DAN/DANet/encoding/nn/loss.py", line 68, in forward
    return super(SegmentationLosses, self).forward(*outputs)
  File "/home/pxg/DAN/venv/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 904, in forward
    ignore_index=self.ignore_index, reduction=self.reduction)
  File "/home/pxg/DAN/venv/lib/python3.6/site-packages/torch/nn/functional.py", line 1970, in cross_entropy
    return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
  File "/home/pxg/DAN/venv/lib/python3.6/site-packages/torch/nn/functional.py", line 1295, in log_softmax
    ret = input.log_softmax(dim)
AttributeError: 'tuple' object has no attribute 'log_softmax'

Hello, can you help me with my question? Thank you very much.

input seems to be a tuple while you are expecting a tensor on which you could use .log_softmax. I’m not familiar with your use case and thus don’t know what input contains but you might want to unwrap or index it instead before using log_softmax.
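
One hypothetical way to unwrap it before the call, assuming the first element of the tuple is the prediction tensor:

if isinstance(input, tuple):  # e.g. (main_output, aux_output)
    input = input[0]          # keep only the tensor the loss expects
ret = input.log_softmax(dim=1)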

def forward(self, *inputs):
    if not self.se_loss and not self.aux:
        return super(SegmentationLosses, self).forward(*inputs)
    elif not self.se_loss:
        pred1, pred2, target = tuple(inputs)
        loss1 = super(SegmentationLosses, self).forward(pred1, target)
        loss2 = super(SegmentationLosses, self).forward(pred2, target)
        return loss1 + self.aux_weight * loss2

Thank you for your reply. This is the code at the last position in the traceback. Can you help me see where the problem is?

The previous error points to:

loss = self.criterion(outputs, target)

which might be used inside one of the forward methods, so you would have to check them.
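
If outputs turns out to be a tuple at that point, indexing it before the loss call would be one option (hypothetical, since the surrounding training code isn't shown):

# hypothetical: pass only the main prediction tensor to the criterion
loss = self.criterion(outputs[0], target)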