'Tensor' object is not callable. How can I handle this, folks?

TypeError: 'Tensor' object is not callable

How can I handle this, folks?

Could you post a code snippet throwing this error?

I would generally recommend using the factory method torch.tensor instead of torch.Tensor, since the latter will return uninitialized values if you pass a tensor shape.
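A minimal sketch of the difference (shapes and values chosen only for illustration):

import torch

# torch.Tensor(2, 3) interprets the arguments as a shape and returns
# uninitialized memory, so the values are arbitrary
x = torch.Tensor(2, 3)

# torch.tensor copies the data you pass in
y = torch.tensor([[1., 2., 3.], [4., 5., 6.]])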

the code is following:
if (epoch+1) % 20 == 0:
    print('Epoch [{}/{}], loss: {:.6f}'
          .format(epoch+1, num_epochs, loss.data()))

.data is an attribute, not a method, so you would have to remove the parentheses.
However, the usage of .data is generally not recommended anymore and you should use .item() instead.
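For example (the value is illustrative):

import torch

loss = torch.tensor(0.1234)

print(loss.data)    # attribute access, no parentheses: prints tensor(0.1234)
print(loss.item())  # preferred: returns the value as a plain Python float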

Wow, great. It's really a great help to me, thanks a lot.
When I removed the parentheses, it really worked!! Bravo

When I execute the testing code, an error occurs.

 File "/home/mamingrui/code/MyOwn/train_v1.py", line 61, in train
    grad_loss = grad_loss(flow)
TypeError: 'Tensor' object is not callable

This is the loss function.

def gradient_loss(s, penalty='l2'):
    dy = torch.abs(s[:, :, 1:, :, :] - s[:, :, :-1, :, :])
    dx = torch.abs(s[:, :, :, 1:, :] - s[:, :, :, :-1, :])
    dz = torch.abs(s[:, :, :, :, 1:] - s[:, :, :, :, :-1])

    if (penalty == 'l2'):
        dy = dy * dy
        dx = dx * dx
        dz = dz * dz

    d = torch.mean(dx) + torch.mean(dy) + torch.mean(dz)
    return d / 3.0

And this is the training loop.

    for epoch in range(iters):
        start_time = time.time()
        loss_batch = 0

        for batch_moving in train_set:
            print("^^^^^^moving size is {}^^^^^^".format(batch_moving.size()))
            wrap, flow = model(batch_moving, fixed)
            recon_loss = similarity_loss(wrap, fixed)
            grad_loss = grad_loss(flow)
            loss = recon_loss + reg_param * grad_loss

      
            loss_batch += loss

            print("The loss without average is {}".format(loss))

            opt.zero_grad()
            loss.backward()
            opt.step()

How can I fix this bug? Much appreciated!

You are reusing the variable name grad_loss, which will overwrite the function name here:

grad_loss = grad_loss(flow)

Assign the return value to grad_loss_value or any other unused name.
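A minimal sketch of the fix inside the training loop above, assuming grad_loss was previously bound to the gradient_loss function:

            # store the result under a new name so the function reference stays intact
            grad_loss_value = grad_loss(flow)
            loss = recon_loss + reg_param * grad_loss_value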


WOW! It worked well! Thanks a lot ^-^

@ptrblck
I am facing the same issue while implementing TableNET, please help me out.
Reference I took: Extract Tables from Images Using Tablenet — An End-to-End Solution | by Namratesh Shrivastav | DataDrivenInvestor
Here are the code snippets.

Error when I call it.

It seems you are using Keras, so I would recommend posting the question in their discussion board or on StackOverflow, as you might find Keras experts there. :wink:
Based on the error message, I guess you are trying to call a module while it's actually a tensor.
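For illustration in PyTorch terms, a small sketch of calling a module versus (incorrectly) calling a tensor:

import torch
import torch.nn as nn

layer = nn.Linear(4, 2)   # modules are callable
x = torch.randn(1, 4)
out = layer(x)            # works: calls the module's forward

out(x)                    # raises TypeError: 'Tensor' object is not callable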

The tensor object is not callable. I got an error while trying to calculate the accuracy.
I am new to PyTorch.

def accuracy(outputs, labels):
    _, preds = torch.max(outputs, dim=1)
    return torch.tensor(torch.sum(preds == labels).item() / len(preds))

n_total_steps = len(train_loader)
for epoch in range(epochs):
    for i, (images, labels) in enumerate(train_loader):
       
        images = images.to(DEVICE)
        labels = labels.to(DEVICE)

        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)
        accuracy = accuracy(outputs, labels)
        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if (i+1) % 185 == 0:
            print (f'Epoch [{epoch+1}/{epochs}], Step [{i+1}/{n_total_steps}], Loss: {loss.item():.4f}, accuracy: {accuracy:.2f}')

You are defining accuracy as a function name and are then assigning the result to a tensor with the same name. In the next iteration accuracy would thus be a tensor (not the function anymore) and the error is raised. Change either the function or tensor name and it should work.
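A minimal sketch of that rename inside the loop above (the name acc is just a suggestion):

        # keep `accuracy` pointing at the function; store the result under a new name
        acc = accuracy(outputs, labels)

        if (i+1) % 185 == 0:
            print(f'Epoch [{epoch+1}/{epochs}], Step [{i+1}/{n_total_steps}], '
                  f'Loss: {loss.item():.4f}, accuracy: {acc:.2f}')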

Hey, @ptrblck Thanks.
I got it.

Hello everyone,
I get this error when I use SGD with the loss function below. Please help me…

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.002, momentum=0.9)

num_epochs = 10
losses_SGD = []
for epoch in range(num_epochs):
    for i, (inputs, targets) in enumerate(train_dl):
        inputs = to_var(inputs)
        targets = to_var(targets)
        
        # forward pass
        optimizer.zero_grad()
        outputs = model(inputs)

        # loss
        loss = criterion(outputs, targets)
        losses += [loss.data]
        # backward pass
        loss.backward()

        # update parameters
        optimizer.step()

        # report
        if (i + 1) % 50 == 0:
            print('Epoch [%2d/%2d], Step [%3d/%3d], Loss: %.4f'
                  % (epoch + 1, num_epochs, i + 1, len(train_ds) // batch_size, loss.data))

I’m not sure where the “‘Tensor’ object is not callable” error is raised, but your code will fail in:

losses_SGD += [loss.item_()]

since tensor.item_() is not a valid method:

criterion = nn.CrossEntropyLoss()
output = torch.randn(1, 10, requires_grad=True)
target = torch.randint(0, 10, (1,))
loss = criterion(output, target)
loss.item_()
# > AttributeError: 'Tensor' object has no attribute 'item_'
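As a side note, a working pattern (sketch) for recording the running loss inside the loop above would be to append loss.item(), which returns a plain Python float:

losses_SGD.append(loss.item())  # .item() is a valid method and stores a plain float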

Oh! Sorry!! The code was modified. :point_up_2:
In this code, in line

losses += [loss.data]

the error is raised.
Thanks

It was miraculously solved! :star_struck:

Hi, I am getting this error with an object not being callable.

Judging by the line, it appears that target is a tensor, not the function/callable returning a tensor it is being treated as.