Expected object of scalar type Long but got scalar type Float

Here is the code I have:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import transforms
from torch.utils.data import DataLoader, Dataset, TensorDataset

bs = 1
from torchvision.models import resnet18
model = resnet18(pretrained=False)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

input = torch.rand(bs, 3, 256, 256)
target = torch.rand(bs, 1000)
print(target.size())

# target = target.long()
model.train()

# before we learn
p1 = model.fc.weight
od1 = model.state_dict()


output = model(input)
# print(output.size())
loss_fn = torch.nn.CrossEntropyLoss()
# loss_fn = torch.nn.BCEWithLogitsLoss()
loss = loss_fn(output, target)
print(loss)
if optimizer is not None:
    optimizer.zero_grad()
    # backward pass + optimize
    loss.backward()
    optimizer.step()

p2 = model.fc.weight
od2 = model.state_dict()

print(torch.equal(p1,p2))

torch.nn.CrossEntropyLoss() should be the main loss function (criterion) for ImageNet classification. I just tried to test the code and got this error, which I could not fix quickly.
What I expect at the end is that print(torch.equal(p1, p2)) will return False, meaning the parameters before and after the single optimization step are different (the model learned something).

If I use torch.nn.BCEWithLogitsLoss() instead, the code runs, but the params p1 and p2 are still equal.

What am I doing wrong?

Hi Barber!

I assume that by “got this error” you mean the error message
in your post title, “Expected object of scalar type Long but got
scalar type Float.” This was to be expected.

CrossEntropyLoss takes integer class labels for its target,
specifically of type LongTensor. In your code you are passing
it a FloatTensor.

For random “toy” data, you probably want something like
target = torch.randint(nClass, (bs,)), where
nClass is the number of classes in your classification problem.
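
For concreteness, here is a minimal sketch of that target
construction (nClass and bs are just the placeholder names
from above):

import torch

nClass = 1000  # e.g. the 1000 ImageNet classes
bs = 1

# CrossEntropyLoss expects integer class indices of dtype torch.long
# (a LongTensor), one index per sample, so the target has shape
# (bs,) rather than (bs, nClass).
target = torch.randint(nClass, (bs,))
print(target.dtype)   # torch.int64
print(target.shape)   # torch.Size([1])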

(I see that you commented out the line of code
# target = target.long(). Didn’t that fix this
particular error for you?)

(As a further note, BCEWithLogitsLoss() does take a
FloatTensor target, so it would not give you this error. But,
yes, for a multiclass – i.e., not binary – classification problem
you would want CrossEntropyLoss.)
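
To make the difference concrete, here is a small sketch (bs and
nClass are made-up values) showing the target each loss accepts:

import torch

bs, nClass = 1, 1000
logits = torch.randn(bs, nClass)

# CrossEntropyLoss: target is a LongTensor of class indices,
# shape (bs,)
ce = torch.nn.CrossEntropyLoss()
print(ce(logits, torch.randint(nClass, (bs,))))

# BCEWithLogitsLoss: target is a FloatTensor of per-class labels
# with the same shape as the logits, (bs, nClass)
bce = torch.nn.BCEWithLogitsLoss()
print(bce(logits, torch.rand(bs, nClass)))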

Good luck.

K. Frank

Thanks Frank, I see it now…

by setting

target = torch.randint(1000, (1,))
print(target.size())

instead

target = torch.rand(bs, 1000)
print(target.size())

I am saying that I will have just a single target value, drawn from the 1000 ImageNet classes.
And indeed the code works now.

(The # target = target.long() line was something I saw in other answers under a similar title; we may ignore that.)

Now the code looks like this:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import transforms
from torch.utils.data import DataLoader, Dataset, TensorDataset

bs = 1
from torchvision.models import resnet18
model = resnet18(pretrained=False)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

input = torch.rand(bs, 3, 256, 256)
target = torch.randint(1000, (1,))

model.train()

# before we learn
p1 = model.fc.weight
od1 = model.state_dict()


output = model(input)
# print(output.size())
loss_fn = torch.nn.CrossEntropyLoss()
# loss_fn = torch.nn.BCEWithLogitsLoss()
loss = loss_fn(output, target)
print(loss)
if optimizer is not None:
    optimizer.zero_grad()
    # backward pass + optimize
    loss.backward()
    optimizer.step()

p2 = model.fc.weight
od2 = model.state_dict()

print(torch.equal(p1,p2))

or I may use p1 = model.conv1.weight (and the same for p2), but print(torch.equal(p1, p2)) will still return True.

Can anyone tell me why the parameters haven’t been updated? I expected the model to learn something from the single optimizer.step(), in which case p1 should not be equal to p2.

Hello Barber!

p1 is a reference to model.fc.weight. (In Python, (essentially)
all variables are references to objects.) So when your optimizer
updates model.fc.weight in place, p1 also “changes,” in that
the object p1 refers to has changed.
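
Here is a tiny sketch of that aliasing behavior, using a plain
tensor and an in-place update in place of the optimizer:

import torch

w = torch.zeros(2)
p1 = w           # p1 is just another name for the same tensor object
w.add_(1.0)      # in-place update, like what optimizer.step() performs
print(torch.equal(p1, w))  # True -- p1 "changed" along with w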

Try setting p1 = model.fc.weight.clone(). Now p1 will refer
to a new copy of the data in model.fc.weight that won’t be
changed when model.fc.weight itself is changed.
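
With the same toy setup, clone() behaves as described:

import torch

w = torch.zeros(2)
p1 = w.clone()   # an independent copy of the current values
w.add_(1.0)      # a later in-place update does not touch the copy
print(torch.equal(p1, w))  # False -- the copy kept the old values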

Or you could print out (with enough precision to see a small
change) a couple of elements of model.fc.weight before and
after the optimization step and note that they change.
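
For example, here is a minimal, self-contained sketch of that
check, reusing the model and loss from this thread:

import torch
from torchvision.models import resnet18

torch.set_printoptions(precision=10)  # enough digits to see small updates

model = resnet18(pretrained=False)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

before = model.fc.weight[0, :3].clone()

loss = torch.nn.CrossEntropyLoss()(model(torch.rand(1, 3, 256, 256)),
                                   torch.randint(1000, (1,)))
optimizer.zero_grad()
loss.backward()
optimizer.step()

print(before)
print(model.fc.weight[0, :3])  # differs from `before` in the later digits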

Good luck.

K. Frank
