RuntimeError: mat1 and mat2 must have the same dtype

Can anyone please suggest a solution?

You are running into the same issue as before so please refer to my previous post.

If you get stuck, post a minimal and executable code snippet by wrapping it into three backticks ``` as your current code is not properly formatted and cannot be executed.
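A minimal sketch of the usual cause of this error: the model's parameters are `float32` by default, while the input data (e.g. converted from a NumPy array) is `float64`. The model and shapes below are placeholders, not the poster's actual code:

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 4)                      # parameters are float32 by default
x = torch.randn(2, 8, dtype=torch.float64)   # e.g. data converted from NumPy

try:
    model(x)
except RuntimeError as e:
    print(e)  # mat1 and mat2 must have the same dtype

# Fix: cast the input to the parameter dtype
out = model(x.float())
print(out.dtype)
```

Alternatively, cast the model to `float64` with `model.double()` if the extra precision is actually needed.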

Thank you for your suggestion @ptrblck. I have modified the in_features value and changed the dtype of the fc layer's input so that it matches the dtype of the fc weights, as mentioned in my previous posts. But although mat1 and mat2 now have the same dtype (both float32), it still raises a runtime error: "RuntimeError: mat1 and mat2 shapes cannot be multiplied (480x8568 and 480x8568)".
Can you please suggest the solution or root cause? Thank you.

Isolate which layer fails and set the in_features to 8568 as already explained.
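A minimal sketch of why this shape error appears and how setting `in_features=8568` resolves it, assuming the flattened input has shape `[480, 8568]` (the layer sizes here are illustrative):

```python
import torch
import torch.nn as nn

x = torch.randn(480, 8568)  # batch of 480 samples, 8568 features each

# Wrong: in_features=480 gives a weight of shape [8568, 480], so the
# internal matmul is (480x8568) @ (480x8568), which cannot be multiplied
bad = nn.Linear(in_features=480, out_features=8568)
try:
    bad(x)
except RuntimeError as e:
    print(e)  # mat1 and mat2 shapes cannot be multiplied (480x8568 and 480x8568)

# Right: in_features must equal the last dimension of the input
good = nn.Linear(in_features=8568, out_features=10)
out = good(x)
print(out.shape)
```

Note that in the error message both operands show the same shape because `nn.Linear` multiplies by the transposed weight; the fix is still to match `in_features` to the input's feature dimension.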

```python
for epoch in range(num_epochs):
    for i, (data, labels) in enumerate(train_loader):
        # origin shape: [50, 256]
        # data = torch.tensor(data, dtype=torch.float32)
        print('data type is', type(data))
        # labels = labels.to(device)
        # print(i, "===")
        # print(data, "----")
        # print(labels, "****")

        # Forward pass
        outputs = model(data)
        loss = criterion(outputs, labels)
```

I am also getting the same issue.

I am doing an eigendecomposition of a matrix and multiplying the eigenvectors with the weights (or any other variable that should remain real), doing some processing, and then multiplying by the conjugate again, which should in turn produce a fully real matrix. But the multiplication itself fails with this error:
RuntimeError('expected mat1 and mat2 to have the same dtype, but got: c10::complex<float> != float')

I know that the eigendecomposition can sometimes return complex eigenvectors, but then how can I multiply them with this real weight tensor?

Addition:
I printed the dtypes of the matrices and they were torch.float32 and torch.complex64.
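A hypothetical sketch of the pattern described (the matrices here are random placeholders): `torch.linalg.eig` always returns complex eigenvectors, and `matmul` does not promote dtypes, so the real tensor has to be cast to complex before multiplying; the real part can be taken after the conjugate transform is applied. If the original matrix is real symmetric, `torch.linalg.eigh` returns real eigenvectors and avoids complex dtypes entirely.

```python
import torch

torch.manual_seed(0)
A = torch.randn(4, 4)               # generic real matrix
evals, evecs = torch.linalg.eig(A)  # eigenvectors come back as complex64
W = torch.randn(4, 4)               # real float32 weight tensor

# Direct matmul fails: complex64 vs float32 are not promoted
try:
    evecs @ W
except RuntimeError as e:
    print(e)

# Fix: promote the real tensor to the complex dtype before multiplying,
# then take the real part once the full similarity transform is done
out = evecs @ W.to(evecs.dtype) @ torch.linalg.inv(evecs)
print(out.dtype)        # complex
print(out.real.dtype)   # float32
```

The imaginary part of the final result should be near zero (up to floating-point error) when the transform is mathematically real, so `out.real` recovers the real matrix.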