AttributeError: 'numpy.ndarray' object has no attribute 'dim'

Hello Team,

I am new to the forum and to PyTorch. I want to predict 3 real values Y1, Y2 & Y3 from input values X1, X2…X10 using the model below, but I get the error in the title:

File "/home/abdelmoula/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 833, in linear
    if input.dim() == 2 and bias is not None:

AttributeError: 'numpy.ndarray' object has no attribute 'dim'

Can you please advise on what is wrong? Thank you.
########################

MODEL:

import pandas as pd
import torch
from torch.autograd import Variable

dataset = pd.read_csv('Welding.csv')
dataset_test = pd.read_csv('Welding_test.csv')
x = dataset.iloc[:, :-3].values
y = dataset.iloc[:, -3:].values
x_test = dataset.iloc[:, :].values

model = torch.nn.Sequential(
    torch.nn.Linear(10, 20),
    torch.nn.ReLU(),
    torch.nn.Linear(20, 3),
)
loss_fn = torch.nn.MSELoss(size_average=False)

learning_rate = 1e-2
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
for t in range(5000):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    print(t, loss.data[0])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(model(x_test))

You have to cast the numpy array into a Tensor and then wrap it into a Variable with x = Variable(torch.from_numpy(x)).
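For example, converting the arrays from the snippet above before the training loop could look roughly like this (a sketch following the Variable-based API used in the snippet; the .float() casts are an assumption, since nn.Linear expects float32 while pandas .values often yields float64):

x = Variable(torch.from_numpy(x).float())
y = Variable(torch.from_numpy(y).float())
x_test = Variable(torch.from_numpy(x_test).float())

y_pred = model(x)  # works now, since x is a tensor and has a .dim() method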


Hello Team,
I have the same problem in my implementation of an autoencoder.
This is the AE class:
import torch.nn as nn
import torch

class AE(nn.Module):
    def __init__(self):
        super(AE, self).__init__()
        input_shape = 256
        self.encoder_hidden_layer = nn.Linear(input_shape, 10)
        self.encoder_output_layer = nn.Linear(10, 2)
        self.decoder_hidden_layer = nn.Linear(2, 10)
        self.decoder_output_layer = nn.Linear(10, input_shape)
        self.encoder_hidden_layer1 = nn.Linear(input_shape, 10)
        self.encoder_output_layer1 = nn.Linear(10, 2)
        self.decoder_hidden_layer1 = nn.Linear(2, 10)
        self.decoder_output_layer1 = nn.Linear(10, input_shape)

    def forward(self, features):
        # first autoencoder
        activation = self.encoder_hidden_layer(features)
        activation = torch.relu(activation)
        code = self.encoder_output_layer(activation)
        code = torch.relu(code)
        activation = self.decoder_hidden_layer(code)
        activation = torch.relu(activation)
        activation = self.decoder_output_layer(activation)
        reconstructed = torch.sigmoid(activation)

        # second autoencoder, fed with the first reconstruction
        activation1 = self.encoder_hidden_layer1(reconstructed)
        activation = torch.relu(activation1)
        code = self.encoder_output_layer1(activation)
        code = torch.relu(code)
        activation = self.decoder_hidden_layer1(code)
        activation = torch.relu(activation)
        activation = self.decoder_output_layer1(activation)
        reconstructed = torch.sigmoid(activation)
        return reconstructed

When I train the model, the error is raised in the first line of the forward function.
How can I solve this problem, please?

It seems you are trying to pass a numpy array to your model, while you should use PyTorch tensors.
You could check it via:

print(type(input))

before feeding input to your model.
To transform the numpy array to a tensor use input = torch.from_numpy(input).
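A minimal, self-contained example of both the check and the conversion (the random array here is just a stand-in for your real features):

import numpy as np
import torch

input = np.random.randn(4, 256).astype(np.float32)  # stand-in for the real data
print(type(input))                # <class 'numpy.ndarray'>

input = torch.from_numpy(input)   # shares memory with the numpy array
print(type(input))                # <class 'torch.Tensor'>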

Thanks, but I still get the same error.
This is my model: I want the output of the DCT to be the input of the first autoencoder, and the output of that autoencoder to be the input of the second autoencoder.

if __name__ == '__main__':

    # Load the training and testing datasets separately by calling the
    # function for each of their root folder locations
    X_train, y_train = load_data(root_folder_train)
    X_test, y_test = load_data(root_folder_test)
    print(X_train.shape, y_train.shape)
    print(X_test.shape, y_test.shape)

    # Creating the model for EEG data
    # 1. DCT application
    tf_DCT = []
    N_samples = 256

    for i in X_train:
        tf_DCT_train = DCT_funtion(i, N_samples)
        tf_DCT.append(tf_DCT_train)
    input = torch.from_numpy(tf_DCT)
    #print(tf_DCT_train.size)
    #tf_DCT_test= DCT_funtionn(X_test,N_samples)
    print("successful step")

    # Model:
    # Application of autoencoder:
    model = AE()
    classifiier = torch.nn.Softmax(model)
    optimizer = optim.Adam(classifiier.parameters(), lr=1e-3)
    criterion = torch.nn.MSELoss()

Could you post the stack trace and check the type of all inputs?

PS: you can post code snippets by wrapping them into three backticks ```, which makes debugging a bit easier. :wink:

Here is a stack trace showing the types of all the variables:

X_train, X_test, y_train, and y_test are still numpy arrays, so you would have to transform them to tensors.
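For example (a sketch, assuming the arrays are numeric; the .float() casts are there because nn.Linear and MSELoss expect float32 by default):

import torch

X_train = torch.from_numpy(X_train).float()
X_test = torch.from_numpy(X_test).float()
y_train = torch.from_numpy(y_train).float()
y_test = torch.from_numpy(y_test).float()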

Thank you, but a new error appeared after I converted the arrays to tensors and changed the implementation of the AE.
Here is the stack trace of the error and the updated AE:

This error is raised if you try to pass a Size object as an argument when creating the layer:

x = torch.randn(10, 10)
nn.Linear(x.size(), 10)
> TypeError: new(): argument 'size' must be tuple of ints, but found element of type torch.Size at pos 2

Based on your previous code snippet, input_shape should be defined as input_shape = 256, so I assume you might have changed the code in the meantime.

PS: Please don’t post screenshots, but paste the code directly and wrap it into three backticks ```.

OK, but I want the shape of the DCT output to be the input size of the autoencoder; that's why I define input_shape as the shape of the DCT output.
What advice can you give me? Thanks a lot.

You could either unpack the size so that all of its dimensions are passed as arguments:

x = torch.randn(10, 20)
nn.Linear(*x.size())

or index the desired shapes and add the other arguments manually:

x = torch.randn(10, 20)
nn.Linear(x.size()[0], 30)
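Applied to your use case, that could look roughly like this (a sketch; dct_batch is a hypothetical stand-in for the tensor built from your DCT outputs, and dim 1 is assumed to be the feature dimension):

dct_batch = torch.randn(32, 256)                    # stand-in: (batch_size, n_features)
input_shape = dct_batch.size(1)                     # 256, a plain int, valid for nn.Linear
encoder_hidden_layer = nn.Linear(input_shape, 10)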

Thanks, but another error appeared:
AttributeError: 'int' object has no attribute 'softmax'
How do I solve this problem?

This line of code might yield errors, as you are passing the complete model as an argument to nn.Softmax, which is wrong:

classifiier= torch.nn.Softmax(model)
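As a rough sketch (assuming the intent is to train the autoencoder with the MSE criterion and only apply a softmax to classifier logits later), the optimizer should receive the model's parameters directly, and nn.Softmax should be created with a dim argument and applied to tensors, not to the model:

model = AE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = torch.nn.MSELoss()

softmax = torch.nn.Softmax(dim=1)  # applied to output tensors, not to the model
# probs = softmax(logits)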

Feel free to post executable code by wrapping it into three backticks ```, so that we can debug it.
