'numpy.ndarray' object has no attribute 'cuda'

When typing:

biasRestaurant = to_np(m.ib(V(topRestIdx))) # converting the torch embedding to a numpy matrix

I get this error:

AttributeError Traceback (most recent call last)
in ()
----> 1 biasRestaurant = to_np(m.ib(V(topRestIdx))) #converting the torch embedding to numpy matrix

/usr/local/lib/python3.6/dist-packages/fastai/core.py in to_gpu(x, *args, **kwargs)
     28 def to_gpu(x, *args, **kwargs):
     29     if torch.cuda.is_available():
---> 30         return x.cuda(*args, **kwargs)
     31     else:
     32         return x


AttributeError: 'numpy.ndarray' object has no attribute 'cuda'

Here is the core.py code:

from .imports import *
from .torch_imports import *

def sum_geom(a,r,n):
    return a*n if r==1 else math.ceil(a*(1-r**n)/(1-r))

conv_dict = {np.dtype('int8'): torch.LongTensor, np.dtype('int16'): torch.LongTensor,
    np.dtype('int32'): torch.LongTensor, np.dtype('int64'): torch.LongTensor,
    np.dtype('float32'): torch.FloatTensor, np.dtype('float64'): torch.FloatTensor}

def T(a):
    a = np.array(a)
    if a.dtype in (np.int8, np.int16, np.int32, np.int64):
        return torch.LongTensor(a.astype(np.int64))
    if a.dtype in (np.float32, np.float64):
        return torch.FloatTensor(a.astype(np.float32))
    raise NotImplementedError

def V_(x): return to_gpu(x, async=True) if isinstance(x, Variable) else Variable(to_gpu(x, async=True))
def V(x): return [V_(o) for o in x] if isinstance(x,list) else V_(x)

def VV_(x): return to_gpu(x, async=True) if isinstance(x, Variable) else Variable(to_gpu(x, async=True), volatile=True)
def VV(x): return [VV_(o) for o in x] if isinstance(x,list) else VV_(x)

def to_np(v):
    if isinstance(v, Variable): v=v.data
    return v.cpu().numpy()

def to_gpu(x, *args, **kwargs):
    if torch.cuda.is_available():
        return x.cuda(*args, **kwargs)
    else:
        return x

def noop(*args, **kwargs): return

def split_by_idxs(seq, idxs):
    last, sl = 0, len(seq)
    for idx in idxs:
        yield seq[last:idx]
        last = idx
    yield seq[last:]

def trainable_params_(m):
    return [p for p in m.parameters() if p.requires_grad]

def chain_params(p):
    if isinstance(p, (list,tuple)):
        return list(chain(*[trainable_params_(o) for o in p]))
    return trainable_params_(p)

def set_trainable_attr(m,b):
    m.trainable=b
    for p in m.parameters(): p.requires_grad=b

def apply_leaf(m, f):
    c = children(m)
    if isinstance(m, nn.Module): f(m)
    if len(c)>0:
        for l in c: apply_leaf(l,f)

def set_trainable(l, b):
    apply_leaf(l, lambda m: set_trainable_attr(m,b))

def SGD_Momentum(momentum):
    return lambda *args, **kwargs: optim.SGD(*args, momentum=momentum, **kwargs)

def one_hot(a,c): return np.eye(c)[a]

The cuda() method is defined for tensors, while it seems you are calling it on a numpy array.
Try to transform the numpy array to a tensor before calling tensor.cuda() via: tensor = torch.from_numpy(array).
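
For example, a minimal sketch of that conversion (the array here is just a placeholder):

import numpy as np
import torch

array = np.arange(10)                # a numpy array has no .cuda() method
tensor = torch.from_numpy(array)     # convert it to a PyTorch tensor first
if torch.cuda.is_available():
    tensor = tensor.cuda()           # now .cuda() works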

Where do I add (tensor = torch.from_numpy(array)) in the source code, please?

Hi Sarra,

If I understand it correctly, you would like to know where to add this line of code?
Try to add it right before the .cuda() call on the array.
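
For instance, a rough sketch of what that could look like inside the to_gpu function shown above (this patches the library source locally, so treat it as a workaround rather than the official fix):

def to_gpu(x, *args, **kwargs):
    # convert numpy arrays to tensors right before the .cuda() call
    if isinstance(x, np.ndarray):
        x = torch.from_numpy(x)
    if torch.cuda.is_available():
        return x.cuda(*args, **kwargs)
    else:
        return x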


Yes, this is my question :slight_smile: OK, I'll try.
But can I get your email or another way to contact you, please? Because I didn't know where to put it :confused:

I guess topRestIdx might be a numpy array, so you could try to call:

topRestIdx = torch.from_numpy(topRestIdx)

before using it.

Also, Variables are deprecated since PyTorch 0.4, so you can use tensors now.
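
For instance, a rough sketch based on your snippet (m.ib and topRestIdx come from your notebook, so the exact calls may differ):

idx = torch.from_numpy(topRestIdx)            # assuming topRestIdx is a numpy index array
if torch.cuda.is_available():
    idx = idx.cuda()                          # match the device of the model

emb = m.ib(idx)                               # no Variable wrapper needed since PyTorch 0.4
biasRestaurant = emb.detach().cpu().numpy()   # back to numpy without to_np/V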

If your code still doesn’t work, could you please post a small code snippet to reproduce this issue by wrapping it into three backticks ```? :slight_smile:

You saved my life :smiley: Thank you very much, sir :smiley:


Haha, good to know it’s working now! :slight_smile:


Hi, I have the same error, and when I use this instruction a new error appears: "RuntimeError: mat1 dim 1 must match mat2 dim 0".
This is my code:

for batch_idx, inputs in enumerate(validloader):
    inputs = torch.from_numpy(inputs)
    if use_cuda:
        inputs = inputs.cuda()
    inputs = Variable(inputs)
    hidden = self.encode(inputs)

Any help please, and thanks a lot.

Could you check the stack trace and see which function is raising this error?
Once you've isolated the function, could you post the shapes of all input tensors, so that we can have a look at why this issue is raised?
If you cannot isolate the function that raises the error, please post the complete stack trace here. :slight_smile:
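
For reference, a small hypothetical example of how this kind of shape mismatch usually arises with F.linear (the numbers are made up):

import torch
import torch.nn.functional as F

weight = torch.randn(256, 512)        # (out_features, in_features)
x_ok = torch.randn(4, 512)            # last dim matches in_features
print(F.linear(x_ok, weight).shape)   # torch.Size([4, 256])

x_bad = torch.randn(4, 128)           # last dim does not match in_features
# F.linear(x_bad, weight)             # raises: mat1 dim 1 must match mat2 dim 0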

This is the full code:

class DenoisingAutoencoder(nn.Module):
def __init__(self, in_features, out_features, activation="relu",
    dropout=0.2, tied=False):
    super(self.__class__, self).__init__()
    self.weight = Parameter(torch.Tensor(out_features, in_features))
    if tied:
        self.deweight = self.weight.t()
    else:
        self.deweight = Parameter(torch.Tensor(in_features, out_features))
    self.bias = Parameter(torch.Tensor(out_features))
    self.vbias = Parameter(torch.Tensor(in_features))

    if activation=="relu":
        self.enc_act_func = nn.ReLU()
    elif activation=="sigmoid":
        self.enc_act_func = nn.Sigmoid()
    self.dropout = nn.Dropout(p=dropout)

    self.reset_parameters()

def reset_parameters(self):
    stdv = 1. / math.sqrt(self.weight.size(1))
    self.weight.data.uniform_(-stdv, stdv)
    self.bias.data.uniform_(-stdv, stdv)
    stdv = 1. / math.sqrt(self.deweight.size(1))
    self.deweight.data.uniform_(-stdv, stdv)
    self.vbias.data.uniform_(-stdv, stdv)

def forward(self, x):
    return self.dropout(self.enc_act_func(F.linear(x, self.weight, self.bias)))

def encode(self, x, train=True):
    if train:
        self.dropout.train()
    else:
        self.dropout.eval()
    return self.dropout(self.enc_act_func(F.linear(x, self.weight, self.bias)))

def encodeBatch(self, dataloader):
    use_cuda = torch.cuda.is_available()
    encoded = []
    for batch_idx, inputs in enumerate(dataloader):
        inputs = np.reshape(inputs, [-1, 512])
        inputs = torch.Tensor(inputs)
        #inputs = inputs.view(inputs.size(0), -1).float()
        if use_cuda:
            inputs = inputs.cuda()
        inputs = Variable(inputs)
        hidden = self.encode(inputs, train=False)
        encoded.append(hidden.data.cpu())

    encoded = torch.cat(encoded, dim=0)
    return encoded

def decode(self, x, binary=False):
    if not binary:
        return F.linear(x, self.deweight, self.vbias)
    else:
        return F.sigmoid(F.linear(x, self.deweight, self.vbias))

def fit(self, trainloader, validloader, lr=0.001, batch_size=128, num_epochs=10, corrupt=0.3,
    loss_type="mse"):
    """
    data_x: FloatTensor
    valid_x: FloatTensor
    """
    use_cuda = torch.cuda.is_available()
    if use_cuda:
        self.cuda()
    print("=====Denoising Autoencoding layer=======")
    optimizer = optim.Adam(filter(lambda p: p.requires_grad, self.parameters()), lr=lr)
    if loss_type=="mse":
        criterion = MSELoss()
    elif loss_type=="cross-entropy":
        criterion = BCELoss()

    # validate
    total_loss = 0.0
    total_num = 0
    for batch_idx, inputs in enumerate(validloader):
        #inputs = torch.Tensor(inputs)
        #inputs = np.reshape(inputs,[-1,512])
        #inputs=torch.Tensor(inputs)
        #inputs = inputs.view(inputs.size(0), -1).float()
        inputs= torch.from_numpy(inputs)
        print(type(inputs))
        if use_cuda:
            inputs = inputs.cuda()
        inputs = Variable(inputs)
        hidden = self.encode(inputs)
        if loss_type=="cross-entropy":
            outputs = self.decode(hidden, binary=True)
        else:
            outputs = self.decode(hidden)

        valid_recon_loss = criterion(outputs, inputs)
        total_loss += valid_recon_loss.data* len(inputs)
        total_num += inputs.size()[0]

    valid_loss = total_loss / total_num
    print("#Epoch 0: Valid Reconstruct Loss: %.3f" % (valid_loss))

Thanks for the code.
It works fine with some dummy inputs:

model = DenoisingAutoencoder(10, 10)
model(torch.randn(2, 10))

Feel free to post an executable code snippet, which could reproduce the error you are seeing.

PS: you can post code snippets by wrapping them into three backticks, which makes debugging easier. :wink:

Sorry, but I can't understand the code change you are suggesting.
This is my full code for the DenoisingAutoencoder:

import torch
import torch.nn as nn
from torch.nn import Parameter
import torch.nn.functional as F
import torch.optim as optim
import torchvision
from torchvision import datasets, transforms
from torch.autograd import Variable
import tensorflow as tf
import numpy as np
import math
from udlp.utils import Dataset, masking_noise
from udlp.ops import MSELoss, BCELoss

class DenoisingAutoencoder(nn.Module):
    def __init__(self, in_features, out_features, activation="relu", 
        dropout=0.2, tied=False):
        super(self.__class__, self).__init__()
        self.weight = Parameter(torch.Tensor(out_features, in_features))
        if tied:
            self.deweight = self.weight.t()
        else:
            self.deweight = Parameter(torch.Tensor(in_features, out_features))
        self.bias = Parameter(torch.Tensor(out_features))
        self.vbias = Parameter(torch.Tensor(in_features))
        
        if activation=="relu":
            self.enc_act_func = nn.ReLU()
        elif activation=="sigmoid":
            self.enc_act_func = nn.Sigmoid()
        self.dropout = nn.Dropout(p=dropout)

        self.reset_parameters()

    def reset_parameters(self):
        stdv = 1. / math.sqrt(self.weight.size(1))
        self.weight.data.uniform_(-stdv, stdv)
        self.bias.data.uniform_(-stdv, stdv)
        stdv = 1. / math.sqrt(self.deweight.size(1))
        self.deweight.data.uniform_(-stdv, stdv)
        self.vbias.data.uniform_(-stdv, stdv)

    def forward(self, x):
        return self.dropout(self.enc_act_func(F.linear(x, self.weight, self.bias)))

    def encode(self, x, train=True):
        if train:
            self.dropout.train()
        else:
            self.dropout.eval()
        return self.dropout(self.enc_act_func(F.linear(x, self.weight, self.bias)))

    def encodeBatch(self, dataloader):
        use_cuda = torch.cuda.is_available()
        encoded = []
        for batch_idx, inputs in enumerate(dataloader):
            inputs = np.reshape(inputs, [-1, 512])
            inputs = torch.Tensor(inputs)
            #inputs = inputs.view(inputs.size(0), -1).float()
            if use_cuda:
                inputs = inputs.cuda()
            inputs = Variable(inputs)
            hidden = self.encode(inputs, train=False)
            encoded.append(hidden.data.cpu())

        encoded = torch.cat(encoded, dim=0)
        return encoded

    def decode(self, x, binary=False):
        if not binary:
            return F.linear(x, self.deweight, self.vbias)
        else:
            return F.sigmoid(F.linear(x, self.deweight, self.vbias))

    def fit(self, trainloader, validloader, lr=0.001, batch_size=128, num_epochs=10, corrupt=0.3,
        loss_type="mse"):
        """
        data_x: FloatTensor
        valid_x: FloatTensor
        """
        use_cuda = torch.cuda.is_available()
        if use_cuda:
            self.cuda()
        print("=====Denoising Autoencoding layer=======")
        optimizer = optim.Adam(filter(lambda p: p.requires_grad, self.parameters()), lr=lr)
        if loss_type=="mse":
            criterion = MSELoss()
        elif loss_type=="cross-entropy":
            criterion = BCELoss()

        # validate
        total_loss = 0.0
        total_num = 0
        for batch_idx, inputs in enumerate(validloader):
            #inputs = inputs.view(inputs.size(0), -1).float()
            inputs= np.array(torch.from_numpy(inputs))
            print(type(inputs))
            if use_cuda:
                inputs = inputs.cuda()

            inputs = Variable(inputs)
            hidden = self.encode(inputs)
            if loss_type=="cross-entropy":
                outputs = self.decode(hidden, binary=True)
            else:
                outputs = self.decode(hidden)

            valid_recon_loss = criterion(outputs, inputs)
            total_loss += valid_recon_loss.data* len(inputs)
            total_num += inputs.size()[0]

        valid_loss = total_loss / total_num
        print("#Epoch 0: Valid Reconstruct Loss: %.3f" % (valid_loss))

        for epoch in range(num_epochs):
            # train 1 epoch
            train_loss = 0.0
            for batch_idx, inputs in enumerate(trainloader):
                #inputs = inputs.view(inputs.size(0), -1).float()
                inputs= np.reshape(inputs,[-1,512])
                inputs=torch.Tensor(inputs)
                inputs_corr = masking_noise(inputs, corrupt)
                if use_cuda:
                    inputs = inputs.cuda()
                    inputs_corr = inputs_corr.cuda()
                optimizer.zero_grad()
                inputs = Variable(inputs)
                inputs_corr = Variable(inputs_corr)

                hidden = self.encode(inputs_corr)
                if loss_type=="cross-entropy":
                    outputs = self.decode(hidden, binary=True)
                else:
                    outputs = self.decode(hidden)
                recon_loss = criterion(outputs, inputs)
                train_loss += recon_loss.data*len(inputs)
                recon_loss.backward()
                optimizer.step()

            # validate
            valid_loss = 0.0
            for batch_idx,inputs in enumerate(validloader):
                inputs = np.reshape(inputs, [-1, 512])
                inputs = torch.Tensor(inputs)
                #inputs = inputs.view(inputs.size(0), -1).float()
                if use_cuda:
                    inputs = inputs.cuda()
                inputs = Variable(inputs)
                hidden = self.encode(inputs, train=False)
                if loss_type=="cross-entropy":
                    outputs = self.decode(hidden, binary=True)
                else:
                    outputs = self.decode(hidden)

                valid_recon_loss = criterion(outputs, inputs)
                valid_loss += valid_recon_loss.data * len(inputs)

            print("#Epoch %3d: Reconstruct Loss: %.3f, Valid Reconstruct Loss: %.3f" % (
                epoch+1, recon_loss , valid_recon_loss))

The model still works if I use some random values to initialize it and run a simple forward pass:

model = DenoisingAutoencoder(10, 10)
out = model(torch.randn(1, 10))

Could you post the code snippet where you are initializing the model and which input shape you are using?
If you are passing numpy arrays as the input, make sure to transform them to PyTorch tensors via torch.from_numpy.
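
As a rough sketch (X_train/X_valid are placeholders for your data; your loading code may differ), wrapping the arrays in a TensorDataset and DataLoader is one way to make sure each batch already arrives as a float tensor:

import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

X_train = np.random.rand(1000, 512).astype(np.float32)   # placeholder data
X_valid = np.random.rand(200, 512).astype(np.float32)

trainloader = DataLoader(TensorDataset(torch.from_numpy(X_train)), batch_size=128, shuffle=True)
validloader = DataLoader(TensorDataset(torch.from_numpy(X_valid)), batch_size=128)

for (inputs,) in validloader:     # each batch is already a FloatTensor
    print(inputs.shape)           # torch.Size([128, 512]) for full batches
    break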

Here is the code to train the data:
X_train, y_train = load_data(root_folder_train)
X_test, y_test = load_data(root_folder_test)
in_features = 512
out_features = 256

X_train= torch.Tensor(X_train)
X_test=torch.Tensor(X_test)
sdae = StackedDAE(input_dim=512, z_dim=10, binary=True,
    encodeLayer=[256,100,10], decodeLayer=[10,100,256], activation="relu",
    dropout=0)
sdae.pretrain(X_train,X_test, lr=args.lr, batch_size=args.batch_size,
              num_epochs=args.pretrainepochs, corrupt=0.3, loss_type="cross-entropy")
sdae.save_model("model/sdae.pt")
sdae.fit(X_train,X_test, lr=args.lr, num_epochs=args.epochs, corrupt=0.3, loss_type="cross-entropy")


In these lines of code you are transforming the tensor back to a numpy array, which would yield this error:

            inputs= np.array(torch.from_numpy(inputs))
            print(type(inputs))
            if use_cuda:
                inputs = inputs.cuda()

Remove the np.array call and just use tensors.
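
A minimal sketch of that part of the validation loop with the np.array call removed (assuming validloader yields numpy arrays; if it already yields tensors, the from_numpy branch is skipped):

for batch_idx, inputs in enumerate(validloader):
    # keep the batch as a tensor; do not convert it back to a numpy array
    if isinstance(inputs, np.ndarray):
        inputs = torch.from_numpy(inputs)
    inputs = inputs.float()
    if use_cuda:
        inputs = inputs.cuda()
    hidden = self.encode(inputs)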

PS: Variables are deprecated since PyTorch 0.4, so you can use tensors now. :wink: