Hey, guys! I'm using Google Colab and I'm facing an error I don't know how to fix. Can you help me solve it: "ValueError: only one element tensors can be converted to Python scalars"
You are trying to convert the output of your model to a Python scalar with .item(), but it complains because the tensor has more than one element, so it cannot do that.
What is the size of your model's output? What are you trying to achieve with this .item() call?
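To make the failure mode concrete, here is a minimal sketch (the tensor values are made up for illustration): .item() only succeeds on a tensor with exactly one element, and .tolist() is the usual way to convert a multi-element tensor.

```python
import torch

out = torch.tensor([1.0, 2.0, 3.0])

# .item() only works on single-element tensors; on this 3-element tensor
# it raises (the exact exception type varies by PyTorch version):
try:
    out.item()
except (ValueError, RuntimeError) as e:
    print("item() failed:", e)

# A single-element tensor converts fine:
single = torch.tensor([4.0])
print(single.item())   # 4.0

# For multi-element tensors, convert the whole tensor instead:
print(out.tolist())    # [1.0, 2.0, 3.0]
```

So the first thing to check is the shape of whatever you call .item() on.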
I want to predict a product's price over the 24 hours of one day. My dataset has 1488 records; I'm using 1000 values for training and 488 to test my model.
Initially the test_inputs list will contain 24 items. Inside a for loop these 24 items will be used to make a prediction about the first item from the test set, for instance item number 1001. The predicted value will then be appended to the test_inputs list. During the second iteration, the last 24 items will again be used as input and a new prediction will be made, which will then be appended to the test_inputs list again.
model.eval()
for i in range(fut_pred):
    seq = torch.FloatTensor(test_inputs[-train_window:])
    with torch.no_grad():
        model.hidden = (torch.zeros(1, 1, model.hidden_layer_size),
                        torch.zeros(1, 1, model.hidden_layer_size))
        test_inputs.append(model(seq).item())
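One common cause of the error in a loop like this: the model returns one value per time step rather than a single value, so model(seq) has more than one element. A minimal sketch of the idea (the 24-element output tensor here is hypothetical, standing in for your model's output):

```python
import torch

# Hypothetical model output: one value per time step of the 24-step window,
# mimicking an LSTM that returns a prediction for every input step.
out = torch.randn(24)

# out.item() would fail here, since out has 24 elements.
# If only the final step's prediction should be appended, index first:
prediction = out[-1].item()
print(type(prediction))  # <class 'float'>
```

If that is what is happening, printing model(seq).shape inside the loop will confirm it.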
Hi,
I have the same error, but I found that it is generated when training reaches the last epoch.
This is the line where the error is raised:
""" dae.fit(trloader, valoader, lr=lr, batch_size=batch_size, num_epochs=num_epochs, corrupt=corrupt, loss_type="mse") """
I need some help, please.
""" the full error """
stackedDAE.py", line 89, in pretrain
    dae.fit(trloader, valoader, lr=lr, batch_size=batch_size, num_epochs=num_epochs, corrupt=corrupt, loss_type="mse")
denoisingAutoencoder.py", line 95, in fit
    inputs = np.reshape(inputs, [-1, 512])
  File "<__array_function__ internals>", line 6, in reshape
PycharmProjects/test/venv/lib/python3.6/site-packages/numpy/core/_asarray.py", line 85, in asarray
    return array(a, dtype, copy=False, order=order)
ValueError: only one element tensors can be converted to Python scalars
“”" the code when the error is generated :
def fit(self, trainloader, validloader, lr=0.001, batch_size=128, num_epochs=10, corrupt=0.3,
        loss_type="mse"):
    """
    data_x: FloatTensor
    valid_x: FloatTensor
    """
    use_cuda = torch.cuda.is_available()
    if use_cuda:
        self.cuda()
    print("=====Denoising Autoencoding layer=======")
    optimizer = optim.Adam(filter(lambda p: p.requires_grad, self.parameters()), lr=lr)
    if loss_type == "mse":
        criterion = MSELoss()
    elif loss_type == "cross-entropy":
        criterion = BCELoss()
The dataset is a NumPy array with shape (5850, 256, 2).
If I use `for batch_idx, (inputs, outputs) in enumerate(validloader):` instead, another error is raised; I tested this.
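For what it's worth, this traceback pattern often means np.reshape was handed a list (or batch) of multi-element PyTorch tensors: NumPy then tries to convert each tensor to a single Python scalar and fails. A sketch of the workaround, assuming that is the situation (the sizes here are hypothetical, chosen so 256 × 2 = 512 matches the reshape in the traceback):

```python
import torch

# Hypothetical batch: a list of 4 tensors of shape (256, 2), i.e. 512 values each.
inputs = [torch.randn(256, 2) for _ in range(4)]

# np.reshape(inputs, [-1, 512]) can fail here, because NumPy may try to turn
# each 512-element tensor into a single Python scalar.

# Combining into one tensor first avoids the per-tensor scalar conversion:
stacked = torch.stack(inputs)                  # shape (4, 256, 2)
reshaped = stacked.numpy().reshape(-1, 512)    # shape (4, 512)
print(reshaped.shape)  # (4, 512)
```

So it may be worth checking what type `inputs` actually has at line 95 of denoisingAutoencoder.py before the reshape.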
This is the entire code for the denoising autoencoder:
import os
import argparse
import torch
import torch.nn as nn
from torch.nn import Parameter
import torch.nn.functional as F
import torch.optim as optim
import torchvision
from torchvision import datasets, transforms
from torch.autograd import Variable
import tensorflow as tf
import numpy as np
import pandas as pd
import math
from udlp.utils import Dataset, masking_noise
from udlp.ops import MSELoss, BCELoss
This is my dataset-loading code:
def load_data(root_folder):
    # Declarations of variables used inside the for loop
    final_list = list()
    labels = list()
    k = 0
    # Iterate through both categories, Fatigue and Non Fatigue
    for cat in category:
        # Take and process each file in the folder
        for filename in os.listdir(root_folder + cat):
            # Read each file for each category and drop the unnecessary columns
            path = root_folder + cat + '/' + filename
            df = pd.read_csv(path)  # Read the CSV using the built-in pandas function
            # df.drop(index=0, axis=0, inplace=True)  # Drop the first row, which
            # contains the units of measurement
            df.columns = ["time", "EEGO1", "EEGO2"]  # Rename the columns for
            # convenient and easy access
            df.drop(['time'], axis=1, inplace=True)  # Drop the time column, as we
            # are not treating the data as a time series
            # The data is read as objects by default; convert each column to
            # numeric (floating point)
            df.EEGO1 = pd.to_numeric(df.EEGO1)
            df.EEGO2 = pd.to_numeric(df.EEGO2)
            print(filename, len(df))
            # Split each file into 450 sets (115200/256 = 450), then make each of
            # them a new row
            df_split = np.array_split(df, 450)
            for splitted_array in df_split:
                # After splitting, append all the split arrays into one large
                # 3-dimensional list
                final_list.append(np.array(splitted_array))
                # The following if blocks create the labels: '0' for Fatigue EEG
                # and '1' for Non Fatigue EEG. This is not the ideal way to create
                # labels, but it is the simplest for this situation
                if cat == 'Fatigue':
                    labels.append(0)
                if cat == 'Non Fatigue':
                    labels.append(1)
            # print(tf.concat(labels), [0, 2995200])
            # print(type(labels))
    # Before returning, convert the lists to arrays and expand the label dimensions
    # so they can be fed into the neural network
    return np.array(final_list), np.expand_dims(np.array(labels), axis=1)
if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='VAE MNIST Example')
    parser.add_argument('--lr', type=float, default=0.002, metavar='N',
                        help='learning rate for training (default: 0.002)')
    parser.add_argument('--batch-size', type=int, default=128, metavar='N',
                        help='input batch size for training (default: 128)')
    parser.add_argument('--pretrainepochs', type=int, default=10, metavar='N',
                        help='number of epochs to pretrain (default: 10)')
    parser.add_argument('--epochs', type=int, default=10, metavar='N',
                        help='number of epochs to train (default: 10)')
    args = parser.parse_args()
As far as I can see, there is something wrong with X_train and X_test. Can you print the dimensions of X_train and X_test, please? Sorry, it is not as straightforward as I thought.