# This is my train loader variable.
train_loader = DataLoader(dataset=dataset, batch_size=40, shuffle=False)

for epoch in range(num_epochs):
    for i in enumerate(train_loader):
        t = train_loader(-1, 1).to(device)
        outputs = model(t)
        loss = criterion(outputs)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if (i + 1) % 100 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                  .format(epoch + 1, num_epochs, i + 1, total_step, loss.item()))
This is sample data:
13 0 -1
13 0 -1
13 0 -1
16 0 -1
12 0 -1
I converted them to tensors and want to train on the data by passing it to the model,
but I'm unable to load the data into the model.
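One way to get rows like the sample above into a `DataLoader` is to wrap them in a `TensorDataset`. This is only an illustrative sketch: which columns are features and which is the label is an assumption here (last column taken as the label), not something stated in the post.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Sample rows from above; treating the last column as the label is an assumption.
rows = [
    [13, 0, -1],
    [13, 0, -1],
    [13, 0, -1],
    [16, 0, -1],
    [12, 0, -1],
]
data = torch.tensor(rows, dtype=torch.float32)
features = data[:, :2]   # first two columns as features -> shape (5, 2)
labels = data[:, 2]      # last column as labels -> shape (5,)

dataset = TensorDataset(features, labels)
train_loader = DataLoader(dataset, batch_size=2, shuffle=False)
```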
As @damicoedoardo mentioned, you should iterate over the dataloader as:

for i, batch in enumerate(train_loader):
You should pass only the features of the batch to the model, not the whole batch. In a typical supervised scenario you will have len(batch) = 2, which means features = batch[0] and labels = batch[1], and you calculate predictions as outputs = model(features).
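A quick way to see this batch structure is to pull one batch out of a loader; the toy shapes below are arbitrary and only for illustration:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset: 10 examples, 4 features each, binary class labels (assumed sizes).
dataset = TensorDataset(torch.randn(10, 4), torch.randint(0, 2, (10,)))
train_loader = DataLoader(dataset, batch_size=5)

batch = next(iter(train_loader))
features, labels = batch[0], batch[1]   # batch is a 2-element list
```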
I don't see how you are minimizing the loss function here without passing labels; it should be something like loss = criterion(outputs, labels). Can you tell us which criterion you are using?
And by calling .reshape(-1, 1) I think you are trying to reshape the labels (or maybe the features) to (-1, 1). If your criterion is torch.nn.CrossEntropyLoss(), the labels should have shape (-1), that's it; no reshaping is required for the labels there.
Also, the features' shape should be (-1, n), where n is the number of features in a single training example.
If you want to reshape, you can do:

features = features.reshape(-1, n)  # only if features are not already this shape (put the value of n here)
labels = labels.reshape(-1, 1)      # only if labels are not already this shape
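To make the shape point concrete for torch.nn.CrossEntropyLoss: its inputs are logits of shape (batch, num_classes) and 1-D integer class indices of shape (batch,). The sizes below are made up for the sketch:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(4, 3)            # model output: (batch, num_classes)
labels = torch.tensor([0, 2, 1, 0])   # class indices, shape (4,) -- not (4, 1)
loss = criterion(logits, labels)
```

Reshaping the labels to (-1, 1) here would break the expected target shape, which is why no reshaping is needed for CrossEntropyLoss labels.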
So your final training loop should look like:

for epoch in range(num_epochs):
    for i, batch in enumerate(train_loader):
        features = batch[0]
        labels = batch[1]
        features = features.reshape(-1, n)  # only if features are not already this shape; put the value of n here!
        labels = labels.reshape(-1, 1)      # only if labels are not already this shape
        outputs = model(features)
        loss = criterion(outputs, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()