Hi there,
I am training a basic VAE on tabular data (standardized integers, real numbers, binary values, and vectorized categories), and no matter what I do, my validation loss is always considerably lower than my training loss. The two also never seem to move closer to each other. Even from epoch 1:
Epoch 1 complete! Average validation Loss: 14.653213500976562
Epoch 1 complete! Average training Loss: 52.207223892211914
Is there something I am overlooking? This has been racking my brain and I do not know what to do. Also, I am not using any dropout layers, in case you were wondering.
Thank you in advance!
My training loop:
print("Start training VAE...")
model.train()
epochs = 1000
loss_list = []
val_list = []
for epoch in range(epochs):
    overall_loss = 0
    val_overall_loss = 0

    # Validation pass (no gradients)
    with torch.no_grad():
        for features in valloader:
            xval = features
            xval_hat, valmean, vallog_var = model(xval)
            valloss = loss_function(xval, xval_hat, valmean, vallog_var)
            val_overall_loss += valloss.item()
    print("\tEpoch", epoch + 1, "complete!", "\tAverage validation Loss: ", val_overall_loss / 128)
    val_list.append(val_overall_loss / 128)

    # Training pass
    for features in trainloader:
        x = features
        optimizer.zero_grad()
        x_hat, mean, log_var = model(x)
        loss = loss_function(x, x_hat, mean, log_var)
        overall_loss += loss.item()
        loss.backward()
        optimizer.step()
    print("\tEpoch", epoch + 1, "complete!", "\tAverage training Loss: ", overall_loss / 128)
    loss_list.append(overall_loss / 128)
print("Finish!!")
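One detail I was unsure about is the averaging: both loss sums are divided by the same hardcoded 128, even though trainloader and valloader can yield different numbers of batches per epoch. Here is a quick toy sanity check on that arithmetic (the batch counts and per-batch loss below are made-up numbers, not my real data):

```python
# Toy sanity check: dividing two sums by the same constant 128 when the
# loaders have different batch counts skews the comparison.

n_train_batches = 128   # pretend trainloader yields 128 batches per epoch
n_val_batches = 32      # pretend valloader yields 32 batches per epoch
per_batch_loss = 50.0   # pretend every batch produces the same loss

train_sum = per_batch_loss * n_train_batches   # 6400.0
val_sum = per_batch_loss * n_val_batches       # 1600.0

# Dividing both by a fixed 128 makes validation look 4x lower,
# even though per-batch losses are identical:
print(train_sum / 128)  # 50.0
print(val_sum / 128)    # 12.5

# Dividing each sum by its own batch count gives comparable averages:
print(train_sum / n_train_batches)  # 50.0
print(val_sum / n_val_batches)      # 50.0
```

(With a PyTorch DataLoader, `len(loader)` gives the number of batches it yields, so the sums could be divided by `len(trainloader)` and `len(valloader)` respectively.)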