Hi,
I am learning about GCNs (Graph Convolutional Networks) and I found some standard code online for training a GCN on the Cora dataset.
I got the following results for two GCN models after the validation loss converged:
6 GCN hidden layers:
Train loss: 0.0066 | Train acc: 1.0000
Val loss: 13.1973 | Val acc: 0.5040
Test loss: 10.6857 | Test acc: 0.5430
16 GCN hidden layers:
Train loss: 0.6381 | Train acc: 0.7643
Val loss: 4.1288 | Val acc: 0.2800
Test loss: 3.8559 | Test acc: 0.2910
What I don't understand is how the validation accuracy of the 16-layer model can be lower than that of the 6-layer model, even though the 16-layer model's validation loss is also lower.
Here is my function for computing the validation loss and accuracy:

def eval_step(model: torch.nn.Module, data: Data, loss_fn: LossFn, stage: Stage) -> Tuple[float, float]:
    model.eval()
    # Select the node mask for the requested split ("train", "val", or "test").
    mask = getattr(data, f"{stage}_mask")
    logits = model(data.x, data.edge_index)[mask]
    preds = logits.argmax(dim=1)
    y = data.y[mask]
    loss = loss_fn(logits, y)
    acc = accuracy(preds, y)
    return loss.item(), acc
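To make my question concrete, I put together a toy example (made-up logits, not my model's actual outputs) checking whether cross-entropy loss and argmax accuracy can even move in opposite directions:

```python
import torch
import torch.nn.functional as F

def loss_and_acc(logits, y):
    # Cross-entropy loss and argmax accuracy, as in a typical eval step.
    loss = F.cross_entropy(logits, y).item()
    acc = (logits.argmax(dim=1) == y).float().mean().item()
    return loss, acc

y = torch.tensor([0, 0, 0, 0])  # true class is 0 for all four samples

# Model A: every prediction is correct, but only barely (logit margin 0.01),
# so each sample contributes close to the maximum loss for a correct answer.
logits_a = torch.tensor([[0.01, 0.0]] * 4)

# Model B: very confident and correct on two samples, barely wrong on the
# other two, so its average loss ends up lower despite lower accuracy.
logits_b = torch.tensor([[5.0, 0.0], [5.0, 0.0], [0.0, 0.01], [0.0, 0.01]])

loss_a, acc_a = loss_and_acc(logits_a, y)  # ≈ 0.688 loss, 100% accuracy
loss_b, acc_b = loss_and_acc(logits_b, y)  # ≈ 0.352 loss,  50% accuracy
```

So it seems the loss reflects prediction confidence while accuracy only looks at the argmax, and the two really can diverge, but I would still like to understand why this happens so dramatically between my 6-layer and 16-layer models.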
Thank you for your time and help!