In short:
If, while computing the loss from the reconstructed output with dimensions (B, N, N, T), I ignore the first index of the last dimension, i.e.
recon = rec[:, :, :, 1:]
Would this screw with the back-propagation process? Also, is there a quick test to see whether my implementation (or any implementation in general) is breaking the back-propagation magic of PyTorch?
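For reference, this is the kind of quick check I have in mind (a minimal sketch with made-up shapes, not my actual model), where I inspect the gradient of the original tensor after `backward()`:

```python
import torch

# Dummy reconstruction with gradients enabled; shapes are placeholders
rec = torch.randn(2, 4, 4, 5, requires_grad=True)  # (B, N, N, T)
recon = rec[:, :, :, 1:]                            # drop the first type

loss = recon.sum()   # stand-in for the real loss
loss.backward()

# Is this a valid way to confirm the graph is intact?
print(rec.grad is not None)               # gradient reached the original tensor?
print(rec.grad[:, :, :, 0].abs().sum())   # the dropped type presumably gets no gradient
```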
In detail:
I’m trying to implement a Graph VAE loss function from the paper: https://doi.org/10.1186/s13321-019-0396-x
Graphs are discrete objects composed of nodes and edges. The loss function measures whether the reconstructed edges and nodes have the correct type, so it is essentially a classification problem. According to the proposed method, I need to ignore one type and compute the loss based on the other types.
Now my question: the reconstructed edges are represented by recon_adj with dimensions (B, N, N, T), where T indexes the edge type. Would eliminating one type using
recon_adj = rec_adj[:, :, :, 1:]
mess with the backprop?
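To make the setup concrete, here is a minimal sketch of what I mean (the shapes, the target tensor, and the use of cross-entropy are placeholders, not the paper's exact loss):

```python
import torch
import torch.nn.functional as F

B, N, T = 2, 9, 5
rec_adj = torch.randn(B, N, N, T, requires_grad=True)  # reconstructed edge-type logits

# Drop the first edge type before computing the loss
recon_adj = rec_adj[:, :, :, 1:]                        # (B, N, N, T-1)

# Placeholder targets over the remaining T-1 types
target = torch.randint(0, T - 1, (B, N, N))

# cross_entropy expects (batch, classes, ...), so move the type axis to dim 1
loss = F.cross_entropy(recon_adj.permute(0, 3, 1, 2), target)
loss.backward()
```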