Does losing an index mess up backpropagation?

In short:

If, while computing the loss from the reconstructed output with dimensions (B, N, N, T), I ignore the first index of the last dimension, i.e.

recon = rec[:, :, :, 1:]

would this screw with the back-propagation process? Also, is there a quick test to see whether my implementation (or any implementation in general) is breaking the back-propagation magic of PyTorch?

In detail:

I’m trying to implement a Graph VAE loss function from the paper: https://doi.org/10.1186/s13321-019-0396-x

Graphs are discrete objects constructed of nodes and edges. The loss function checks whether the reconstructed edges and nodes have the correct type, so it is basically a classification problem. According to the proposed method, I need to ignore one type and compute the loss based on the other types.

Now my question is: if the reconstructed edges are represented by rec_adj with dimensions (B, N, N, T), where T is the number of edge types, would eliminating one type using

recon_adj = rec_adj[:, :, :, 1:]

mess with the backprop?
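
For concreteness, here is a minimal sketch of the kind of setup I mean (the shapes, the random tensors, and the label shift are made up purely for illustration; cross_entropy just expects the class dimension to come second):

import torch
import torch.nn.functional as F

B, N, T = 4, 9, 5                                       # made-up batch size, node count, edge-type count
rec_adj = torch.randn(B, N, N, T, requires_grad=True)   # stand-in for the decoder output
target = torch.randint(1, T, (B, N, N))                 # made-up ground-truth edge types, never type 0

recon_adj = rec_adj[:, :, :, 1:]                        # drop the first type along the last dimension
logits = recon_adj.permute(0, 3, 1, 2)                  # (B, T-1, N, N): class dimension must come second
loss = F.cross_entropy(logits, target - 1)              # labels shifted down since type 0 was dropped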

When you index out an element, you are simply making sure that a particular activation at the output does not contribute to the loss function and therefore does not contribute to changing the weights and biases. This does not break backprop. To illustrate the point with code:

import torch
import torch.nn as nn
import torch.nn.functional as F

batch_size = 6                          # any arbitrary number
fc1 = nn.Linear(12, 3)                  # layer producing 3 outputs per sample
x = torch.randn(batch_size, 12)         # second dimension must equal in_features (12)
target = torch.randn(batch_size, 2)     # target matches the sliced output (2 columns)
output = fc1(x)[:, :-1]                 # drop the last output column, keeping 2
loss = F.mse_loss(output, target)
loss.backward()                         # gradients flow through the sliced outputs
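
As for a quick test: a simple sanity check (just a common pattern, nothing specific to your model) is to confirm that the loss is still attached to the autograd graph and that every parameter you expect to train ends up with a non-None .grad after backward(). Continuing from the snippet above:

assert loss.grad_fn is not None                 # the loss is still connected to the graph
for name, p in fc1.named_parameters():
    assert p.grad is not None, f"{name} got no gradient"
    print(name, p.grad.abs().sum().item())      # non-zero sums mean gradients actually flowed
print(fc1.weight.grad[-1])                      # the row feeding the sliced-off output stays all zeros

The last line also illustrates the point above: the weights feeding the activation you indexed out receive a zero gradient, while everything else trains normally. For a more rigorous numerical check, torch.autograd.gradcheck compares analytical and numerical gradients, but it expects double-precision inputs and is usually overkill for a case like this.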