I cleaned up and modularized my model and solved the issue of multiple samples all producing the same output. It turned out I was passing the original inputs into a linear layer instead of the convolved inputs.
Since all the inputs contain similar sets of elements, differing mainly in how those elements are related, passing the raw inputs to a linear layer and aggregating each one's output elements just produces the same (or very similar) output for all of them.
When I instead apply a convolution to each input first, each element gets updated with information from the other elements it’s connected to, so those relational differences show up in the convolved features. The linear layer’s outputs then diverge between the convolved inputs, and aggregating those outputs can lead to very different final outputs depending on how the elements in each input were connected.
At least, that’s what I think is going on and it’s alright now.
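To illustrate the effect, here’s a minimal NumPy sketch (not my actual model; the graphs, weights, and the mean-aggregation convolution are made up for the example). Two “inputs” share the exact same node features but have different edges: a linear layer plus sum-pooling ignores the edges entirely, while one round of convolution first makes the outputs depend on the wiring.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two graphs with the SAME node features but DIFFERENT connectivity
X = rng.standard_normal((4, 3))  # 4 nodes, 3 features each
A1 = np.array([[0, 1, 0, 0],     # path graph: 0-1-2-3
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [0, 0, 1, 0]], dtype=float)
A2 = np.array([[0, 1, 1, 1],     # star graph: 0 connected to 1, 2, 3
               [1, 0, 0, 0],
               [1, 0, 0, 0],
               [1, 0, 0, 0]], dtype=float)

W_lin = rng.standard_normal((3, 2))   # linear-layer weights
W_conv = rng.standard_normal((3, 2))  # conv-layer weights

# The bug: linear layer on the raw features, then sum-pooling.
# The edges are never used, so both graphs give the identical readout.
out1 = (X @ W_lin).sum(axis=0)
out2 = (X @ W_lin).sum(axis=0)
print(np.allclose(out1, out2))  # True: same output for both graphs

def conv(A, X, W):
    """One round of message passing: mean-aggregate each node's
    neighborhood (with a self-loop), then project."""
    A_hat = A + np.eye(len(A))                 # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)     # neighborhood sizes
    return (A_hat / deg) @ X @ W               # mix neighbors, project

# The fix: convolve first so each node carries neighbor information,
# then pool. The readouts now differ because the wiring differs.
h1 = conv(A1, X, W_conv).sum(axis=0)
h2 = conv(A2, X, W_conv).sum(axis=0)
print(np.allclose(h1, h2))  # False: outputs diverge between the graphs
```

The pooled outputs of the buggy path are identical by construction, while the convolved path separates the two graphs, which matches what I was seeing in the model.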