I’m still fairly new to PyTorch, and so far every example of a neural net I’ve seen has had rigid input and output dimensions; they’re defined when the network is initialized.

Here’s my situation: there’s a board game called Can’t Stop. I won’t get into the specifics, but the important part is that every turn the player rolls 4 dice and has to pair them up. There are limits on which pairings are possible, so **it’s not always the same number of dice pairings**.

Our neural net has to take in the board state and the possible dice pairings, and output the probability of winning for each possible dice pairing. The problem is that since the number of possible dice pairings varies per turn, **the input and output vectors will not always have the same size**. Does this mean I’d have to instantiate a new neural network whenever the number of possible dice pairings changes? Wouldn’t that make training impossible? Are there workarounds, such as having the input and output always contain the maximum number of dice pairings and, whenever a certain pairing isn’t possible, just leaving that slot in the tensor blank?

Please offer any insights you may have.

You could just use input and output vectors large enough to encompass all possible pairings and, as you said, use padding when needed to accommodate the different sizes.
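A minimal sketch of that padding/masking idea. The sizes here (`BOARD_SIZE`, `MAX_PAIRINGS`, `PAIRING_SIZE`) and the class name are hypothetical placeholders for whatever encoding you choose; the key point is that the network always sees fixed-size tensors, and a boolean mask zeroes out the slots for pairings that aren’t legal this turn.

```python
import torch
import torch.nn as nn

# Hypothetical sizes -- adjust to your actual encoding (assumptions, not from the game rules).
BOARD_SIZE = 32      # length of the board-state feature vector
MAX_PAIRINGS = 6     # upper bound on simultaneously legal pairings
PAIRING_SIZE = 4     # features used to encode one pairing

class CantStopNet(nn.Module):
    def __init__(self):
        super().__init__()
        in_size = BOARD_SIZE + MAX_PAIRINGS * PAIRING_SIZE
        self.mlp = nn.Sequential(
            nn.Linear(in_size, 64),
            nn.ReLU(),
            nn.Linear(64, MAX_PAIRINGS),  # one win-probability logit per slot
        )

    def forward(self, board, pairings, mask):
        # board:    (B, BOARD_SIZE)
        # pairings: (B, MAX_PAIRINGS, PAIRING_SIZE), zero-padded for unused slots
        # mask:     (B, MAX_PAIRINGS), 1.0 where a pairing is legal, 0.0 otherwise
        x = torch.cat([board, pairings.flatten(1)], dim=1)
        logits = self.mlp(x)
        # Multiply by the mask so illegal slots come out as exactly 0.
        return torch.sigmoid(logits) * mask

net = CantStopNet()
board = torch.randn(2, BOARD_SIZE)
pairings = torch.randn(2, MAX_PAIRINGS, PAIRING_SIZE)
mask = torch.tensor([[1, 1, 0, 0, 0, 0],
                     [1, 1, 1, 1, 0, 0]], dtype=torch.float32)
probs = net(board, pairings, mask)
print(probs.shape)  # torch.Size([2, 6]); masked-out slots are exactly 0
```

When you compute the loss, apply the same mask so the padded slots contribute no gradient.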

From a technical point of view, you can forward-propagate anything you like at each step. Backpropagation will work fine (ish: within the constraints that you use functions that have backprop implemented).
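To illustrate, here is a toy sketch (the layer sizes and loss are arbitrary): the same parameters can process a differently sized input at each step, because autograd builds a fresh graph per forward pass.

```python
import torch
import torch.nn as nn

# One shared layer scoring each item of a variable-length input.
layer = nn.Linear(8, 1)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)

for length in (3, 7, 5):            # a different "number of pairings" each step
    x = torch.randn(length, 8)      # (variable_length, features)
    scores = layer(x).squeeze(-1)   # one score per item, shape (length,)
    loss = scores.pow(2).mean()     # dummy loss, just to drive backprop
    opt.zero_grad()
    loss.backward()                 # works regardless of this step's length
    opt.step()
```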

From a conceptual point of view, deciding *what* to propagate is, I guess, out of scope for PyTorch itself, and more of a research issue?

Note that conv nets can handle inputs of arbitrary size. You can look at neural-style for an example that forwards images of arbitrary size through a pretrained convnet. RNNs can also handle inputs of arbitrary size.
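A small sketch of why that works: a conv layer’s parameters depend only on the channel counts and kernel size, not on the spatial size, so the same weights accept any height and width. Adding an adaptive pooling layer (the sizes here are arbitrary) gives a fixed-size output regardless of the input resolution.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # weights independent of H, W
pool = nn.AdaptiveAvgPool2d(1)                     # collapses any spatial size to 1x1
head = nn.Linear(16, 4)

for h, w in ((32, 32), (57, 41)):                  # two different input sizes
    x = torch.randn(1, 3, h, w)
    feats = pool(conv(x)).flatten(1)               # (1, 16) regardless of h, w
    out = head(feats)
    print(out.shape)  # torch.Size([1, 4]) for both sizes
```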