Ignore_index vs pack/unpack padding

I’ve been reading a lot about the pack/unpack padding operations (`pack_padded_sequence` / `pad_packed_sequence`), as well as the `ignore_index` parameter that has been added to some of the loss functions in `torch.nn`.
I’m wondering whether using one of them makes the other unnecessary.

For example, with variable-length RNN inputs, if we pad the sequences and then pack them, the padded time steps will not be processed by the RNN during the forward pass.

Does that mean that in this case passing `ignore_index=0` to the loss can be skipped? And vice versa:
if we don’t pack/unpack the input, and just pad it with a PAD token to a uniform length, will using `ignore_index=0` achieve the same effect as pack/unpack would?
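To make the question concrete, here is a minimal sketch of the two approaches I mean. The toy tensors, the random RNN, and the assumption that the targets share the same padding id (0) are all made up for illustration:

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# Two padded sequences of true lengths 3 and 2; token id 0 is PAD.
padded = torch.tensor([[1, 2, 3],
                       [4, 5, 0]])
lengths = torch.tensor([3, 2])

# Approach 1: pack the embedded batch so the RNN never runs on pad steps.
emb = nn.Embedding(10, 8, padding_idx=0)
rnn = nn.RNN(8, 8, batch_first=True)
packed = pack_padded_sequence(emb(padded), lengths,
                              batch_first=True, enforce_sorted=False)
out_packed, _ = rnn(packed)
out, _ = pad_packed_sequence(out_packed, batch_first=True)
# out[1, 2] is all zeros: that time step was never computed.

# Approach 2: skip packing, and mask the loss instead.
logits = torch.randn(2, 3, 10)   # (batch, time, vocab), e.g. RNN output
targets = padded                 # pretend the targets use the same padding
loss_fn = nn.CrossEntropyLoss(ignore_index=0)
loss = loss_fn(logits.reshape(-1, 10), targets.reshape(-1))
# Positions where target == 0 contribute nothing to the loss or gradient.
```

As I understand it, approach 1 keeps pad positions out of the RNN computation itself, while approach 2 only keeps them out of the loss.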

Basically, is there some relation between the two, or do they address different parts of the problem?