Backpropagation with dictionaries

I am trying to write a custom loss function in which dictionary structures are compared. Essentially, I call my model multiple times and put the output tensors into nested dictionary structures, e.g.

outputs = {'0': {'0': tensor_a1, '1': tensor_a2}, '1': {'0': tensor_b1, '1': tensor_b2}}

with each tensor's position in the structure determined by the value inside the tensor. Is it possible to backpropagate through such a structure? For example, if tensor_b1 should have been in the dictionary with the 'a' tensors (based on a comparison with an expected dictionary in the training data), I want to define a loss that captures this mismatch and is compatible with autograd.
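
For concreteness, here is a minimal sketch of the kind of thing I mean (the model, shapes, and expected values are just placeholders):

import torch
import torch.nn.functional as F

# Placeholder model and inputs, just to produce tensors with grad history.
model = torch.nn.Linear(4, 1)
x_a1, x_b1 = torch.randn(4), torch.randn(4)

# Nested dicts are plain Python containers: the tensors inside keep their
# grad_fn, so autograd can still trace the graph back through model().
outputs = {'0': {'0': model(x_a1)}, '1': {'0': model(x_b1)}}

# Hypothetical expected structure from the training data.
expected = {'0': {'0': torch.tensor([1.0])}, '1': {'0': torch.tensor([0.0])}}

# The loss is computed from the tensor values; the keys only say which
# pair of tensors to compare, so the whole thing stays differentiable.
loss = sum(
    F.mse_loss(outputs[i][j], expected[i][j])
    for i in outputs
    for j in outputs[i]
)
loss.backward()  # gradients reach model.weight and model.bias

(I realize the placement decision itself is discrete, so I assume the loss would have to be built from the tensor values rather than from the keys directly; that is part of what I am asking.)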

Thanks,