Best Practices: Using a buffer for the latent space?

Hi Everyone!
I’m learning polar coordinates, so I have parameters r and theta. In the latent space I use r and theta to update a dictionary, which requires some computation on r and theta together with 4 other static tensors I create when the model is instantiated. I then use the dictionary during the forward pass. I want to keep the dictionary update on the GPU. The 4 static tensors have no use outside of updating the dictionary and don’t need to persist in the state_dict.
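To make the setup concrete, here's a minimal sketch of roughly what I have (the shapes, the contents of the static tensors, and the combining computation are all placeholders, and `persistent=False` is my attempt at keeping them out of the state_dict):

```python
import torch
import torch.nn as nn

class PolarModel(nn.Module):
    def __init__(self, n=8):
        super().__init__()
        # learned polar parameters
        self.r = nn.Parameter(torch.ones(n))
        self.theta = nn.Parameter(torch.zeros(n))
        # 4 static helper tensors, only used to build the "dictionary".
        # Registered as non-persistent buffers so they follow model.to(device)
        # but are excluded from the state_dict.
        for i in range(4):
            self.register_buffer(f"static_{i}", torch.randn(n), persistent=False)

    def build_dictionary(self):
        # placeholder for the real computation combining r, theta
        # and the static tensors
        x = self.r * torch.cos(self.theta)
        y = self.r * torch.sin(self.theta)
        return x * self.static_0 + y * self.static_1 + self.static_2 + self.static_3

    def forward(self, inp):
        d = self.build_dictionary()
        return inp * d
```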

My question is: what is the best practice for registering the dictionary (and the other tensors used to create it) as buffers, so that I can call model.to(device) once instead of calling .to(device) on every tensor, and keep the computation on the GPU?

Is it better to update an existing buffer in place, or to call register_buffer with the new dictionary on every epoch?
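For reference, the two options I'm weighing look something like this (a toy sketch; the real dictionary computation is more involved):

```python
import torch
import torch.nn as nn

class Toy(nn.Module):
    def __init__(self, n=4):
        super().__init__()
        self.r = nn.Parameter(torch.ones(n))
        self.theta = nn.Parameter(torch.zeros(n))
        # register the dictionary buffer once, at construction time
        self.register_buffer("dictionary", torch.zeros(n), persistent=False)

    @torch.no_grad()
    def refresh_dictionary(self):
        # Option A: in-place update, keeps the same storage and device
        self.dictionary.copy_(self.r * torch.cos(self.theta))
        # Option B (alternative): plain assignment replaces the tensor
        # stored in the buffer slot each epoch
        # self.dictionary = self.r * torch.cos(self.theta)

m = Toy()
m.refresh_dictionary()
```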

Does registering buffers or constantly updating them have a significant effect on overall performance?

Thanks!