Class which holds some tensors and can .to() itself

I’m using torch to write some non-neural-net code, and I have something like:

class Worker:
    def __init__(self, fixed_matrix, fixed_neural_net):
        self.fixed_matrix = fixed_matrix
        self.fixed_neural_net = fixed_neural_net

    def do_work(self, data):
        # something with self.fixed_matrix, self.fixed_neural_net
        ...

I’d like to be able to call worker.cuda(), worker.to("cuda:1"), and so on. One approach would be to subclass nn.Module, store fixed_matrix with self.register_buffer(...), and rely on fixed_neural_net being picked up as a submodule, since it is already a Module instance here (a neural net which is not training, and whose parameters have requires_grad=False).
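Concretely, I imagine that approach would look something like this minimal sketch (untested, and I haven’t thought about edge cases):

import torch
import torch.nn as nn

class Worker(nn.Module):
    def __init__(self, fixed_matrix, fixed_neural_net):
        super().__init__()
        # register_buffer makes fixed_matrix travel with .to()/.cuda()
        # without turning it into a trainable parameter
        self.register_buffer("fixed_matrix", fixed_matrix)
        # assigning a Module to an attribute registers it as a submodule,
        # so it also travels with .to()/.cuda()
        self.fixed_neural_net = fixed_neural_net

    def do_work(self, data):
        # something with self.fixed_matrix, self.fixed_neural_net
        ...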

However, I don’t want all of the extra bells and whistles that Module brings with it for NN training: things like self.training, forward(), the parameters, and the forward/backward hooks. To be clear: gradients will not be used at all in this setting.

I thought this kind of case might be common, so I was wondering: is there a simple way to take just the parts of Module that I want (really, all of the logic around _buffers and _modules) and forget the rest? Or is the rest no big deal to have around? The main thing I want to avoid, I guess, is having to call requires_grad_(False) every time I initialize this Worker.
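For reference, the hand-rolled version of what I have in mind would be a plain class with its own to() that just forwards to each member, something like the sketch below (names are mine, not carefully tested). The asymmetry is that Tensor.to() returns a new tensor while Module.to() moves the module in place:

import torch

class Worker:
    def __init__(self, fixed_matrix, fixed_neural_net):
        self.fixed_matrix = fixed_matrix
        self.fixed_neural_net = fixed_neural_net

    def to(self, *args, **kwargs):
        # Tensor.to() is out-of-place, so reassign the attribute
        self.fixed_matrix = self.fixed_matrix.to(*args, **kwargs)
        # Module.to() is in-place, so no reassignment needed
        self.fixed_neural_net.to(*args, **kwargs)
        return self

    def cuda(self, device="cuda"):
        return self.to(device)

    def do_work(self, data):
        ...

# usage (fixed_net standing in for some frozen nn.Module):
#   worker = Worker(torch.randn(4, 4), fixed_net)
#   worker.to("cuda:1")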