Unable to do next(model.parameters()) with Pyro models

Hello,

I am trying to convert myPyTorchModel into a Bayesian Pyro model (see the code below).
However, when I execute the line next(myPyTorchModel.parameters()), a StopIteration error pops up. I am wondering if there is any way to prevent Pyro models from raising this StopIteration error.

import torch
import pyro.distributions as dist
from pyro.nn import PyroSample

# Replace every nn.Parameter in the model with a Normal prior.
for m in myPyTorchModel.modules():
    for name, value in list(m.named_parameters(recurse=False)):
        options = dict(dtype=torch.double, device="cpu")
        prior_loc = torch.zeros(1, 1, **options)
        prior_scale = torch.ones(1, 1, **options)
        # The prior already has the desired double dtype via `options`.
        zs = PyroSample(dist.Normal(prior_loc, prior_scale).to_event(1))
        setattr(m, name, zs)
        
# generates an error
next(myPyTorchModel.parameters()).dtype

OUT:
Traceback (most recent call last):

  File "<ipython-input-37-f9ae77a21ef4>", line 1, in <module>
    next(myPyTorchModel.parameters()).dtype

StopIteration

How can I fix this error?

Thanks,

Apparently your myPyTorchModel doesn’t have parameters.
Also, if you make everything double, you'd expect a double dtype, no?
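
For what it's worth, here is a minimal sketch of that effect (assuming a toy PyroModule[nn.Linear] as a stand-in for your actual model): once every nn.Parameter has been replaced by a PyroSample, parameters() is an empty generator, so next() raises StopIteration.

import torch
import pyro.distributions as dist
from pyro.nn import PyroModule, PyroSample

# Toy stand-in for myPyTorchModel (assumption: your real model behaves the same way).
model = PyroModule[torch.nn.Linear](3, 1)
print(sum(1 for _ in model.parameters()))   # 2 -- weight and bias are still nn.Parameters

# Replace each parameter with a prior, as in your conversion loop.
model.weight = PyroSample(dist.Normal(0., 1.).expand([1, 3]).to_event(2))
model.bias = PyroSample(dist.Normal(0., 1.).expand([1]).to_event(1))

print(sum(1 for _ in model.parameters()))   # 0 -- nothing left for next() to yield
# next(model.parameters())                  # -> StopIteration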

Hello,

Thank you for your reply. If myPyTorchModel.parameters() happens to be an empty generator, would it be okay for me to add a fake parameter to the model, just so that I wouldn't get the StopIteration error?

If so, how can I add a fake parameter so that it shows up in myPyTorchModel.parameters()?

Thank you,

myPyTorchModel = torch.nn.Parameter(torch.zeros(1)) or so
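
In case that one-liner is ambiguous, here is a sketch of one way to read it (an assumption on my part: the dummy Parameter gets attached to the existing module rather than rebinding the name myPyTorchModel; "dummy" is just an illustrative attribute name):

import torch

# Toy stand-in for a model whose parameters have all been replaced by PyroSample sites.
model = torch.nn.Module()

# Register the dummy parameter on the module instead of rebinding the variable name.
model.dummy = torch.nn.Parameter(torch.zeros(1, dtype=torch.float64))

print(next(model.parameters()).dtype)   # torch.float64 -- no StopIteration any more

model.register_parameter("dummy", ...) would be equivalent; the point is that the parameter lives on the module, so the module object itself stays intact.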

Hello,

Thank you again for your reply.

So myPyTorchModel currently has no parameters.

When I execute the code below, the code generates an error:

# add a fake parameter to prevent the StopIteration error
myPyTorchModel = torch.nn.Parameter(torch.tensor([0.], requires_grad=True,
                                                 dtype=torch.float64))

loss = myPyTorchModel(input_ids = input, labels = labels)[0]
OUT:
Traceback (most recent call last):
File "<ipython-input-20-52d7dadcb74d>", line 1, in <module>
    loss = myPyTorchModel(input_ids = input, labels = labels)[0]

TypeError: 'Parameter' object is not callable

The error above is generated because the line myPyTorchModel = torch.nn.Parameter(torch.tensor([0.], requires_grad=True, dtype=torch.float64)) rebinds myPyTorchModel to a bare Parameter rather than adding a callable (fake) parameter to the model.

Is there any way that I can add a fake “callable” parameter to myPyTorchModel? If yes, how can I do it?

Thank you very much again for your help.

PyTorch parameters aren't callable, so I'll probably have to pass here; I think you need a Pyro expert.
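
To illustrate that last point with a minimal sketch (plain PyTorch, nothing Pyro-specific assumed): a Parameter is just a leaf tensor, so calling it is exactly what produces the TypeError above; the forward pass has to go through the module object itself.

import torch

p = torch.nn.Parameter(torch.zeros(1, dtype=torch.float64))
print(isinstance(p, torch.Tensor))   # True -- a Parameter is just a leaf tensor

try:
    p(input_ids=None, labels=None)   # calling a tensor is what raised the TypeError above
except TypeError as e:
    print(e)                         # 'Parameter' object is not callable

So whatever fake parameter you add, it presumably needs to be registered on the module, and the call myPyTorchModel(input_ids=input, labels=labels) still has to go through the module itself.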