Training input from frozen model?

I have a model that I trained on MNIST… what I’m trying to do is to use an optimizer to adjust the input so that it looks more like a picture of a 6.
I’m not sure if this is possible, since I looked at GANs and they all seem to use two networks to do what I’m trying to do with one network.
Starting from random input…

for parameter in model.parameters():
    parameter.requires_grad = False
    print(parameter.shape, parameter.requires_grad)

input_data = torch.rand(1, 1, 28, 28)
optimizer = optim.Adadelta(model.parameters())
input_data.requires_grad = True
data = input_data.to(device)
target = torch.tensor([6], dtype=torch.long).to(device)

# the main training loop
for i in range(10):
    optimizer.zero_grad()
    output = model(data)
    loss = F.nll_loss(output, target)
    loss.backward()
    optimizer.step()
    pred = output.argmax(dim=1, keepdim=True)  # get the index of the max log-probability
    print(pred, output)
ninput = data.clone().detach().cpu().numpy()[0][0]


There are a few things:

  1. GANs are a different architecture/loss designed for generating images; you can do this with a simple MLP too (the input, output, and loss function are the only things that matter - at least to me!).
  2. You mentioned that you want to use a trained network (frozen weights) and only update the input to satisfy a loss (so it resembles the digit 6). But you passed model.parameters() instead of the input to the optimizer, so the optimizer knows nothing about the input. Just pass the input to the optimizer!
  3. Put your model in model.eval() mode, as it could have batch-norm or dropout layers.

Other parts seem fine to me.
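Pulling points 2 and 3 together, here is a minimal runnable sketch. The nn.Sequential classifier is a hypothetical stand-in for your trained MNIST model (any frozen network returning log-probabilities works the same way); the key line is passing [input_data] to the optimizer instead of model.parameters():

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

# Hypothetical stand-in for the trained MNIST classifier: it only needs to
# map (N, 1, 28, 28) images to log-probabilities over 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10), nn.LogSoftmax(dim=1))

for parameter in model.parameters():
    parameter.requires_grad = False  # freeze the weights (point 2)
model.eval()                         # disable dropout / batch-norm updates (point 3)

# The input is the only leaf tensor that requires gradients.
input_data = torch.rand(1, 1, 28, 28, requires_grad=True)
target = torch.tensor([6], dtype=torch.long)

# The fix: the optimizer updates the *input*, not the model's parameters.
optimizer = optim.Adadelta([input_data])

initial_loss = F.nll_loss(model(input_data), target).item()

for i in range(100):
    optimizer.zero_grad()
    output = model(input_data)
    loss = F.nll_loss(output, target)
    loss.backward()                  # gradients flow into input_data only
    optimizer.step()

final_loss = F.nll_loss(model(input_data), target).item()
print(initial_loss, final_loss)
```

With a real trained classifier you would keep iterating (and perhaps clamp input_data to [0, 1]) until the predicted class is 6; the loss comparison here just verifies that the input is actually being optimized while the weights stay fixed.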