Hi Iram!
The equation you posted an image of means that you are to find the values of z[i] that minimize MSELoss (x[?], G (z)), while holding the parameters that define G constant. (I believe that the use of i as the index on both sides of the equation is a notational inconsistency – that’s why I used ? as the index for x – but please explain if you believe otherwise.)
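In symbols – purely as my reconstruction of your image, with ? standing in for the unresolved index – the task reads as:

```latex
z^{*} = \arg\min_{z} \; \mathrm{MSELoss}\bigl( x_{?},\, G(z) \bigr)
```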
To use pytorch to do this, you would use pytorch’s autograd and gradient-descent-optimization machinery to minimize your “loss” with respect to z.
To do this you would start with z as a one-dimensional tensor of length d (shape [d]), and wrap it as a Parameter. Initialize z somehow – perhaps to zero or perhaps randomly or perhaps to an initial guess, if you have one. Let me assume that G is a pytorch “model,” that is, some sort of Module. Go through all of G’s Parameters, and for each of them set requires_grad = False. (This is so that you won’t unnecessarily calculate gradients for the Parameters of G.) Instantiate a pytorch optimizer with z as its Parameter. Then run an optimization loop whose iteration is:
loss = torch.nn.MSELoss()(x, G(z))
opt.zero_grad()
loss.backward()
opt.step()
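Putting those pieces together, here is a minimal, self-contained sketch. The particular G, the dimension d, the target x, the zero initialization, and the choice of SGD with lr = 0.1 are all just placeholder assumptions – swap in whatever matches your actual problem:

```python
import torch

d = 16                                   # latent dimension -- placeholder
G = torch.nn.Sequential(                 # stand-in for your actual model G
    torch.nn.Linear(d, 64),
    torch.nn.Tanh(),
    torch.nn.Linear(64, 32),
)
x = torch.randn(32)                      # stand-in for your target x[?]

for p in G.parameters():                 # freeze G -- we only optimize z
    p.requires_grad = False

z = torch.nn.Parameter(torch.zeros(d))   # the variable being optimized

opt = torch.optim.SGD([z], lr=0.1)       # z is the optimizer's only Parameter

for _ in range(1000):                    # how many iterations is up to you
    loss = torch.nn.MSELoss()(x, G(z))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Any other pytorch optimizer – Adam is a common choice – would work in place of SGD here, and you can watch loss.item() to decide when to stop.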
Each optimization step moves z in the direction that lowers the mismatch between G (z) and x.
Best.
K. Frank