Hello! I'm trying to apply differential privacy (DP) to TVAE using Opacus.
Here is a snippet of the training section:
```python
self.decoder, optimizer_decoder, loader = privacy_engine.make_private_with_epsilon(
    module=self.decoder,
    optimizer=optimizer_decoder,
    data_loader=loader,
    epochs=self.epochs,
    target_epsilon=target_epsilon,
    target_delta=target_delta,
    max_grad_norm=max_grad_norm,
)

for i in range(self.epochs):
    for id_, data in enumerate(loader):
        optimizer_encoder.zero_grad()
        optimizer_decoder.zero_grad()
        real = data[0].to(self._device)
        mu, std, logvar = encoder(real)
        eps = torch.randn_like(std)
        emb = eps * std + mu
        rec, sigmas = self.decoder(emb)
        loss_1, loss_2 = _loss_function(
            rec, real, sigmas, mu, logvar,
            self.transformer.output_info_list, self.loss_factor
        )
        loss = loss_1 + loss_2
        print(loss)
        loss.backward()
        optimizer_encoder.step()
        optimizer_decoder.step()
        self.decoder.sigma.data.clamp_(0.01, 1.0)
```
I get the following error after `loss.backward()`:

```
TypeError: only integer tensors of a single element can be converted to an index
```
When I fake a training step with only the decoder, e.g. `decoder(noise)[0].sum()`, the loss is computed, but I get the following warning:

```
UserWarning: Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please use register_full_backward_hook to get the documented behavior.
  warnings.warn("Using a non-full backward hook when the forward contains multiple autograd Nodes "
```
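For context, here is a stripped-down, self-contained version of that decoder-only check. The `ToyDecoder` class, its dimensions, and the `noise` shape are placeholders for illustration, not my real model:

```python
import torch
from torch import nn

# Placeholder standing in for the TVAE decoder (assumption: the real
# decoder returns a (reconstruction, sigmas) tuple and owns a `sigma`
# parameter, as in the training loop above).
class ToyDecoder(nn.Module):
    def __init__(self, embedding_dim=16, data_dim=8):
        super().__init__()
        self.seq = nn.Linear(embedding_dim, data_dim)
        self.sigma = nn.Parameter(torch.ones(data_dim) * 0.1)

    def forward(self, emb):
        return self.seq(emb), self.sigma

decoder = ToyDecoder()
noise = torch.randn(4, 16)          # fake latent batch
loss = decoder(noise)[0].sum()      # "fake" training loss
loss.backward()                     # populates gradients on the decoder
```

A plain decoder like this runs without any warning, so the warning presumably comes from the backward hooks that Opacus registers once the module is wrapped.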
Any help would be much appreciated, thank you!