This is the key part of the configure_optimizers() function in PyTorch Lightning:
params = []
train_names = []
print("Training only unet attention layers")

# Freeze every parameter in the diffusion model first...
for param in self.model.diffusion_model.parameters():
    param.requires_grad = False

# ...then unfreeze only the cross-attention (attn2) blocks and hand exactly
# those parameters to the optimizer.
for name, module in self.model.diffusion_model.named_modules():
    if isinstance(module, CrossAttention) and name.endswith('attn2'):
        train_names.append(name)
        for param in module.parameters():
            param.requires_grad = True
            params.append(param)

opt = torch.optim.AdamW(params, lr=lr)
What I want to do here is set requires_grad = True only for the attention parameters and pass just those to the optimizer.
It did work when I set requires_grad = True for all of the parameters.
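As a quick sanity check (just a sketch reusing the names from the snippet above), you can print how many tensors actually ended up trainable before the optimizer is built:

# Sketch: count trainable vs. frozen tensors after the freezing logic above,
# to confirm that only the attn2 / CrossAttention parameters remain trainable.
trainable = [n for n, p in self.model.diffusion_model.named_parameters() if p.requires_grad]
frozen = [n for n, p in self.model.diffusion_model.named_parameters() if not p.requires_grad]
print(f"trainable tensors: {len(trainable)}, frozen tensors: {len(frozen)}")
print("sample trainable names:", trainable[:5])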
I just found out that this error seems to be caused by the custom-defined class CheckpointFunction(torch.autograd.Function),
and I got the code working by simply avoiding that class.
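If you want to avoid the class entirely, one option (a sketch, assuming the usual latent-diffusion layout in which the UNet blocks carry a use_checkpoint / checkpoint flag; check your own modules for the exact attribute names) is to switch gradient checkpointing off so that CheckpointFunction is never invoked:

# Sketch: disable the custom gradient checkpointing so CheckpointFunction is
# never called. The attribute names are assumptions based on the usual
# latent-diffusion blocks (ResBlock, BasicTransformerBlock); verify them
# against your own code.
for module in self.model.diffusion_model.modules():
    if hasattr(module, "use_checkpoint"):
        module.use_checkpoint = False
    if hasattr(module, "checkpoint"):
        module.checkpoint = False

Note that turning checkpointing off trades the error for higher activation memory during training.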
Hi Yuanzhi, I ran into the same problem when I made only the attention layers trainable. Could you clarify how to change CheckpointFunction(torch.autograd.Function)?
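In case it helps, here is a hedged sketch of one way the backward pass of CheckpointFunction could be adapted so that frozen parameters are skipped instead of being handed to torch.autograd.grad (which is what raises the error). It follows the guided-diffusion-style checkpoint pattern that latent-diffusion uses, but treat the exact class layout as an assumption and adapt it to the version in your repo (usually ldm/modules/diffusionmodules/util.py):

import torch

class CheckpointFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, run_function, length, *args):
        ctx.run_function = run_function
        ctx.input_tensors = list(args[:length])
        ctx.input_params = list(args[length:])
        # Run the wrapped function without building a graph; it is rebuilt in backward.
        with torch.no_grad():
            output_tensors = ctx.run_function(*ctx.input_tensors)
        return output_tensors

    @staticmethod
    def backward(ctx, *output_grads):
        # Re-run the forward pass with grad enabled on detached copies of the inputs.
        inputs = [x.detach().requires_grad_(True) for x in ctx.input_tensors]
        with torch.enable_grad():
            shallow_copies = [x.view_as(x) for x in inputs]
            output_tensors = ctx.run_function(*shallow_copies)
        # Differentiate only w.r.t. parameters that require grad; passing frozen
        # parameters here is what makes torch.autograd.grad fail on tensors that
        # do not require grad.
        trainable_params = [p for p in ctx.input_params if p.requires_grad]
        grads = torch.autograd.grad(
            output_tensors,
            inputs + trainable_params,
            output_grads,
            allow_unused=True,
        )
        input_grads = list(grads[:len(inputs)])
        param_grad_iter = iter(grads[len(inputs):])
        # backward() must return one entry per argument of forward(), so frozen
        # parameters get None in the slots where their gradients would go.
        param_grads = [next(param_grad_iter) if p.requires_grad else None
                       for p in ctx.input_params]
        return (None, None) + tuple(input_grads + param_grads)

An alternative with less surgery is to filter the parameter list before it ever reaches the checkpoint call, e.g. passing only [p for p in params if p.requires_grad] from the wrapper that invokes CheckpointFunction.apply, so the class itself can stay unchanged.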