LoRA trainable parameters

Hi,
When I print my model’s parameter names after adding LoRA to the decoder, I get:

...
up_tr256.up_conv.weight
up_tr256.up_conv.bias
up_tr256.ops.0.conv1.lora_A
up_tr256.ops.0.conv1.lora_B
up_tr256.ops.0.conv1.conv.weight
up_tr256.ops.0.conv1.conv.bias
up_tr256.ops.0.bn1.weight
up_tr256.ops.0.bn1.bias
up_tr256.ops.1.conv1.lora_A
up_tr256.ops.1.conv1.lora_B
up_tr256.ops.1.conv1.conv.weight
up_tr256.ops.1.conv1.conv.bias
up_tr256.ops.1.bn1.weight
up_tr256.ops.1.bn1.bias
up_tr128.up_conv.weight
up_tr128.up_conv.bias
...

Can I now freeze the whole model and train only the ‘lora_’ parameters? I ask because those parameters don’t have .weight in their names, so I’m not sure how to select them.
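
Would something like this be the right approach? (A minimal sketch, assuming `model` is the module whose parameters are listed above; it filters on the name string, so it doesn’t matter that the LoRA tensors don’t end in .weight.)

```python
import torch

# Freeze everything except the LoRA parameters:
# lora_A / lora_B names contain "lora_", the base conv/bn weights do not.
for name, param in model.named_parameters():
    param.requires_grad = "lora_" in name

# Pass only the still-trainable parameters to the optimizer
# (optimizer choice and lr here are just placeholders).
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```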