LoRA + DP-SGD tutorial

Dear Opacus users,

We have updated our tutorial on DP fine-tuning of a language model to demonstrate how to use LoRA (low-rank adaptation) with DP-SGD: https://github.com/pytorch/opacus/blob/main/tutorials/building_text_classifier.ipynb

LoRA is a parameter-efficient fine-tuning method: instead of updating the full weight matrices, it trains small low-rank adapter matrices, which reduces the number of trainable parameters by orders of magnitude while maintaining accuracy comparable to full fine-tuning. It can be combined with Opacus training in a few lines of code and requires no conceptual changes to the privacy analysis.
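To illustrate why this works well with DP-SGD, here is a minimal, self-contained sketch of the LoRA idea in plain PyTorch (not the tutorial's code, which uses standard tooling): a frozen linear layer is augmented with a trainable low-rank update. Only the adapter parameters require gradients, so per-sample gradient clipping and noising in Opacus apply to a much smaller parameter set. The class name `LoRALinear` and the rank/scaling choices below are illustrative assumptions, not an Opacus API.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA wrapper: y = W x + (alpha / r) * B A x, with W frozen."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay frozen
        # Low-rank factors: A (r x in), B (out x r); B starts at zero so the
        # adapted layer initially matches the base layer exactly.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

# A 768-dim layer (typical of BERT-sized models) with rank-8 adapters:
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / total: {total}")  # ~2% of parameters train
```

A model built from such layers can then be passed to Opacus as usual (e.g. via `PrivacyEngine.make_private`); since the frozen base weights receive no gradients, only the adapter parameters participate in clipping and noise addition.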

Feel free to take a look!