Is there a cost-sensitive loss function implementation in PyTorch?

I would like to implement a cost-sensitive loss function in PyTorch. My two-class training dataset is heavily imbalanced: 75% of the examples are labelled ‘0’ and only 25% are labelled ‘1’.
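For context, the simplest thing I have tried is passing class weights to `nn.CrossEntropyLoss`, with the weights set inversely proportional to the class frequencies. The inverse-frequency weighting is just my own guess, not something taken from the resources below:

```python
import torch
import torch.nn as nn

# My dataset: 75% label 0, 25% label 1.
# Guess: weight each class inversely to its frequency, so errors on
# the rare class '1' count three times as much as errors on class '0'.
class_counts = torch.tensor([75.0, 25.0])
weights = class_counts.sum() / (2.0 * class_counts)  # tensor([0.6667, 2.0000])

criterion = nn.CrossEntropyLoss(weight=weights)

# logits: (batch, 2) raw model outputs; targets: (batch,) of 0s and 1s
logits = torch.randn(8, 2)
targets = torch.randint(0, 2, (8,))
loss = criterion(logits, targets)
```

Is this already “cost-sensitive learning”, or only a special case of it?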

I am new to PyTorch, but my supervisor is adamant that I use it (they have more experience with it). I found some implementations in Keras, but I am not strong enough at coding to port them to PyTorch.

I have read around for resources on building a cost-sensitive loss function. This paper uses something that I think might work (IEEE Xplore full-text PDF), but I do not understand how the code is implemented, despite having access to it in the AttnSleep repo: https://github.com/emadeldeen24/AttnSleep/blob/f993511426900f9fca20594a738bf8bee1116381/util.py

This Medium article describes the math in great detail, but I do not follow it: “How to do Cost-Sensitive Learning” by Joe Tenini, PhD (Red Ventures Data Science & Engineering). As far as I can tell, the idea is a cost matrix C where C[i][j] is the cost of predicting class j when the true class is i, and the loss weights each kind of error by its cost, but I get lost after that.

Here is a Keras implementation that I have had trouble converting to PyTorch: https://towardsdatascience.com/fraud-detection-with-cost-sensitive-machine-learning-24b8760d35d9

I also found a PyTorch implementation, but I have trouble understanding it: the PyTorch forum thread “Dealing with imbalanced datasets in pytorch”, post #21 by snakers41.
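For what it is worth, here is my rough attempt at a per-example weighted binary cross-entropy, which is how I currently read these approaches. The cost values (1.0 for false positives, 3.0 for false negatives) are placeholders I made up, not numbers from any of the links:

```python
import torch
import torch.nn.functional as F

def cost_sensitive_bce(logits, targets, cost_fp=1.0, cost_fn=3.0):
    # Per-example binary cross-entropy, no reduction yet.
    per_example = F.binary_cross_entropy_with_logits(
        logits, targets.float(), reduction="none")
    # Weight positives by the cost of missing them (false negative)
    # and negatives by the cost of a false alarm (false positive).
    # The 1.0 / 3.0 defaults are placeholders I chose, not published values.
    weights = torch.where(targets == 1,
                          torch.full_like(per_example, cost_fn),
                          torch.full_like(per_example, cost_fp))
    return (weights * per_example).mean()

# logits: (batch,) raw scores for the positive class
logits = torch.randn(8)
targets = torch.randint(0, 2, (8,))
loss = cost_sensitive_bce(logits, targets)
```

I also noticed that `F.binary_cross_entropy_with_logits` has a `pos_weight` argument, so maybe the built-in route is simpler, but I am not sure it can express a full cost matrix.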

Could you please help me understand the implementation of the cost-sensitive loss function in that last link?

Thank you.