How to add L1 or L2 regularization to weights in PyTorch

In TensorFlow, we can add L1 or L2 regularization to layers in a sequential model. I couldn't find an equivalent approach in PyTorch. How can we add regularization to weights in PyTorch in the definition of the net:

import torch
import torch.nn.functional as F

class Net(torch.nn.Module):
    def __init__(self, n_feature, n_hidden, n_output):
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(n_feature, n_hidden)   # hidden layer
        """ How to add a L1 regularization after a certain hidden layer?? """
        """ OR How to add a L1 regularization after a certain hidden layer?? """
        self.predict = torch.nn.Linear(n_hidden, n_output)   # output layer

    def forward(self, x):
        x = F.relu(self.hidden(x))      # activation function for hidden layer
        x = self.predict(x)             # linear output
        return x

net = Net(n_feature=1, n_hidden=10, n_output=1)     # define the network
# print(net)  # net architecture
optimizer = torch.optim.SGD(net.parameters(), lr=0.2)
loss_func = torch.nn.MSELoss()  # this is for regression mean squared loss

There are no L1 or L2 regularization layers in PyTorch. Instead, regularization is handled in two ways. For L2, most optimizers (including `torch.optim.SGD`, not just Adagrad) accept a `weight_decay` parameter, which adds L2 weight decay on all parameters passed to the optimizer. For L1, there is no optimizer parameter; you add the penalty term to the loss yourself in the training loop.
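A minimal sketch of both approaches, using the `Net` from the question with hypothetical toy data and a made-up L1 strength `l1_lambda`:

```python
import torch
import torch.nn.functional as F

class Net(torch.nn.Module):
    def __init__(self, n_feature, n_hidden, n_output):
        super().__init__()
        self.hidden = torch.nn.Linear(n_feature, n_hidden)   # hidden layer
        self.predict = torch.nn.Linear(n_hidden, n_output)   # output layer

    def forward(self, x):
        x = F.relu(self.hidden(x))      # activation function for hidden layer
        return self.predict(x)          # linear output

net = Net(n_feature=1, n_hidden=10, n_output=1)

# L2: weight_decay adds an L2 penalty on every parameter in the optimizer
optimizer = torch.optim.SGD(net.parameters(), lr=0.2, weight_decay=1e-4)
loss_func = torch.nn.MSELoss()

# Hypothetical toy data, just for illustration
x = torch.randn(16, 1)
y = torch.randn(16, 1)

l1_lambda = 1e-3  # hypothetical L1 strength
for _ in range(5):
    optimizer.zero_grad()
    loss = loss_func(net(x), y)
    # L1: manually add the absolute-value norm of a chosen layer's weights
    loss = loss + l1_lambda * net.hidden.weight.abs().sum()
    loss.backward()
    optimizer.step()
```

Note that `weight_decay` regularizes *all* parameters (including biases), while the manual L1 term above targets only `net.hidden.weight`, which answers the "after a certain hidden layer" part of the question.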