Hello,
I’m having trouble implementing a GLM where y follows a Tweedie distribution using the statsmodels package. Is there a way to do this in PyTorch? I’ve searched and haven’t found any literature or posts on it.
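For reference, the statsmodels fit I was attempting looks roughly like this (just a sketch; var_power=1.5 is an example power, and X/Y are the same design matrix and response I use in the PyTorch code below):
```
import statsmodels.api as sm

# Tweedie GLM with variance power 1.5 and the default log link
tweedie_glm = sm.GLM(Y, sm.add_constant(X),
                     family=sm.families.Tweedie(var_power=1.5))
result = tweedie_glm.fit()
print(result.summary())
```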
UPDATE
I’ve tried to define a custom loss function like this:
```
def tweedieloss(predicted, observed):
    '''
    Custom loss function designed to minimize the deviance using stochastic gradient descent
    '''
    p = torch.tensor([1.5])
    QLL = predicted**-p(((predicted*observed)/(torch.tensor([1])-p)) - ((predicted**2)/(torch.tensor([2])-p)))
    QLL.cuda()
    return -torch.abs(QLL)
```
I’m still not sure if this is exactly correct; however, I do know it’s giving me an error.
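For reference, what I’m trying to compute per observation is the Tweedie quasi-log-likelihood for a power 1 < p < 2, QLL(μ, y) = y·μ^(1−p)/(1−p) − μ^(2−p)/(2−p), where μ is the model’s prediction and y the observed value, and then minimize its negative with p = 1.5.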
I’m using it in the following model:
```
import numpy as np
import torch
import torch.nn as nn

# Create the linear regression model
model = nn.Linear(X.shape[1], 1)

# Loss and optimizer
criterion = tweedieloss
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# PyTorch uses float32 by default
# Numpy creates float64 by default
inputs = torch.from_numpy(X.astype(np.float32))
targets = torch.from_numpy(Y.astype(np.float32))

# Train the model
n_epochs = 20
losses = []
for it in range(n_epochs):
    # zero the parameter gradients
    optimizer.zero_grad()

    # Forward pass
    outputs = model(inputs)
    loss = criterion(outputs, targets)

    # keep the loss so we can plot it later
    losses.append(loss.item())

    # Backward and optimize
    loss.backward()
    optimizer.step()

    print(f'Epoch {it+1}/{n_epochs}, Loss: {loss.item():.4f}')
```
The error I'm getting is the following:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-182-87301b9d4374> in <module>
9 # Forward pass
10 outputs = model(inputs)
---> 11 loss = criterion(outputs, targets)
12
13 # keep the loss so we can plot it later
<ipython-input-180-288c1e1a6f6d> in tweedieloss(predicted, observed)
5 p = torch.tensor([1.5])
6
----> 7 QLL = predicted**-p(((predicted*observed)/(torch.tensor([1])-p)) - ((predicted**2)/(torch.tensor([2])-p)))
8 QLL.cuda()
9 return -torch.abs(QLL)
RuntimeError: expected device cuda:0 but got device cpu
```
I'm not sure which part of that custom loss function I should send to CUDA. When I try to run everything on the CPU, I get the following:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-188-87301b9d4374> in <module>
9 # Forward pass
10 outputs = model(inputs)
---> 11 loss = criterion(outputs, targets)
12
13 # keep the loss so we can plot it later
<ipython-input-183-43f44140ed7b> in tweedieloss(predicted, observed)
5 p = torch.tensor([1.5])
6
----> 7 QLL = predicted**-p(((predicted*observed)/(torch.tensor([1])-p)) - ((predicted**2)/(torch.tensor([2])-p)))
8 return -torch.abs(QLL)
TypeError: 'Tensor' object is not callable
```
I'm not sure how to implement this custom loss function.
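My best guess, and I'd appreciate a sanity check, is that `predicted**-p(...)` is missing a multiplication sign, so Python tries to call the tensor `p` like a function (hence the TypeError), and that creating `p` and the other constants as CPU tensors is what causes the device mismatch on the GPU. Below is a sketch of what I think the loss should look like, with the constants as plain Python floats and the result reduced to a scalar so that `loss.backward()` works, but I'm not certain this is the right form of the deviance:
```
def tweedieloss(predicted, observed, p=1.5):
    '''
    Negative Tweedie quasi-log-likelihood for a power 1 < p < 2.
    Plain Python floats are used for the constants, so the arithmetic
    happens on whatever device `predicted` and `observed` live on.
    Note: predicted must be strictly positive for the fractional powers.
    '''
    QLL = (observed * predicted**(1 - p)) / (1 - p) - predicted**(2 - p) / (2 - p)
    return -torch.mean(QLL)  # reduce to a scalar so loss.backward() works
```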