# Linear regression basic doubts

I have some basic doubts.

1. If `loss.backward()` is called, shouldn't `requires_grad=True` be set when creating `x_data` or `y_data`?
2. In the line `y_pred = model(x_data)`, how does this call the `forward` function? In `__init__`, `x_data` is not taken as a parameter and `forward` is never called explicitly. How does it return the predicted y?
3. What does `model(hour_var).data.item()` mean in the last statement?
Thanks

```python
from torch import nn
import torch
from torch import tensor

x_data = tensor([[1.0], [2.0], [3.0]])
y_data = tensor([[2.0], [4.0], [6.0]])


class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear = torch.nn.Linear(1, 1)  # One in and one out

    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred


model = Model()

criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(500):
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    print(f'Epoch: {epoch} | Loss: {loss.item()}')
    optimizer.zero_grad()  # clear accumulated gradients before backward()
    loss.backward()
    optimizer.step()

# After training
hour_var = tensor([[4.0]])
y_pred = model(hour_var)
print("Prediction (after training)", 4, model(hour_var).data.item())
```

1. Usually you don’t care about the gradients in the input tensor or the target, so for most use cases you don’t need to set `requires_grad=True` for these tensors.

2. When you call `model(x_data)`, `nn.Module.__call__` is invoked, which registers/executes some hooks and then calls `forward`, as seen here.

3. Don't use the `.data` attribute, as it might yield unwanted side effects. The `item()` call returns a Python scalar, which can be used easily, e.g. for logging purposes.
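To illustrate all three points, here is a minimal sketch (the `Net` name and the toy data are just illustrative):

```python
import torch
from torch import nn

# Defining forward() is all that's needed: calling the instance goes
# through nn.Module.__call__, which dispatches to forward().
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(1, 1)

    def forward(self, x):
        return self.linear(x)

model = Net()

# Inputs and targets don't need requires_grad=True; only the parameters
# do, and nn.Linear sets that up automatically.
x = torch.tensor([[1.0], [2.0]])
print(x.requires_grad)                    # False
print(model.linear.weight.requires_grad)  # True

y = model(x)  # calls Net.forward via nn.Module.__call__
print(y.requires_grad)  # True: the output is part of the autograd graph

# item() converts a one-element tensor to a plain Python number.
loss = ((y - torch.tensor([[2.0], [4.0]])) ** 2).sum()
print(type(loss.item()))  # <class 'float'>
```

Gradients will then flow from `loss` back to `model.linear.weight` and `model.linear.bias` when you call `loss.backward()`, without the inputs ever requiring grad.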
