Where is the bias used in this example?

I am a bit confused and new to PyTorch. I was going through PyTorch with Examples, where a prediction is computed as

`y_pred = inputs * weights + bias`

but in the example below, where did they use the bias?

```python
import numpy as np

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random input and output data
x = np.random.randn(N, D_in)
y = np.random.randn(N, D_out)

# Randomly initialize weights
w1 = np.random.randn(D_in, H)
w2 = np.random.randn(H, D_out)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y
    h = x.dot(w1)
    h_relu = np.maximum(h, 0)
    y_pred = h_relu.dot(w2)

    # Compute and print loss
    loss = np.square(y_pred - y).sum()
    print(t, loss)

    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.T.dot(grad_y_pred)
    grad_h_relu = grad_y_pred.dot(w2.T)
    grad_h = grad_h_relu.copy()
    grad_h[h < 0] = 0
    grad_w1 = x.T.dot(grad_h)

    # Update weights
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2
```
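For reference, here is one way the example could be extended with explicit bias terms. The `b1`/`b2` names, zero initialization, and random seed are my additions, not part of the tutorial; the key points are that the biases are broadcast across the batch in the forward pass and that each bias gradient is the corresponding pre-activation gradient summed over the batch:

```python
import numpy as np

np.random.seed(0)  # for reproducibility (not in the original)
N, D_in, H, D_out = 64, 1000, 100, 10

x = np.random.randn(N, D_in)
y = np.random.randn(N, D_out)

w1 = np.random.randn(D_in, H)
b1 = np.zeros(H)       # hypothetical hidden-layer bias
w2 = np.random.randn(H, D_out)
b2 = np.zeros(D_out)   # hypothetical output-layer bias

learning_rate = 1e-6
for t in range(500):
    # Forward pass: each bias is broadcast over the N rows of the batch
    h = x.dot(w1) + b1
    h_relu = np.maximum(h, 0)
    y_pred = h_relu.dot(w2) + b2

    loss = np.square(y_pred - y).sum()

    # Backprop: the gradient of a bias is its layer's pre-activation
    # gradient summed over the batch dimension
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.T.dot(grad_y_pred)
    grad_b2 = grad_y_pred.sum(axis=0)
    grad_h_relu = grad_y_pred.dot(w2.T)
    grad_h = grad_h_relu.copy()
    grad_h[h < 0] = 0
    grad_w1 = x.T.dot(grad_h)
    grad_b1 = grad_h.sum(axis=0)

    # Update weights and biases
    w1 -= learning_rate * grad_w1
    b1 -= learning_rate * grad_b1
    w2 -= learning_rate * grad_w2
    b2 -= learning_rate * grad_b2
```

Dropping `b1` and `b2` recovers the tutorial's code exactly, which is the answer to the question: the example simply omits the bias.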


If you are using that page, note that its main purpose is to demonstrate PyTorch's conventions and its similarities to and differences from NumPy; it is not really focused on teaching deep learning itself.

More to the point, a bias is not mandatory in learning models and can be omitted, which is what this example does. For instance, another example on that page (this link) uses torch.nn.Linear(D_in, H), and the comment says it holds both weights and biases. If we look at the documentation of nn.Linear, however, we see that bias is an optional argument.
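A quick sketch of that optional argument (the layer sizes here are just the ones from the example; when `bias=False` is passed, the layer's `bias` attribute is `None`):

```python
import torch

# By default nn.Linear allocates both a weight and a bias
with_bias = torch.nn.Linear(1000, 100)
print(with_bias.bias.shape)    # a (100,)-shaped parameter

# Passing bias=False omits the bias entirely
without_bias = torch.nn.Linear(1000, 100, bias=False)
print(without_bias.bias)       # None
```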


Thanks for the info, I was wrong about the bias.