Why is requires_grad False here?

I have an nn.Module with the following parameters:

def __init__(self):
    super().__init__()
    self.a = torch.nn.Parameter(torch.randn(1))
    self.b = torch.nn.Parameter(torch.randn(10))
    self.c = torch.nn.Parameter(torch.randn((10, 1)))
    self.d = torch.nn.Parameter(torch.randn(10))

Now in my forward function I do something like:

def forward(self, z):
    yh = list()
    for i in z:
        yh.append(self.a + torch.sum(self.b * torch.tanh(self.c @ i + self.d)))

    return torch.Tensor(yh).view(z.shape[0], 1)

Now, this output tensor has requires_grad set to False. Is there a specific way to convert a list of tensors into a single tensor that keeps gradients?

torch.Tensor does not track gradients by default.
By casting with torch.Tensor you copy the raw values out of your gradient-tracking tensors into a brand-new tensor, which detaches the result from the autograd graph.
I recommend using torch.stack (or torch.cat) instead of torch.Tensor, since those operations are differentiable and keep the output connected to your parameters.
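For instance, here is a minimal sketch of the fix, assuming z has shape (N, 1) as your view call suggests. Since self.a has shape (1,), each list entry is a 1-element tensor, so both torch.stack and torch.cat would work; torch.stack is shown because it also handles the case where the entries are 0-dim scalars.

```python
import torch

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.a = torch.nn.Parameter(torch.randn(1))
        self.b = torch.nn.Parameter(torch.randn(10))
        self.c = torch.nn.Parameter(torch.randn((10, 1)))
        self.d = torch.nn.Parameter(torch.randn(10))

    def forward(self, z):
        yh = []
        for i in z:
            # each entry is a shape-(1,) tensor that still tracks gradients
            yh.append(self.a + torch.sum(self.b * torch.tanh(self.c @ i + self.d)))
        # torch.stack builds the output from the graph-connected tensors,
        # unlike torch.Tensor, which copies the values and detaches them
        return torch.stack(yh).view(z.shape[0], 1)

model = Model()
z = torch.randn(4, 1)
out = model(z)
print(out.requires_grad)  # True
```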