This is my network:

```
class Net(nn.Module):
    def __init__(self, device, channels=[2, 10, 10, 10, 10, 10, 1]):
        super(Net, self).__init__()
        weights = []
        biases = []
        gammas = []
        N = len(channels)
        self.layers = N - 1
        for i in range(self.layers):
            weights.append(nn.Parameter(torch.randn((channels[i], channels[i+1]))).to(device))
            biases.append(nn.Parameter(torch.randn((channels[i+1]))).to(device))
            gammas.append(nn.Parameter(torch.randn((channels[i+1]))).to(device))
        self.weights = weights
        self.biases = biases
        self.gammas = gammas
        self.nl = torch.tanh

    def forward(self, inp):
        for i in range(self.layers - 1):
            x_1 = torch.mm(inp, self.weights[i])
            x_2 = x_1 + self.biases[i]
            x_2 = x_2 * self.gammas[i]
            inp = self.nl(x_2)
        x_1 = torch.mm(inp, self.weights[self.layers-1]) + self.biases[self.layers-1]
        out = x_1 * self.gammas[self.layers-1]
        return out
```

Now when I try to use autograd on the input:

```
out = net(inp)
out_inp = torch.autograd.grad(out, inp, create_graph=True)
```

I get two errors.

- If I use a batch size larger than one, say a 4x2 matrix as `inp` (4 samples of 2 features each), I get the following error:

```
RuntimeError: grad can be implicitly created only for scalar outputs
```

However, for this network the 4 rows are independent samples, so it should be possible to compute 4 separate gradients, one per sample. Is there a way to get this effect?
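A sketch of one possible workaround (using a toy row-wise function instead of the `Net` above, so the example is self-contained): since each output row depends only on its own input row, passing `grad_outputs=torch.ones_like(out)` backpropagates from the sum of the outputs, and the resulting input gradient contains each sample's gradient in its own row.

```python
import torch

# Toy stand-in for a per-sample network: each output row depends
# only on the matching input row.
inp = torch.randn(4, 2, requires_grad=True)
out = (inp ** 2).sum(dim=1, keepdim=True)  # shape (4, 1): one scalar per sample

# autograd.grad needs a scalar output unless grad_outputs is given;
# ones_like(out) is equivalent to differentiating out.sum(), which is
# fine here because sample i's output ignores sample j's input.
(grad_inp,) = torch.autograd.grad(
    out, inp,
    grad_outputs=torch.ones_like(out),
    create_graph=True,
)
# grad_inp has shape (4, 2); row i holds d out[i] / d inp[i]
```

Whether this is valid for your own network depends on the samples really being independent; if the forward pass mixed rows (e.g. via batch norm), the rows of `grad_inp` would no longer be clean per-sample gradients.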

- If I use a batch size of 1, I get the following error:

```
RuntimeError: One of the differentiated Tensors does not require grad
```

So I guess I have to set `requires_grad=True` for `inp`. However, I cannot do that because it is a tensor object that I use as input. Is there any way I can do this?
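For what it's worth, a minimal sketch of flipping the flag on an already-existing tensor (assuming `inp` is a leaf tensor of floating-point dtype):

```python
import torch

inp = torch.randn(1, 2)     # an existing tensor, e.g. loaded input data

# Enable gradient tracking in place on the leaf tensor:
inp.requires_grad_(True)

# Or, if you'd rather not touch the original, make a detached copy
# that requires grad:
inp2 = inp.detach().clone().requires_grad_(True)
```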

Probably I should elaborate: when I try to set `requires_grad=True` on the tensor, I get the following error:

```
TypeError: as_tensor() got an unexpected keyword argument 'requires_grad'
```
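If I read that error right, `torch.as_tensor` simply does not accept a `requires_grad` keyword; `torch.tensor` does. A small sketch of both routes (the NumPy array here is a made-up placeholder for your real input data):

```python
import torch
import numpy as np

data = np.array([[0.1, 0.2]], dtype=np.float32)  # placeholder input data

# torch.as_tensor(data, requires_grad=True)   # TypeError: no such kwarg

# Option 1: torch.tensor copies the data and accepts requires_grad directly.
inp = torch.tensor(data, requires_grad=True)

# Option 2: convert first, then flip the flag in place on the leaf tensor.
inp_b = torch.as_tensor(data).requires_grad_(True)
```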

Thanks in advance!