I get `TypeError: expected np.ndarray (got int)`

from this line in pytorch:

`a = Variable(torch.from_numpy(a).float().unsqueeze(0))`

any help for solving this would be appreciated.

Is the variable `a` an int? Can you print out what it is and its type?

It is `0`

and the type is `<class 'int'>`

Well, that would be your problem. You can't use `torch.from_numpy` on an int. Just do this:

```
a = Variable(torch.tensor(a).float().unsqueeze(0))
```
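For reference, a minimal sketch of that conversion: `torch.tensor` accepts a plain Python int directly, and `unsqueeze(0)` turns the resulting 0-d tensor into a 1-element tensor.

```python
import torch

a = 0  # a plain Python int, as reported above
t = torch.tensor(a).float().unsqueeze(0)
print(t, t.shape)  # tensor([0.]) torch.Size([1])
```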

Thanks, but the change leads to another error I can't solve:

```
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x1 and 3x256)
```

which comes from this:

```
def forward(self, a):
    x = F.relu(self.linear1(a))
    x = F.relu(self.linear2(x))
    x = torch.tanh(self.linear3(x))
    return x
```

You need to change the input size of your linear layer to 1.
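For context, the mismatch can be reproduced with a toy layer. The sizes here are read off the error message (the transposed weight is 3x256, so the layer was apparently declared with 3 input features) and are assumptions about the original model:

```python
import torch
import torch.nn as nn

a = torch.tensor(0).float().unsqueeze(0).unsqueeze(0)  # shape (1, 1)

bad = nn.Linear(3, 256)   # expects 3 input features -> shape mismatch
try:
    bad(a)
except RuntimeError as e:
    print(e)  # mat1 and mat2 shapes cannot be multiplied (1x1 and 3x256)

good = nn.Linear(1, 256)  # expects 1 input feature, matching the (1, 1) input
print(good(a).shape)      # torch.Size([1, 256])
```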

How is it possible to reshape the input `a`? I used `a = a.view(1)` and the error still exists.

No, not to a shape that large, unless you want to copy the value a bunch of times. If you do want to just copy the value to the shape you need, you can do this:

```
a = torch.full(shape_you_need, a)
```

I don't fully understand the error `RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x1 and 3x256)`. Does it say the input of the linear layer should be changed to (1x1)? I tried different shapes in `a = torch.full(shape_you_need, a)`, but the following error persists:

```
TypeError: full() received an invalid combination of arguments - got (tuple, Tensor), but expected one of:
* (tuple of ints size, Number fill_value, *, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
* (tuple of ints size, Number fill_value, *, tuple of names names, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
```

Yes, the error is basically saying that the input size of your linear layer and the shape of your input need to match, but they do not. Can you provide the new code with `torch.full`?

```
a = torch.full((1,1), a)
```

The shape needs to match the number of inputs to your linear layer, so:

```
a = torch.full((1,256), a)
```

Also, it needs to be `a` before you turn it into a tensor, i.e. while it is still an int.
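To illustrate: `torch.full` expects a plain number for `fill_value` (at least on the PyTorch version in this thread), so it has to be the original int rather than the tensor created earlier. A minimal sketch:

```python
import torch

a = 0  # the raw Python int, before any tensor conversion
x = torch.full((1, 256), a, dtype=torch.float)  # 1x256 tensor filled with 0.0
print(x.shape)  # torch.Size([1, 256])

# Passing a tensor instead is what raised the TypeError above:
# torch.full((1, 256), torch.tensor(a))
```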