This works, but the weight and bias values are random. How can I assign values to them without training on samples?

```
import torch
import torch.nn as nn
from torch.autograd import Variable

# nn.Linear
x = Variable(torch.Tensor([[1], [4]]), requires_grad=True)
# creates a wx + b layer
f = nn.Linear(1, 1)
print(f.weight, f.bias)
y = f(x)
print(y)
```

I have tried the following to assign values to `weight` and `bias`:

```
f.weight = 2.0                               # TypeError: cannot assign 'float' as parameter
f.bias = 1.0
f.weight = torch.Tensor([2])                 # TypeError: cannot assign a plain Tensor as parameter
f.bias = torch.Tensor([1])
f.weight = nn.Parameter(torch.Tensor([2]))   # assigns, but shape (1,) does not match the
f.bias = nn.Parameter(torch.Tensor([1]))     # expected weight shape (1, 1) for Linear(1, 1)
```

None of these seems to work.

If you want to operate directly on the tensors containing the parameters, you can access them using `f.weight.data` and `f.bias.data`.
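For example, a minimal sketch of this approach (the `2.0`/`1.0` values are just illustrative; note that the assigned tensors must match the layer's parameter shapes):

```python
import torch
import torch.nn as nn

f = nn.Linear(1, 1)
# Overwrite the underlying tensors in place.
# weight has shape (out_features, in_features), bias has shape (out_features,)
f.weight.data = torch.tensor([[2.0]])
f.bias.data = torch.tensor([1.0])

y = f(torch.tensor([[1.0], [4.0]]))
print(y)  # 2*x + 1 -> [[3.], [9.]]
```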

I am not sure what your context is, but `state_dict()` and `load_state_dict()` are the recommended methods to save and load the values of a model's parameters.
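A short sketch of that route: for a bare `nn.Linear`, the state dict keys are `"weight"` and `"bias"`, so you can load a dict with those keys directly (the concrete values here are just illustrative).

```python
import torch
import torch.nn as nn

f = nn.Linear(1, 1)
# Load a state dict holding the desired parameter values;
# shapes must match the layer's parameters
f.load_state_dict({"weight": torch.tensor([[2.0]]),
                   "bias": torch.tensor([1.0])})

print(f(torch.tensor([[4.0]])))  # 2*4 + 1 -> [[9.]]
```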


Does this still apply now that `.data` is deprecated alongside `Variable` after PyTorch 0.4?

As you’ve mentioned, the `.data` attribute and `Variable`s are deprecated, so you could wrap the code in a `with torch.no_grad()` block and manipulate the parameters via e.g. `param.copy_(tensor)`.
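A minimal sketch of the post-0.4 idiom (values are illustrative): `copy_()` writes into the parameter in place, and the `no_grad()` context keeps autograd from recording the assignment.

```python
import torch
import torch.nn as nn

f = nn.Linear(1, 1)
with torch.no_grad():
    # In-place copy into the existing parameters, untracked by autograd
    f.weight.copy_(torch.tensor([[2.0]]))
    f.bias.copy_(torch.tensor([1.0]))

print(f(torch.tensor([[4.0]])))  # 2*4 + 1 -> [[9.]]
```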
