How to share a convolution kernel's transpose with another kernel?

Thank you! The reason I do

self.conv2.weight[0, 0] = self.conv1.weight[0, 0].t()

is that the convolution kernel is a 4D tensor, i.e.:

[torch.FloatTensor of size output_channel x input_channel x kernel_height x kernel_width]

So I cannot just use conv2.weight = conv1.weight.t() to transpose all of the kernels.
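(For what it's worth, I think transposing every kernel at once would just mean swapping the two spatial dims; here is a quick sketch of what I mean, with w standing in for a conv weight:)

import torch

# Conv weight layout: [out_channels, in_channels, kernel_height, kernel_width]
w = torch.randn(6, 3, 5, 5)

# .t() is only defined for tensors with at most 2 dims, so it cannot
# transpose this 4D weight directly. Swapping the two spatial dims
# transposes every kernel in one go:
w_t = w.transpose(2, 3)
print(w_t.size())  # still 6 x 3 x 5 x 5, each 5x5 kernel transposed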

But even if I replace the conv layers with linear layers, like this:

import torch
import torch.nn as nn


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.linear1 = nn.Linear(5, 10)
        self.linear2 = nn.Linear(10, 5)

        # This is the line that raises the TypeError below:
        self.linear2.weight = self.linear1.weight.t()

    def forward(self, x):
        x = self.linear1(x)
        x = self.linear2(x)
        return x


net = Net()

Now I’m trying to share the whole weight tensor, but it still doesn’t work; it fails with:

Traceback (most recent call last):
  File "/home/hdl2/Desktop/pytorchstudy/test.py", line 23, in <module>
    net = Net()
  File "/home/hdl2/Desktop/pytorchstudy/test.py", line 15, in __init__
    self.linear2.weight = self.linear1.weight.t()
  File "/home/hdl2/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 281, in __setattr__
    .format(torch.typename(value), name))
TypeError: cannot assign 'torch.autograd.variable.Variable' as parameter 'weight' (torch.nn.Parameter or None expected)

From the traceback it seems Module.__setattr__ only accepts an nn.Parameter (or None) for weight, while .t() returns a plain Variable. So what should I do instead?
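Would computing the transpose on the fly in forward work, so that only one weight Parameter is ever registered? A rough sketch of what I have in mind (untested; bias2 is a name I made up, and as far as I know F.linear(x, w, b) computes x @ w.t() + b, so passing linear1.weight.t() as w should reproduce what linear2 with a tied, transposed weight would do):

import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.linear1 = nn.Linear(5, 10)
        # Only the second layer's bias is a separate Parameter; its
        # weight is derived from linear1's weight inside forward().
        self.bias2 = nn.Parameter(torch.zeros(5))

    def forward(self, x):
        x = self.linear1(x)
        # Reuse linear1's weight, transposed, as the second layer's
        # weight. Gradients should flow back into linear1.weight
        # from both layers, since no copy is made here.
        x = F.linear(x, self.linear1.weight.t(), self.bias2)
        return x


net = Net()
print(net(torch.randn(2, 5)).size())  # expecting (2, 5)

If that is the right idea, I guess the conv case could be handled the same way, e.g. with F.conv2d(x, self.conv1.weight.transpose(2, 3), ...)?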