Question about an in-place operation

I got the following error message when I tried to run my code:
"RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation"
However, I can’t find any in-place operation in my code.
Here’s my code:

    def forward(self, sentence, state):
        candidates = state.clone()
        for j in range(self.nslots):
            w_j = self.key_vector_FC[j](Variable(torch.Tensor([1])))
            gate_j = F.sigmoid(state[j] * sentence + state[j] * w_j)
            update = F.relu(self.U(state[j]) + self.V(w_j) + self.W(sentence), inplace=False)
            candidates[j] = candidates[j] + gate_j * update
            norm = candidates[j].norm(p=2, dim=0, keepdim=True)  # .detach()
            candidates[j] = candidates[j].div(norm)  # problem here!
        state = candidates
        return state

I tried replacing the division operation with

state[j] = state[j].div(2)

and the problem went away.
I also tried the .detach() method, but it made no difference.
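For what it's worth, the index assignments themselves (`candidates[j] = ...`) are in-place writes into the cloned tensor, and autograd still needs the pre-division value of `candidates[j]` to compute the gradient of `.div(norm)`. A common workaround is to collect the per-slot results in a Python list and `torch.stack` them at the end, so no tensor needed by autograd is overwritten. Here is a minimal sketch of that pattern — the gate/update math is simplified (no `w_j`, `U`, `V`, `W`), so treat the exact expressions as placeholders, not your model:

```python
import torch

def forward_no_inplace(state, sentence):
    # Collect each slot's result in a list instead of assigning into
    # a cloned tensor; list appends are not in-place tensor writes.
    outs = []
    for j in range(state.shape[0]):
        gate_j = torch.sigmoid(state[j] * sentence)      # simplified gate
        update = torch.relu(state[j] + sentence)         # simplified update
        cand = state[j] + gate_j * update
        cand = cand / cand.norm(p=2)                     # out-of-place division
        outs.append(cand)
    # Build a fresh tensor from the list; nothing autograd needs was mutated.
    return torch.stack(outs)

state = torch.randn(10, 5, requires_grad=True)
sentence = torch.randn(5)
out = forward_no_inplace(state, sentence)
out.sum().backward()  # backward runs without the in-place RuntimeError
```

This also explains why `state[j] = state[j].div(2)` happened to work: dividing by a constant does not require saving the pre-division tensor for the backward pass, while dividing by a tensor-valued `norm` does.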

What should be the shape of sentence and state?

The second dim of state equals the first dim of sentence.
For example,
sentence.shape = [5]
state.shape = [10,5]
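With those shapes, each row state[j] has the same length as sentence, so the elementwise products in the gate are well-defined. A quick sanity check:

```python
import torch

sentence = torch.randn(5)
state = torch.randn(10, 5)

# state[j] has shape [5], matching sentence, so elementwise
# products such as state[j] * sentence are valid.
print((state[0] * sentence).shape)  # torch.Size([5])
```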