Inplace operation not raising error during backward pass

Strange behaviour: PyTorch 0.4 does not raise an error for an in-place operation in my network during the backward pass, while the older PyTorch 0.3 detects the in-place operation and raises an error.

For debugging I tried:

trans_params = self.locator(out_seq[1])
print(trans_params.requires_grad)
trans_params.zero_()             # in-place modification of the graph output
trans_params.sum().backward()    # expected to raise here, but it does not
pdb.set_trace()

No error is raised!!!

When I try this small piece of code in the terminal, it properly raises an error:

>>> a = torch.tensor([1,2,3.], requires_grad = True)
>>> out = a.sigmoid()
>>> out.zero_()
tensor([ 0.,  0.,  0.])
>>> out.sum().backward()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/torch/tensor.py", line 93, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/__init__.py", line 89, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation