Segmentation fault when using custom tensor

A minimal example to reproduce:

import torch


class MyTensor(torch.Tensor):
    pass


actions = torch.rand(size=(10, 3), device='cuda')
actions = MyTensor(actions).requires_grad_() + 1

actions[:, 0] = 0.01
actions.retain_grad()

The last line, actions.retain_grad(), throws a segmentation fault. If I comment out actions[:, 0] = 0.01, it works fine. Why is this happening?
Thanks in advance!

I cannot reproduce this issue with 2.0.0+cu118 and get:

import torch


class MyTensor(torch.Tensor):
    pass


actions = torch.rand(size=(10, 3), device='cuda')
actions = MyTensor(actions).requires_grad_() + 1

actions[:, 0] = 0.01
actions.retain_grad()
print(actions)
# MyTensor([[0.0100, 1.0819, 1.2367],
#           [0.0100, 1.1117, 1.2771],
#           [0.0100, 1.3408, 1.4321],
#           [0.0100, 1.8467, 1.1795],
#           [0.0100, 1.9159, 1.1804],
#           [0.0100, 1.0648, 1.5493],
#           [0.0100, 1.3131, 1.5877],
#           [0.0100, 1.5165, 1.1700],
#           [0.0100, 1.6162, 1.8943],
#           [0.0100, 1.1664, 1.7574]], device='cuda:0',
#          grad_fn=<AsStridedBackward0>)

Could you update your PyTorch version in case you are using an older one and check if you are still seeing this issue?
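For reference, you can print the installed version and the CUDA build it ships with to confirm what you are running:

import torch

print(torch.__version__)          # e.g. 2.0.0+cu118
print(torch.version.cuda)         # CUDA version the binary was built against
print(torch.cuda.is_available())  # confirms the GPU is visible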
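If updating does not help, you could also try constructing the subclass via Tensor.as_subclass instead of calling MyTensor(...) directly. as_subclass creates a MyTensor instance that shares the same data pointer rather than going through the plain construction path, which might avoid whatever triggers the crash on your setup; this is just a guess, since I cannot reproduce the segfault here:

import torch


class MyTensor(torch.Tensor):
    pass


actions = torch.rand(size=(10, 3), device='cuda')
# as_subclass shares the storage of the original tensor instead of
# constructing a new MyTensor from it
actions = actions.as_subclass(MyTensor).requires_grad_() + 1

actions[:, 0] = 0.01
actions.retain_grad()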