Legacy autograd function with non-static forward method is deprecated and will be removed in 1.3

This code was working fine with torch 1.1 and CUDA 10.0, but now I have an RTX 3090 GPU, which doesn't support that torch version, so I am using CUDA 11.1 with torch 1.9.
What should I do to make this code work with this torch version without any errors?
Thank you in advance

Hey, I am getting this error: Legacy autograd function with non-static forward method is deprecated. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function )
How do I solve it?
class Detect(Function):
    def __init__(self, conf_thresh=0.01, top_k=200, nms_thresh=0.45):
        self.softmax = nn.Softmax(dim=-1)  # take the softmax over the last dim
        self.conf_thresh = conf_thresh
        self.top_k = top_k
        self.nms_thresh = nms_thresh

    @staticmethod
    def forward(self, loc_data, conf_data, dbox_list):
        num_batch = loc_data.size(0)  # batch_size (2, 4, 6, ... 32, 64, 128)
        num_dbox = loc_data.size(1)  # 8732
        num_classes = conf_data.size(2)  # 21

        conf_data = self.softmax(conf_data)
        # (batch_num, num_dbox, num_class) -> (batch_num, num_class, num_dbox)
        conf_preds = conf_data.transpose(2, 1)

        output = torch.zeros(num_batch, num_classes, self.top_k, 5)

        # process each image in the batch
        for i in range(num_batch):
            # compute bboxes from the offset information and the default boxes
            decode_boxes = decode(loc_data[i], dbox_list)

            # copy the confidence scores of the i-th image
            conf_scores = conf_preds[i].clone()

            for cl in range(1, num_classes):
                c_mask = conf_scores[cl].gt(self.conf_thresh)  # keep only confidences > 0.01
                scores = conf_scores[cl][c_mask]
                if scores.nelement() == 0:  # numel()
                    continue

                # expand the mask to the shape of decode_boxes for the computation
                l_mask = c_mask.unsqueeze(1).expand_as(decode_boxes)  # (8732, 4)
                boxes = decode_boxes[l_mask].view(-1, 4)  # (number of boxes with confidence > 0.01, 4)
                ids, count = nms(boxes, scores, self.nms_thresh, self.top_k)
                output[i, cl, :count] = torch.cat((scores[ids[:count]].unsqueeze(1), boxes[ids[:count]]), 1)

        return output

The link as well as this tutorial show how to write new-style autograd.Functions and give you some examples.
In your code you are initializing data in the __init__ method, which is wrong.
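
For reference, here is a minimal sketch of how the Detect function above could be rewritten in the new style: the hyperparameters move from __init__ into arguments of the static forward, and the class is called through .apply instead of being instantiated (decode and nms are the same helpers used in the original code). Note that Function.apply only takes positional arguments, and since no backward is defined this stays inference-only; if you never need gradients through Detect, turning it into a plain nn.Module works just as well.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Detect(torch.autograd.Function):

    @staticmethod
    def forward(ctx, loc_data, conf_data, dbox_list,
                conf_thresh=0.01, top_k=200, nms_thresh=0.45):
        num_batch = loc_data.size(0)
        num_classes = conf_data.size(2)

        # softmax over the class dim, then move classes in front for masking
        conf_preds = F.softmax(conf_data, dim=-1).transpose(2, 1)
        output = torch.zeros(num_batch, num_classes, top_k, 5)

        for i in range(num_batch):
            decoded_boxes = decode(loc_data[i], dbox_list)
            conf_scores = conf_preds[i].clone()
            for cl in range(1, num_classes):
                c_mask = conf_scores[cl].gt(conf_thresh)
                scores = conf_scores[cl][c_mask]
                if scores.nelement() == 0:
                    continue
                l_mask = c_mask.unsqueeze(1).expand_as(decoded_boxes)
                boxes = decoded_boxes[l_mask].view(-1, 4)
                ids, count = nms(boxes, scores, nms_thresh, top_k)
                output[i, cl, :count] = torch.cat(
                    (scores[ids[:count]].unsqueeze(1), boxes[ids[:count]]), 1)
        return output

# called through .apply with positional arguments, no instantiation:
# output = Detect.apply(loc_data, conf_data, dbox_list)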

By the way, if you want to replace a module with your function's .apply method while looping through self.features._modules.items(), you can use the code below:

for idx, module in self.features._modules.items():
    self.features._modules[idx] = YourTorchAutogradFunction.apply
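
To make that concrete, here is a small self-contained sketch (MyReLUFunction and the features model are made up for illustration). Note that this overwrites entries of the private _modules dict with plain callables: the forward pass still works, but the replaced entries are no longer nn.Module instances, so module-level utilities like state_dict() or named_modules() can break after the swap.

import torch
import torch.nn as nn

class MyReLUFunction(torch.autograd.Function):
    # hypothetical new-style function, used only to illustrate the replacement

    @staticmethod
    def forward(ctx, input):
        ctx.save_for_backward(input)
        return input.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[input < 0] = 0
        return grad_input

features = nn.Sequential(nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 2))

# replace every ReLU module with the raw .apply callable
for idx, module in features._modules.items():
    if isinstance(module, nn.ReLU):
        features._modules[idx] = MyReLUFunction.apply

out = features(torch.randn(3, 4))  # forward pass still works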

Dear Muhamed,
I would like to get in touch with you about this topic. Thank you.

Hey guys, I’m getting the same error, even though I’ve written my autograd function with torch.autograd.Function and @staticmethod. However, I’m using a class object from outside the autograd function:

class BleuScoreLoss(torch.autograd.Function):
    
    @staticmethod
    def forward(ctx, input, target, weights=[0.25]):

        max_n = input.size(0)
        weights = weights * max_n

        functional = BleuScore(max_n, weights)

        score, derivative = functional.bleu_score(target, input)

        score, derivative = torch.tensor(score), torch.tensor(derivative)

        ctx.save_for_backward(derivative)

        return score

    @staticmethod
    def backward(ctx, grad_output):

        derivative, = ctx.saved_tensors
        derivative = derivative.unsqueeze(-1)
        print(derivative)
        print(grad_output)

        teste = grad_output * derivative

        print(teste)

        return grad_output * derivative, None

Note that BleuScore is a class defined outside the autograd Function, without any static methods:

class BleuScore:

    def __init__(self, max_n=4, weights=[0.25]*4, dictionary=dataset.dictionary):

        self.max_n = max_n
        self.weights = weights
        self.dictionary = dictionary


    def _n_gram_generator(self, sentence, n=2, remove_repeating=False):

        [...]

Could it be this BleuScore class that is triggering the error? I’ve trained a simple neural network with the Adam optimizer, and the autograd function seems to work fine in both the forward and backward methods. The only drawback is that I can’t use functions like autograd.functional.jacobian or gradcheck.

Are you sure the BleuScoreLoss is raising the warning and not another function? If so, how are you calling it in your code? Are you using its .apply method?

That’s the thing. When I call BleuScoreLoss in my code through .apply, it doesn’t raise any error and everything runs fine, including the backpropagation:

content_loss = BleuScoreLoss()
fake_phrases = TextGenerator(noise, text)
content_cost += content_loss.apply(fake_phrases, text)

However, when I try to use autograd.functional.jacobian or gradcheck, this error is triggered:

tensor1 = torch.randn((5,1), dtype=torch.double, requires_grad=True)
tensor2 = torch.randn((5,1), dtype=torch.double)

loss = BleuScoreLoss()

#test = torch.autograd.gradcheck(loss, tensor1)
test = torch.autograd.gradcheck(loss, [tensor1, tensor2])

print(test)
d:\Python\lib\site-packages\torch\autograd\gradcheck.py in _gradcheck_helper(func, inputs, eps, atol, rtol, check_sparse_nnz, nondet_tol, check_undefined_grad, check_grad_dtypes, check_batched_grad, check_batched_forward_grad, check_forward_ad, check_backward_ad, fast_mode)
   1421     _check_inputs(tupled_inputs, check_sparse_nnz)
   1422 
-> 1423     func_out = func(*tupled_inputs)
   1424     outputs = _differentiable_outputs(func_out)
   1425     _check_outputs(outputs)

d:\Python\lib\site-packages\torch\autograd\function.py in __call__(self, *args, **kwargs)
    313 
...
--> 315         raise RuntimeError(
    316             "Legacy autograd function with non-static forward method is deprecated. "
    317             "Please use new-style autograd function with static forward method. "

RuntimeError: Legacy autograd function with non-static forward method is deprecated. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)

In your gradcheck call you are using the BleuScoreLoss instance directly instead of its apply method (which you used in your previous example), so try:

test = torch.autograd.gradcheck(loss.apply, [tensor1, tensor2])
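
For completeness, a minimal self-contained version of the working check, assuming BleuScoreLoss is defined as above (the instance isn’t even needed, since apply is a static entry point):

import torch

tensor1 = torch.randn((5, 1), dtype=torch.double, requires_grad=True)
tensor2 = torch.randn((5, 1), dtype=torch.double)

# gradcheck calls func(*inputs); passing .apply routes the call through the
# new-style static forward instead of the deprecated legacy __call__
test = torch.autograd.gradcheck(BleuScoreLoss.apply, (tensor1, tensor2))
print(test)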