Implement bit-flips during training

Hi,
I have read several papers about injecting bit flips during training to make a neural network more robust against random bit flips.

I found two tools:

and

However, they mainly focus on images.

I just have a small NN (one ReLU layer) with numerical inputs and outputs.
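
For reference, the network has roughly this shape (the sizes used here, 4 inputs, 8 hidden, 1 output, are just placeholders for the real dimensions):

import torch.nn as nn

# Roughly the shape of my network; the layer sizes are placeholders.
net = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 1),
)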

I wrote a small routine that takes one bias/weight from the input, ReLU, or output layer, converts it to its binary representation, flips one bit, and converts it back.
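
In simplified form, the helper does something like this (the parameter/bit selection shown here is only a placeholder for what I actually do):

import random
import numpy as np
import torch

def bitflip(net):
    # Pick one parameter tensor and one element in it at random.
    param = random.choice(list(net.parameters()))
    idx = tuple(random.randrange(s) for s in param.shape)
    with torch.no_grad():
        # Reinterpret the float32 value as 32 raw bits, flip one random bit,
        # then reinterpret back as float32 and write it into the parameter.
        bits = np.array(param[idx].item(), dtype=np.float32).view(np.uint32)
        bits = bits ^ np.uint32(1 << random.randrange(32))
        param[idx] = float(bits.view(np.float32))
    return net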

I was thinking of applying this after the feed-forward stage, but now I get a runtime error every time:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [8, 4]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

This is my code:

for epoch in range(num_epochs):
    running_loss = 0
    for i, (inputs, labels) in enumerate(train_loader):
        inputs = torch.from_numpy(inputs).to(torch.float32)
        labels = torch.from_numpy(labels).to(torch.float32)

        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        
        # bit flip is injected here, between the forward pass and backward()
        net = bitflip(net)

        loss.backward()
        
        optimizer.step()

        running_loss += loss.item()

How can I implement the bit flip during training without getting this error?

Can anyone help me?