Weight manipulation

Hi everyone!
I’m writing a program to add a watermark to a very simple FFNN model. To do this I have to manually change almost every weight in the model. The process is: take the value, convert it to binary, flip the LSB, and convert it back to float.
I managed to implement all of this, but I’m running into a problem that I can’t solve and, above all, can’t understand.
I take the value, modify it, check that the modification succeeded, and write it back into the specific weight. The problem is that when I verify the stored value, it has reverted to the initial weight rather than the modified one. Here is some log output to make this clearer:

param item -1.2060359716415405
rounded param -1.2060359716415405
changed param -1.2060359716415407
-----
bin param 1011111111110011010010111110110001100000000000000000000000000000
bin changed 1011111111110011010010111110110001100000000000000000000000000001
-----
param final -1.2060359716415405
-----

and here is the code that produces it:

print("param " + str(param[2,9]))
print("param item " + str(param[2,9].item()))
work_value = round(param[2,9].item(),16)
print("rounded param " + str(work_value))
lsb_changed = operation(work_value, 1)
print("changed param " + str(lsb_changed))

print("-----")
print("bin param " + str(float_to_bin(work_value)))
print("bin changed " + str(float_to_bin(lsb_changed)))

print("-----")

param[2,9] = lsb_changed

print("param final " + str(param[2,9].item()))

print("-----")
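For reference, here is a rough sketch of how the float ↔ binary ↔ LSB-flip steps could be implemented. `float_to_bin` and `operation` aren’t shown in the post, so these are my own guesses at what they do, treating the value as a 64-bit double:

```python
import struct

def float_to_bin(value):
    # Reinterpret the 64-bit double as an unsigned integer and format its bits.
    (bits,) = struct.unpack(">Q", struct.pack(">d", value))
    return format(bits, "064b")

def bin_to_float(b):
    # Inverse of float_to_bin: a 64-bit bit string back to a double.
    return struct.unpack(">d", struct.pack(">Q", int(b, 2)))[0]

def operation(value, n):
    # XOR with n flips the bits set in n; n=1 flips the LSB of the 64-bit value.
    (bits,) = struct.unpack(">Q", struct.pack(">d", value))
    return struct.unpack(">d", struct.pack(">Q", bits ^ n))[0]
```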

After working on it a bit I think I have, more or less, understood that it is a precision problem in the representation of the value: when I insert the new value, PyTorch changes it in some way that I don’t understand. But this is just a guess.

Could this be a shallow copying issue? Could you try cloning the tensor before you modify it?


After I’ve cloned it, which one should I modify, the original or the clone?

You should modify the clone of the Tensor, as assignment in Python gives you a reference to the same Tensor rather than a copy.
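A minimal sketch of what that looks like (using a zero tensor as a stand-in for one of your weight tensors):

```python
import torch

param = torch.zeros(3, 10)     # stand-in for a weight tensor from the model
work = param.detach().clone()  # clone: an independent copy of the data
work[2, 9] = 0.5               # modifying the clone...
print(param[2, 9].item())      # ...leaves the original untouched: 0.0
print(work[2, 9].item())       # 0.5
```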


After many hours I solved the problem: it was not the tensor, but the value being inserted. I was inserting a decimal number with many digits after the decimal point, up to 20. I discovered (unfortunately after a long time) that PyTorch (or rather, the tensor) cannot hold more than about 6 decimal digits of precision, so if I insert a number whose modification lies beyond those 6 decimal places, the change is not stored and is therefore ignored.
I don’t know whether the 6 is due to the tensor value being float32 or something else, or whether this precision can be increased.
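For what it’s worth, this matches float32 behaviour: a float32 carries only about 7 significant decimal digits, so flipping the LSB of the 64-bit representation changes the value far below float32 precision, and the change is rounded away when the value is stored back into a float32 tensor. A quick check with the values from the log above:

```python
import torch

a = -1.2060359716415405  # original value from the log
b = -1.2060359716415407  # value after flipping the 64-bit LSB

t32 = torch.tensor([a], dtype=torch.float32)
t32[0] = b
# a and b round to the same float32, so the stored value "reverts"
print(t32[0].item() == torch.tensor([a], dtype=torch.float32)[0].item())  # True

t64 = torch.tensor([a], dtype=torch.float64)
t64[0] = b
# in a float64 tensor the modified value survives
print(t64[0].item() == b)  # True
```

So one option would be to convert the model to float64 (`model.double()`) before watermarking, or to flip the LSB of the 32-bit representation instead of the 64-bit one.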

Perhaps when you convert it to a string it is bound by the print options; if that’s the case you could have a look at `torch.set_printoptions` and set the precision to a higher number?
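To illustrate (note that this only affects how tensors are printed; values returned by `.item()` are plain Python floats and are not affected by print options):

```python
import torch

torch.set_printoptions(precision=16)  # show more digits when printing tensors
t = torch.tensor([-1.2060359716415405], dtype=torch.float64)
print(t)            # tensor printed with 16 digits after the decimal point
print(t[0].item())  # .item() returns a Python float, unaffected by print options
```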