delta_gt and p.anchor_tensor are both torch.cuda.FloatTensor, and shift holds scalars.
delta_gt[0, i] has shape [1, 256], p.anchor_tensor[i, 0] has shape [256, 1], and shift[0] is a scalar.
So why does this line fail? Can't a float tensor be divided by another float tensor?
Or can a float tensor only be divided by a double tensor?
No, I think this is a known quirk — I've seen similar questions on Stack Exchange. A fix would be to call .double() (or .float()) on the divisor.
Try:
delta_gt[0,i] = (shift[0] - p.anchor_tensor[i,0])/p.anchor_tensor[i,2].float()
Thanks, but I tried your method as well and it still fails with the same error.
I guess this is a hard bug in the framework? For now I'm converting to numpy to work around the issue.
Sorry for not making this clear earlier.
To clarify: shift is a Python list storing plain Python floats, so shift[0] is a scalar.
And yes, shift[0] lives on the CPU. But a GPU tensor should be able to operate with a plain Python number, right?
What I mean is that a Python float is handled by the CPU, and there should be no need to convert it to a GPU tensor before combining it with the GPU float tensor p.anchor_tensor[i, 0], right?
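For what it's worth, a Python scalar does combine directly with a tensor without any conversion. A minimal sanity check (sketched with NumPy as a stand-in, since its broadcasting rules match PyTorch's; the array names here are just illustrative) suggests the real culprit may be the shape mismatch between the [256, 1] right-hand side and the [1, 256] destination, rather than the dtypes:

```python
import numpy as np

# Stand-ins for the tensors in question (NumPy broadcasting matches PyTorch's).
anchor_col = np.ones((256, 1), dtype=np.float32)   # like p.anchor_tensor[i, 0]
shift0 = 2.0                                       # a plain Python float

# A Python float combines with an array directly -- no conversion needed.
result = (shift0 - anchor_col) / 3.0
print(result.shape)  # (256, 1)

# But assigning a [256, 1] result into a row of shape [1, 256] is a
# shape mismatch, which would raise regardless of dtype.
delta_row = np.zeros((1, 256), dtype=np.float32)
try:
    delta_row[0, :] = result       # target shape (256,), source (256, 1)
except ValueError as e:
    print("broadcast error:", e)

# Flattening (or transposing) the right-hand side makes the shapes agree.
delta_row[0, :] = result.ravel()
```

So if the error message mentions sizes rather than types, reshaping or transposing the anchor slice before the assignment may be the actual fix.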