Hi, my model output contains the coordinates of rectangles within a canvas, and I am trying to turn this coordinate representation into a pixelwise representation before applying the loss on the pixelwise representation:
# get prediction
ypred=forward(x,w)
# rasterize pred + test
ytrain=rasterize(ytrain,300,600)
ypred=rasterize(ypred,300,600)
# update loss
loss = get_loss(ytrain, ypred)
# get gradient
loss.backward()
# update weights
with torch.no_grad():
    w -= lr * w.grad
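To make the setup concrete, here is a runnable sketch of the loop above with a stand-in linear model and MSE loss, leaving rasterize() out; forward(), get_loss(), lr and the tensor shapes are my own placeholders. Note the gradient reset at the end of each step, which the pseudocode omits:

```python
import torch

# Stand-in data and a linear "model": these are placeholder shapes,
# not the real canvas setup from the question.
torch.manual_seed(0)
x = torch.randn(8, 3)
ytrain = torch.randn(8, 1)
w = torch.randn(3, 1, requires_grad=True)
lr = 0.1

for _ in range(100):
    ypred = x @ w                              # forward(x, w)
    loss = ((ypred - ytrain) ** 2).mean()      # get_loss(ytrain, ypred)
    loss.backward()                            # get gradient
    with torch.no_grad():                      # update weights
        w -= lr * w.grad
        w.grad.zero_()                         # reset grad for the next iteration
```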
I’ve built these two rasterization toy functions:
# rasterize tensor : for loop
def rasterize_toy(tn, w, h):
    nsamples = tn.size()[0]
    vtn = torch.zeros(nsamples, h, w, 3, dtype=torch.float, requires_grad=True)
    for i in range(nsamples):  # for each sample
        top = (tn[i] * 5).long()
        vtn[i] = add_tn_bg(vtn[i], h, w)  # fill in the background
        vtn[i, top:, :, 0] = 255 / 255
        vtn[i, top:, :, 1] = 255 / 255
        vtn[i, top:, :, 2] = 255 / 255
    return vtn
# rasterize tensor : index_put
def rasterize_toy2(tn, w, h):
    nsamples = tn.size()[0]
    top = (tn[0] * 10).long()
    print("top", top)
    v = 100  # also tried 1.0 and 255
    vtn = torch.zeros(nsamples, h, w, dtype=torch.float, requires_grad=True)
    indices = [(torch.ones(w) * top).long(),
               torch.arange(0, w).long()]
    values = torch.ones(w) * v
    vtn[0] = vtn[0].index_put(indices, values)
    return vtn
but they both generate this error when calling loss.backward() after the rasterization step:
RuntimeError: leaf variable has been moved into the graph interior
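For reference, here is what I think is the minimal reproduction: vtn is created as a requires_grad leaf and then written to in place. Depending on the PyTorch version, this seems to fail either at the in-place write itself ("a leaf Variable that requires grad is being used in an in-place operation") or later at backward() with the "moved into the graph interior" error:

```python
import torch

# Create a leaf tensor that requires grad, then mutate it in place.
# One of the two lines inside the try block raises a RuntimeError,
# depending on the PyTorch version.
vtn = torch.zeros(3, 4, requires_grad=True)   # leaf variable
try:
    vtn[0] = 1.0                              # in-place write on a leaf
    vtn.sum().backward()
    raised = False
except RuntimeError:
    raised = True
```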
I’ve already checked the following sources :
source 1
link : GitHub - ksheng-/fast-differentiable-rasterizer: differentiable bezier curve rasterizer with PyTorch
problem : while this repo proposes ways of rasterizing data structures, it seems to me that it doesn’t allow using differentiable variables as indices into the final rasterized image.
source 2
link : Leaf Variable moved into graph interior
problem :
- masked_scatter, gather and grid_sample functions seem not to match what I am trying to do
- index_put seems to match my needs, but I based my second rasterization function on it and it generates the same error as the for-loop-based one
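To make the index_put dead end concrete, here is a tiny sketch with my own toy numbers: the out-of-place variant avoids the leaf error (it returns a new tensor instead of mutating a leaf), but the indices must be integer tensors, so the gradient back to the coordinate is cut anyway:

```python
import torch

canvas = torch.zeros(4, 6)                    # plain buffer, no requires_grad
top = torch.tensor(2.5, requires_grad=True)   # the coordinate I care about
rows = (torch.ones(6) * top).long()           # .long() detaches from the graph
cols = torch.arange(6)
values = torch.ones(6, requires_grad=True)

out = canvas.index_put((rows, cols), values)  # out-of-place: no leaf error
out.sum().backward()                          # gradient reaches values...
# ...but top.grad stays None: the integer indices cut the graph.
```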
Thanks in advance for your help
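Edit: for context, the direction I am currently experimenting with is to build the canvas entirely from out-of-place ops (so it is never a mutated leaf) and to replace the hard .long() cutoff with a sigmoid so the top coordinate stays differentiable. The function name, the sharpness parameter and the scaling by 5 are all my own toy choices:

```python
import torch

def rasterize_soft(tn, w, h, sharpness=10.0):
    # tn: (nsamples,) tensor of top coordinates
    rows = torch.arange(h, dtype=torch.float).view(1, h, 1)  # (1, h, 1)
    top = (tn * 5.0).view(-1, 1, 1)                          # (n, 1, 1)
    # soft step: ~1 where row >= top, ~0 above, differentiable in tn
    mask = torch.sigmoid(sharpness * (rows - top))           # (n, h, 1)
    return mask.expand(-1, h, w)                             # (n, h, w)

tn = torch.tensor([2.0, 4.0], requires_grad=True)
img = rasterize_soft(tn, w=6, h=30)
img.sum().backward()   # gradient now reaches tn through the soft edge
```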