I'm getting the error "Expected a proper Tensor but got None (or an undefined Tensor in C++) for argument #0" while optimizing in my libtorch code

I’m pretty new to libtorch. I’m calling the function below inside a for loop, once for every new image.

torch::Tensor Tracker::optimize_cam_in_batch(torch::Tensor& cam_tensor, torch::Tensor gt_color, torch::Tensor gt_depth, int batch_size, NICE decoders)
{

	// the camera pose tensor is the only parameter handed to the Adam optimizer
	std::vector<torch::Tensor> cam_para_list{cam_tensor};
	cam_tensor = cam_tensor.requires_grad_(true);
	torch::optim::Adam optimizer(cam_para_list, torch::optim::AdamOptions(1e-2));
	optimizer.zero_grad();
	torch::Tensor c2w = get_camera_from_tensor(cam_tensor);
	torch::Tensor batch_rays_o, batch_rays_d, batch_gt_depth, batch_gt_color;

	// sample a batch of rays from the image interior (skipping the border pixels)
	get_samples(ignore_edge_h, H-ignore_edge_h, ignore_edge_w, W-ignore_edge_w, batch_size, H, W, fx, fy, cx, cy, c2w, gt_depth, gt_color, batch_rays_o, batch_rays_d, batch_gt_depth, batch_gt_color);

	torch::NoGradGuard noGrad;
	torch::Tensor det_rays_o = batch_rays_o.clone().detach().unsqueeze(-1);
	torch::Tensor det_rays_d = batch_rays_d.clone().detach().unsqueeze(-1);

	// intersect each ray with the scene bound and keep only the rays whose GT depth lies inside it
	torch::Tensor t_ = (bound.unsqueeze(0)-det_rays_o)/det_rays_d;
	torch::Tensor t = std::get<0>(torch::min(std::get<0>(torch::max(t_, 2)),1));
	torch::Tensor inside_mask = t >= batch_gt_depth;
	batch_rays_d = batch_rays_d.index({inside_mask});
	batch_rays_o = batch_rays_o.index({inside_mask});
	batch_gt_depth = batch_gt_depth.index({inside_mask});
	batch_gt_color = batch_gt_color.index({inside_mask});
	
	// move the ray batch to the GPU and render color, depth and uncertainty along the rays
	torch::Tensor color, depth, uncertainity, weights, mask;
	batch_rays_d = batch_rays_d.to(torch::Device(torch::kCUDA, 0));
	batch_rays_o = batch_rays_o.to(torch::Device(torch::kCUDA, 0));
	batch_gt_depth = batch_gt_depth.to(torch::Device(torch::kCUDA, 0));
	batch_gt_color = batch_gt_color.to(torch::Device(torch::kCUDA, 0));
	renderer.render_batch_ray(c, decoders, batch_rays_d, batch_rays_o, "color", batch_gt_depth, color, depth, uncertainity, weights);

	if (handle_dynamic)
	{
		// keep only pixels with a moderate depth residual and a valid GT depth
		torch::Tensor tmp = torch::abs(batch_gt_depth-depth)/*/torch::sqrt(uncertainity+1e-10)*/;
		mask = (tmp < 10*tmp.median()) & (batch_gt_depth > 0);
	}
	else
		mask = batch_gt_depth > 0;

	// uncertainty-weighted depth loss over the masked pixels
	torch::Tensor loss = ((torch::abs(batch_gt_depth-depth))/torch::sqrt(uncertainity+1e-10)).index({mask}).sum();
	if (use_color_in_tracking)
	{
		torch::Tensor color_loss = torch::abs(batch_gt_color - color);
		color_loss = color_loss.index({mask}).sum();
		loss = loss + w_color_loss*color_loss;
	}
	loss = loss.requires_grad_(true);
	loss.backward();
	std::cout<<"loss:"<<loss<<std::endl;
	optimizer.step();
	optimizer.zero_grad();
	return loss;
}

After a certain number of iterations, I get the following error from optimizer.step():

loss:3163.71
[ CUDAFloatType{} ]
loss:3.08175e+07
[ CUDAFloatType{} ]
loss:3284.33
[ CUDAFloatType{} ]
loss:2990.17
[ CUDAFloatType{} ]
terminate called after throwing an instance of 'c10::Error'
  what():  Expected a proper Tensor but got None (or an undefined Tensor in C++) for argument #0 'self'
Exception raised from checked_cast_variable at ../torch/csrc/autograd/VariableTypeManual.cpp:46 (most recent call first):

Is this related to the computation of the gradients? Any help is much appreciated. Thanks!

Could you check the stack trace with gdb to see which line of code raises the error and which tensor is undefined? I don’t see any obvious issues in your code so far.
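In the meantime, a cheap way to narrow it down without gdb is to check that everything the loss and the optimizer depend on is actually defined. This is only a debugging sketch that reuses the variable names from your function: Tensor::defined() returns false for a default-constructed ("None") tensor, and grad() returns an undefined tensor if backward() never produced a gradient for that leaf.

	// hypothetical debugging checks, placed after render_batch_ray and just
	// before optimizer.step()
	TORCH_CHECK(depth.defined() && color.defined() && uncertainity.defined(),
	            "render_batch_ray returned an undefined tensor");
	TORCH_CHECK(cam_tensor.defined(), "cam_tensor is undefined");
	if (!cam_tensor.grad().defined())
		std::cout << "warning: cam_tensor has no gradient before optimizer.step()" << std::endl;
	optimizer.step();

If one of these fires, you know which tensor the error message is complaining about before the exception comes out of optimizer.step().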

Getting the stack trace via gdb pointed me to a tensor that wasn’t properly defined. It’s fixed now, thanks!
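For anyone who lands on this thread with the same error: one thing worth double-checking in code like the above is the scope of torch::NoGradGuard, because the guard stays active until the end of the enclosing scope, so everything declared after it (including the render and the loss) runs with autograd disabled. Below is a rough sketch of keeping the guard in its own block; it is not necessarily the fix that was applied here.

	// sketch: limit the no-grad region to the scene-bound test so that the
	// render, the loss and backward() still build a graph for cam_tensor
	torch::Tensor inside_mask;
	{
		torch::NoGradGuard no_grad;
		torch::Tensor det_rays_o = batch_rays_o.clone().detach().unsqueeze(-1);
		torch::Tensor det_rays_d = batch_rays_d.clone().detach().unsqueeze(-1);
		torch::Tensor t_ = (bound.unsqueeze(0) - det_rays_o) / det_rays_d;
		torch::Tensor t = std::get<0>(torch::min(std::get<0>(torch::max(t_, 2)), 1));
		inside_mask = t >= batch_gt_depth;
	}
	// ...ray filtering, rendering, the loss and loss.backward() then run outside
	// the guard, and the extra loss.requires_grad_(true) should not be needed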