Backward keep_graph with the C++ API

I don’t understand how to use PyTorch’s C++ API to retain the graph.

How can one achieve loss.backward(retain_graph=True) with it?

inline void Tensor::backward(
    c10::optional<Tensor> gradient,
    bool keep_graph,
    bool create_graph) {
  type().backward(*this, std::move(gradient), keep_graph, create_graph);
}

This is how the backward pass is implemented, but how can one access the keep_graph variable?

Here is an example of code that will crash. It will raise the error

the derivative for 'target' is not implemented

My main.cpp

#include <torch/torch.h>
#include <iostream>

int main() {

	// Layer.
	torch::nn::Linear linear1(5, 1);
	torch::nn::Linear linear2(5, 1);

	// Optimizer.
	torch::optim::Adam opt(linear1->parameters(), torch::optim::AdamOptions(0.001));

	// Input.
	torch::Tensor in = torch::randn({1, 5});

	// Output. Use one of these two tensors; the first one won't work.
	torch::Tensor desired_out = linear2->forward(in); // raises the error "the derivative for 'target' is not implemented"
	//torch::Tensor desired_out = torch::ones({1, 1}); // works perfectly fine

	// Training loop.
	std::cout << "initial: " << linear1->forward(in) << std::endl;

	for (int i = 0; i < 1000; i++) {

		opt.zero_grad();
		torch::Tensor out = linear1->forward(in);
		torch::Tensor loss = torch::mse_loss(out, desired_out);
		loss.backward();
		opt.step();	
	}

	std::cout << "desired: " << desired_out << std::endl;
	std::cout << "trained: " << linear1->forward(in) << std::endl;

	return 0;
}

My CMakeLists.txt

cmake_minimum_required(VERSION 3.11 FATAL_ERROR)

project(main)

find_package(Torch REQUIRED)

include_directories(${Torch_INCLUDE_DIRS})

add_executable(main main.cpp)
target_link_libraries(main ${TORCH_LIBRARIES})

You don’t access it, you pass it. But you need to pass torch::nullopt for the gradient:

loss.backward(torch::nullopt, /*keep_graph=*/true, /*create_graph=*/false);

In your example, a better solution would be to detach_() the desired_out, though.
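
A minimal sketch of that, using the desired_out tensor from the example above (detach_() works in place; the out-of-place detach() would do the job here as well):

// Drop the target from the autograd graph so mse_loss does not
// need a derivative with respect to it.
desired_out.detach_();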

Best regards

Thomas

Hi Thomas, and thanks for the reply. That’s indeed what I had already tried, and it does not work on my system. I have now tried the latest binaries, so to me it rather looks like a bug.

The detach_() approach works! :slight_smile: So for this simple example it is

torch::Tensor desired_out = linear2->forward(in).detach();

Yeah, the C++ API is still in flux (I think there are nightly builds of it these days, though).
Glad this works for you!

Best regards

Thomas

Hey Thomas, the proposed solution no longer seems to work with the PyTorch C++ library. Could you kindly update the solution?

Thanks :slight_smile:

What’s the error you’re getting?

Hi @tom!

I’m a bit of a late reader on this, but I find myself in a situation in which I need to try retain_graph=true.

Trying out your suggestion, I get an error like the following, which I don’t really understand:

error: no viable conversion from 'const c10::nullopt_t' to 'const at::Tensor'
loss.backward(torch::nullopt, true, false);

Do you happen to know why this happens?

I think you might try {} as the Tensor to take the default.
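
For concreteness, a minimal sketch of that call (depending on the libtorch version, the retain_graph argument may also need to be wrapped in a c10::optional&lt;bool&gt;, as the reply below shows):

loss.backward({}, /*retain_graph=*/true, /*create_graph=*/false);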

Best regards

Thomas

It’s because the type of retain_graph is not the C++ primitive bool but c10::optional&lt;bool&gt;.
The following works for me:

loss.backward({}, /*retain_graph=*/c10::optional&lt;bool&gt;(true), /*create_graph=*/true);