Hello!

I’ve encountered a very strange behaviour of autograd.

I have two functions, say f(X) and g(X), and I want to compute the gradient of the composition f(g(X)) with respect to X, which is a Variable.

The following chunks of code

```
res = torch.norm(f(X))
res.backward()
```

and

```
res = torch.norm(g(X))
res.backward()
```

work well.

But when I do

```
res = torch.norm(f(g(X)))
res.backward()
```

my Jupyter kernel dies.

Do you know what is wrong?

I’ll try to write a minimal working example of f and g and add it later.
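In the meantime, here is a minimal sketch of the pattern I mean. The f and g below are just placeholder stand-ins (my real functions are more involved); with these dummies, the composed backward pass runs fine on my machine:

```
import torch

# Placeholder stand-ins for my real f and g
def f(X):
    return X @ X.t()          # some matrix-valued function

def g(X):
    return torch.sin(X) * 2   # some elementwise function

X = torch.randn(4, 4, requires_grad=True)

# The composition that crashes my kernel with the real f and g
res = torch.norm(f(g(X)))
res.backward()

print(X.grad)                 # gradient of the composition w.r.t. X
```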