Hello,

In the `forward` function of a neural network, I would like to insert a polar coordinate conversion in the propagation between two layers (the output of the first layer is intended to be a set of 2D Cartesian coordinates).

How can I achieve that? Can I put it directly in the `forward` function, or do I need to create a new layer for it? How does backpropagation deal with that?

Sorry for the dumb question, but I am quite new to PyTorch.

Thanks in advance

Hi,

If you use a function that only uses PyTorch tensors and PyTorch functions, there is no need for a new layer: autograd will track the operations and handle backpropagation automatically. You can use the following in your forward pass (adjust for batch size if you need to):

```
import torch

def to_polar(x, y):
    # torch.atan2(y, x) would be the quadrant-safe alternative here
    return (x**2 + y**2).sqrt(), torch.atan(y / x)
```
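
For instance, a minimal sketch of how `to_polar` could sit inside a forward pass (the module and the layer sizes here are made up for illustration):

```
import torch
import torch.nn as nn

class Net(nn.Module):  # hypothetical module, sizes chosen arbitrarily
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(8, 2)  # first layer emits (x, y)
        self.fc2 = nn.Linear(2, 1)

    def forward(self, inp):
        x, y = self.fc1(inp).unbind(-1)  # split the Cartesian coordinates
        r, theta = to_polar(x, y)        # conversion between the two layers
        return self.fc2(torch.stack([r, theta], -1))
```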

Thank you for the answer!

What if I have a tensor of size `[21, 2]` in which the first column is `x` and the second is `y`? What is an efficient way to make that conversion?

Then you need to replace the addition with a `.sum(-1)` and the division with `t.select(-1, -1) / t.select(-1, 0)`.

If you want a single Tensor for the output, you can use `torch.stack([r, theta], -1)` to get an output of size `[21, 2]`.
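
Putting those pieces together, a sketch of what the full tensor version might look like (using index `1` instead of `-1` to select the `y` column; for a size-2 dimension they refer to the same column):

```
import torch

def to_polar(t):
    # t has size [21, 2]: column 0 is x, column 1 is y
    r = (t**2).sum(-1).sqrt()
    theta = torch.atan(t.select(-1, 1) / t.select(-1, 0))
    return torch.stack([r, theta], -1)  # back to size [21, 2]

coords = torch.randn(21, 2)
polar = to_polar(coords)  # size [21, 2]
```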

Perfect, I implemented it (your second strategy) and it worked fine as long as I had only one batch.

The problem is that I actually have a `[BATCH_SIZE, 42]` (intermediate) input. When the batch size was `1`, I was reshaping it to `[21, 2]` without any problem, simply using something like `h.view(21, 2)`.

How do I deal with that when `BATCH_SIZE` is different from 1? How should I reshape it to fit the function `to_polar`?

Sorry for the dumb question, but I am not very familiar with these sizing issues.

Hi,

You can make it into a 3D Tensor of size `[batch, 21, 2]` and use the same function as above. Since we index with `-1` (the last dimension), it does not really matter how many dimensions come before it.

Also, if the input is always of size 42, you can do `.view(-1, 21, 2)`.
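
For example, assuming the `to_polar` sketch from above (a batch size of 4 is picked arbitrarily):

```
import torch

def to_polar(t):  # same sketch as above; works for any leading dimensions
    r = (t**2).sum(-1).sqrt()
    theta = torch.atan(t.select(-1, 1) / t.select(-1, 0))
    return torch.stack([r, theta], -1)

h = torch.randn(4, 42)               # hypothetical [BATCH_SIZE, 42] input
polar = to_polar(h.view(-1, 21, 2))  # size [4, 21, 2]
```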