Fast tensor appending operation

Hi there!
I am sort of a newbie with tensor manipulation. I have looked around for this, but I can’t seem to get anywhere. I am looking for something using torch tensors that would let me do the following:

results = []
for xs in x:                    # iterate over the batch dimension
    r = []
    for xss in xs:              # iterate over the feature-channel dimension
        r.append(sift(xss))     # apply the SIFT layer to each (h, w) patch
    results.append(r)

Where x is a tensor; I am just going through its batch_size and feature_channels dimensions in these loops and applying a SIFT layer to each image patch group.
I would also like this to run as fast as possible.
Thanks in advance!

You can just do it as it is written; however, since xss comes from xs, which comes from x, you can process the whole xs in one call.
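For example, if the SIFT layer happened to accept a stack of 2-D patches at once (that depends on the custom layer, so treat this as a sketch):

# Hypothetical: if sift can take a (c, h, w) stack of patches in one call,
# the inner loop disappears entirely.
results = [sift(xs) for xs in x]   # each xs has shape (c, h, w)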

Sorry, how would I do it as it is written? This is implemented with Python lists, which are really slow. I want to make this as fast as possible to reduce training time.
It should probably also be noted that this code is from the forward method of a custom layer, while running the model on a GPU.

Yep, but I mean, you are running Python, so you are using Python lists.
The way to avoid lists is to avoid loops and write it as PyTorch operations.

If you need to speed anything up and avoid Python lists, the way to go is a C++ implementation:
https://pytorch.org/docs/master/cpp_index.html

But in general, it is “strange” to use lists in a forward, because for almost all the official layers you can usually process everything at once with a bit of rewriting.

The only cases in which you really need for loops are very customized operators or dynamically generated networks (when you create N layers based on a parameter and then cannot manually call them all).

Another tool you can use is TorchScript: https://pytorch.org/docs/master/jit.html
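As a rough illustration only (torch.relu stands in for the real SIFT layer here, which would itself need to be scriptable for this to help):

import torch

@torch.jit.script
def sift_all(x: torch.Tensor) -> torch.Tensor:
    # Same nested loop as in the question, but compiled by TorchScript,
    # so the per-iteration Python interpreter overhead goes away.
    results = []
    for b in range(x.size(0)):              # batch dimension
        r = []
        for c in range(x.size(1)):          # feature-channel dimension
            r.append(torch.relu(x[b, c]))   # stand-in for sift(x[b, c])
        results.append(torch.stack(r))
    return torch.stack(results)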

But if you want the versatility and readability of lists, then you pay the price.

Basically, what I need to do is: I have a tensor of shape (b, c, h, w).
I have to process each (h, w) slice in the tensor separately with a custom layer that only takes two-dimensional input, so I just iterated over the b and c dimensions and appended each result.
Isn’t there any other way to do this with a PyTorch function?

You can try to extend the custom layer to process batches (all the layers in PyTorch can process batches, and so can most operations, like matrix multiplication)
and then process everything as a (b*c, h, w) tensor without involving loops.
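A minimal sketch of that idea, assuming a hypothetical sift_batched that accepts a leading batch dimension:

import torch

def forward_flat(x, sift_batched):
    # x: (b, c, h, w). Merge the batch and channel dims into one axis,
    # run the batch-capable layer once, then restore the original layout.
    b, c, h, w = x.shape
    out = sift_batched(x.reshape(b * c, h, w))   # one call instead of b*c calls
    return out.reshape(b, c, *out.shape[1:])     # back to (b, c, ...)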

That is what I am trying to do, but by just wrapping it in another custom layer that already feeds it what it can process.
I guess I will really have to change the inner structure of the custom layer, then.

As general advice, I would recommend making all your custom layers able to process batches, because that is the standard way of working in PyTorch.
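A common pattern for that (a toy example, not the actual SIFT layer) is to write the operation in terms of the last two dimensions, so any number of leading batch-like dims is accepted unchanged:

import torch
import torch.nn as nn

class BatchFriendlyLayer(nn.Module):
    # Toy batch-friendly layer: the op only refers to the last two dims,
    # so (h, w), (c, h, w) and (b, c, h, w) inputs all work.
    def forward(self, x):
        return x - x.mean(dim=(-2, -1), keepdim=True)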

Thanks for the advice. Duly noted, but unfortunately the custom layer isn’t mine, so I will probably just have to deal with it.
Thank you so much for the help anyway!