Does PyTorch support functional-programming-style modules?

Like in Keras. If I have two inputs a and b, I want to design a model. Is it possible for the model to first take one input (e.g. input a), so that model(a) becomes a new model/function that only takes the other input (input b)?
Or is there an equivalent way to design a network like this?
Thanks in advance.
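For context, this kind of partial application can be sketched in plain Python with functools.partial; the model, layer sizes, and names below are hypothetical, just to illustrate the idea:

```python
import torch
from torch import nn
from functools import partial

# Hypothetical two-input model: f(a, b)
class TwoInputModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(500 + 3, 1)

    def forward(self, a, b):
        # a: (500,), b: (3,)
        return self.fc(torch.cat([a, b], dim=-1))

f = TwoInputModel()
a = torch.rand(500)
g = partial(f, a)        # g(b) is now f(a, b), a function of b only
out = g(torch.rand(3))
print(out.shape)         # torch.Size([1])
```

Since nn.Module instances are callable, partial works on them like on any Python function.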

Well, considering you can code whatever you want inside the forward function, it should be possible to do "whatever" you can code. Could you give an example?

Of course. Thanks for your attention.

For example, there are two inputs: the first is a latent vector, assume its size is (1, 500). Then we get a second input, which is a batch of 3D vertex coordinates, assume its size is (1000, 3); that is, there are 1000 (x, y, z) tuples.

I want to design a model whose input size is 500 + 3. That is, it first takes the first input, which turns it into a new function whose input size is 3. Then this function takes the second input and returns a result. It is like partial application in functional programming.

One possible but brute-force way is to just duplicate the first input 1000 times and concatenate. However, I think there should be a better way.
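As an aside, the "duplicate 1000 times" version need not copy any memory up front: expand() returns a broadcast view, so only the final cat() materializes data. A minimal sketch with the sizes from the post:

```python
import torch

latent = torch.rand(1, 500)   # (1, 500) latent vector
verts = torch.rand(1000, 3)   # 1000 (x, y, z) tuples

# expand() gives a view repeating the latent row 1000 times (no copy);
# cat() then builds the (1000, 503) combined input in one allocation
tiled = latent.expand(1000, 500)
combined = torch.cat([tiled, verts], dim=1)
print(combined.shape)         # torch.Size([1000, 503])
```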

Something like this?

from torch import nn

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.layer = create_layer()  # placeholder for your layers
        self.flag = True             # True -> first call, False -> later calls

    def func_input1(self, x):
        return None  # handle the first input here

    def func_input2(self, x1, x2, x3):
        return None  # handle the later inputs here

    def forward(self, *inputs):
        # dispatch depending on how the model has been called so far
        if self.flag:
            x = self.func_input1(*inputs)
            self.flag = False
        else:
            x = self.func_input2(*inputs)
        return x

You can really code it however you want to. You can also pass multiple inputs and call methods inside forward.

Thanks for your reply. However, I am talking about a different problem.
I want a model that can be seen as a function f(x, y). I give it a first input a, and I want to regard f(a, y) as g(y), a function that only takes y as its argument.
And in the training procedure, there may be only 1 x-input but 1000 y-inputs for each data item.

Well, the function from my previous answer can work that way. Assuming that x is a tensor, it may contain learnable or non-learnable parameters.

You can simply store x in a model attribute, self.x = a (equivalent to defining g(y)), and then use the forward function as g(y) itself.

You can actively modify self.x whenever you want.

from torch import nn
import torch

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        # "defining" f(a, y): a is stored as a plain attribute
        self.a = torch.tensor(10)

    # g(y)
    def forward(self, y):
        return y + self.a

# Instantiate model
model = Model()
for _ in range(10):
    print('Defining x dynamically')
    # re-bind a directly; wrapping it in torch.tensor() again would
    # trigger a copy-construct warning
    model.a = torch.rand(1)
    for _ in range(2):
        y = torch.rand(1)
        print('Running model for x = %.3f, y = %.3f' % (model.a.item(), y.item()))
        run1 = model(y)
        print(run1)

Defining x dynamically
Running model for x = 0.456, y = 0.978
tensor([1.4344])
Running model for x = 0.456, y = 0.740
tensor([1.1966])
Defining x dynamically
Running model for x = 0.461, y = 0.845
tensor([1.3054])
Running model for x = 0.461, y = 0.893
tensor([1.3531])
Defining x dynamically
Running model for x = 0.328, y = 0.956
tensor([1.2839])
Running model for x = 0.328, y = 0.405
tensor([0.7331])
Defining x dynamically
Running model for x = 0.985, y = 0.218
tensor([1.2030])
Running model for x = 0.985, y = 0.172
tensor([1.1572])
Defining x dynamically
Running model for x = 0.479, y = 0.954
tensor([1.4331])
Running model for x = 0.479, y = 0.351
tensor([0.8302])
Defining x dynamically
Running model for x = 0.085, y = 0.192
tensor([0.2775])
Running model for x = 0.085, y = 0.371
tensor([0.4562])
Defining x dynamically
Running model for x = 0.844, y = 0.585
tensor([1.4286])
Running model for x = 0.844, y = 0.381
tensor([1.2249])
Defining x dynamically
Running model for x = 0.846, y = 0.239
tensor([1.0844])
Running model for x = 0.846, y = 0.473
tensor([1.3185])
Defining x dynamically
Running model for x = 0.506, y = 0.517
tensor([1.0226])
Running model for x = 0.506, y = 0.092
tensor([0.5975])
Defining x dynamically
Running model for x = 0.539, y = 0.125
tensor([0.6635])
Running model for x = 0.539, y = 0.834
tensor([1.3732])

Thank you!
In my specific case, I need to concatenate a latent vector (size = 500) with many (1000, for now) XYZ coordinates (x, y, z). That means I want a 503-dimensional vector per vertex. I just don't know how to do that.
And each pair of inputs (a latent vector, 1000 vertices) also comes in batches.

Well, that depends a lot on what kind of data you have in that latent space. One option could be to use a linear transformation to map 500 -> 1000 and then expand that to 1000x3 (x, y, z).

Anyway, have a look at the FiLM paper (Feature-wise Linear Modulation).
They propose a good general conditioning method as an alternative to concatenation.
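A minimal FiLM-style sketch for this setting (all names and the feature width are illustrative, not from the paper's code): instead of concatenating, the latent code predicts a per-feature scale (gamma) and shift (beta) that modulate the vertex features.

```python
import torch
from torch import nn

class FiLMBlock(nn.Module):
    def __init__(self, latent_dim=500, feat_dim=64):
        super().__init__()
        # latent code -> per-feature scale and shift
        self.to_gamma_beta = nn.Linear(latent_dim, 2 * feat_dim)
        # lift raw (x, y, z) coordinates into feature space
        self.vert_enc = nn.Linear(3, feat_dim)

    def forward(self, latent, verts):
        # latent: (B, latent_dim), verts: (B, N, 3)
        gamma, beta = self.to_gamma_beta(latent).chunk(2, dim=-1)
        h = self.vert_enc(verts)                 # (B, N, feat_dim)
        # broadcast the (B, feat_dim) modulation over the N vertices
        return gamma.unsqueeze(1) * h + beta.unsqueeze(1)

block = FiLMBlock()
out = block(torch.rand(2, 500), torch.rand(2, 1000, 3))
print(out.shape)   # torch.Size([2, 1000, 64])
```

Note how the latent vector conditions every vertex without ever building a (B, 1000, 503) tensor.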

Thank you! Really appreciate it.