 # How to program y = x ^ 2 using MLP?

Hello, I am new to PyTorch and I need help: how can I program a multilayer perceptron whose output is the function y = x ^ 2, starting from x = […, -2, -1, 0, 1, 2, …]?
I have tried, but I have only been able to fit linear functions, like y = a * x + b.

I’m not sure I understand the use case correctly, but your operation doesn’t seem to contain trainable parameters.

You could create the input tensor and just call `x**2` to get the output:

```python
import torch

x = torch.linspace(-10, 10, 21)
y = x**2
```

However, since no parameter is used, this won’t be trainable.

In the case of y = 3x, for example, it could be y = w1 * x, where w1 must be trained until it reaches the value 3; but in y = x ^ 2 I can’t find which parameter to train. I need help.

That’s exactly what I mean. If your target function is `x**2`, then there is nothing to train. Or would you like to make the exponent trainable?

The objective is to enter any value into the NN and have it return, as output, the square of that number.

So should the exponent be trained?
If so, you could define it as a parameter and try to optimize it.
However, you would have to take care of negative input values (a non-integer exponent would take a root of them, which yields NaN), and working in log space will probably be more stable:

```python
import torch
import torch.nn as nn

# Trainable exponent; the target exponent is 2.
e = nn.Parameter(torch.empty(1).uniform_(0, 1))

data = torch.linspace(1, 10, 9)
# Work in log space: log(x**2) == 2 * log(x)
target = torch.log(data**2)

optimizer = torch.optim.SGD([e], lr=1e-3)
criterion = nn.L1Loss()

for epoch in range(1000):
    optimizer.zero_grad()
    out = e * torch.log(data)
    loss = criterion(out, target)
    loss.backward()
    optimizer.step()
    print('epoch {}, loss {}, e {}'.format(epoch, loss.item(), e.item()))
```
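As a small follow-up (my own addition, not from the original post): since `e * torch.log(x)` lives in log space, predictions in the original domain can be recovered by exponentiating. Here I simply set `e` to the converged value 2.0 for illustration:

```python
import torch
import torch.nn as nn

# Assume e has already been trained as above; use the converged
# value 2.0 here just to illustrate the mapping back from log space.
e = nn.Parameter(torch.tensor([2.0]))

x = torch.linspace(1, 5, 5)
# exp(e * log(x)) == x**e, so this recovers x**2
pred = torch.exp(e * torch.log(x))
print(pred)
```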

I thought about training the exponent, or better said, optimizing it, but I didn’t get anywhere; besides that, I didn’t use an MLP. I will test the code you uploaded, thanks.

Hello Miguel!

As a learning exercise you might be asking how to train a neural
network to reproduce the function `x^2` without building any
knowledge of that specific function into it by hand.

One of the interesting and important features of neural networks
is that their linear layers plus non-linear activations can be used
to reproduce / approximate many interesting functions. See, for
example, the “Universal approximation theorem” wikipedia article.

I haven’t experimented with this in particular, but you might try
training a network like this (just making something up):

```python
model = torch.nn.Sequential(
    torch.nn.Linear(1, 50),
    torch.nn.Tanh(),
    torch.nn.Linear(50, 50),
    torch.nn.Tanh(),
    torch.nn.Linear(50, 1),
)
```

`torch.nn.MSELoss` would be the appropriate loss function.

(Your inputs would be your various values of `x`, e.g., your
[…- 2, -1,0,1,2 …]. Your targets would be the corresponding `x^2`.)
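Putting the pieces above together, a minimal training sketch might look like this (my own rough example; the input range, learning rate, and epoch count are just guesses, not something the original posters tested):

```python
import torch

torch.manual_seed(0)

# MLP as suggested above: two tanh hidden layers of width 50
model = torch.nn.Sequential(
    torch.nn.Linear(1, 50),
    torch.nn.Tanh(),
    torch.nn.Linear(50, 50),
    torch.nn.Tanh(),
    torch.nn.Linear(50, 1),
)

# Inputs need shape (N, 1) for nn.Linear; targets are x**2
x = torch.linspace(-2, 2, 41).unsqueeze(1)
y = x**2

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = torch.nn.MSELoss()

for epoch in range(2000):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

print(loss.item())  # should end up small on this range
```

Note that the network only approximates `x^2` on the range it was trained on; far outside [-2, 2] the tanh units saturate and the output flattens out.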

You could also try other activations, e.g., `torch.nn.ReLU`, and
wider / narrower or more / fewer hidden layers.

If you get such a network working for `x^2` it would be informative
to retrain it (from scratch) on something like `abs (x)^3`.

Good luck.

K. Frank
