How do I design this network? (the formula of the network is described below)

I have downloaded a program from GitHub: https://github.com/pytorch/examples/tree/master/regression .
The network has a single fully-connected layer to fit a 4th-degree polynomial. I then wanted to design a two-layer network with the same structure to fit a more complicated function, but I could not get it to work. Since I am a newcomer, could someone help me and give me some suggestions?

Firstly, let me say thank you to the author.

#!/usr/bin/env python
from __future__ import print_function
from itertools import count
import torch
import torch.autograd
import torch.nn.functional as F
from torch.autograd import Variable

# Randomly generate the target polynomial's coefficients and bias
POLY_DEGREE = 4
W_target = torch.randn(POLY_DEGREE, 1) * 5
b_target = torch.randn(1) * 5

# Builds features i.e. a matrix with columns [x, x^2, x^3, x^4]  
def make_features(x):
    x = x.unsqueeze(1)
    return torch.cat([x ** i for i in range(1, POLY_DEGREE+1)], 1)

# Generate the polynomial target values
def f(x):
    """Approximated function."""
    return x.mm(W_target) + b_target[0]

# Builds a batch i.e. (x, f(x)) pair.
def get_batch(batch_size=32):
    random = torch.randn(batch_size)
    x = make_features(random)
    y = f(x)
    return Variable(x), Variable(y)

# Define model: this is where the error is!!!
fc = torch.nn.Linear(W_target.size(0), 1)
x1 = make_features(fc.eval())
y1 = f(x1)
fc1 = torch.nn.Linear(W_target.size(0), 1)
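# NOTE: fc.eval() returns the module itself, not input data, so
# make_features(fc.eval()) cannot build features; and the loop below
# computes the loss on fc1(y1) instead of a forward pass over batch_x,
# then updates fc's parameters. This is the part that fails.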

for batch_idx in count(1):
    # Get a batch of (x, f(x)) data wrapped in Variables
    batch_x, batch_y = get_batch()

    # Reset gradients
    fc1.zero_grad()

    # Forward pass
    output = F.smooth_l1_loss(fc1(y1), batch_y)

    # Backward pass
    output.backward()

    # Apply gradients
    for param in fc.parameters():
        param.data.add_(-0.1 * param.grad.data)

    # Stop criterion
    loss = output.data[0]
    print(batch_idx, "is: ", loss)
    if loss < 1e-3:
        break


If you only want to design a two-layer network, it's very simple:

fc = nn.Sequential(
    nn.Linear(W_target.size(0), 3),
    nn.Linear(3, 1)
)
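
For example, a minimal training sketch (assuming you keep W_target and get_batch() from the original script; the use of torch.optim.SGD and the lr=0.1 here are just my illustration, mirroring the manual -0.1 update above):

import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumes W_target and get_batch() from the original script above
fc = nn.Sequential(
    nn.Linear(W_target.size(0), 3),
    nn.Linear(3, 1),
)
optimizer = torch.optim.SGD(fc.parameters(), lr=0.1)

for step in range(200):
    batch_x, batch_y = get_batch()
    optimizer.zero_grad()
    loss = F.smooth_l1_loss(fc(batch_x), batch_y)
    loss.backward()
    optimizer.step()

Note that two stacked Linear layers with no nonlinearity between them still compute a single affine map, so inserting e.g. nn.ReLU() between them is what actually adds capacity.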

Thank you very much! But I did not express my idea completely, so I drew a graph, like this:

[diagram: x → (W, b) → y_pred]
I input x and get y_pred; the parameters are W and b.

The graph may not express the idea fully, but I hope you can understand it.
Should I write a special class, like Linear, for this?

You can define an nn.Module subclass and then transform the data in forward, for example:

class net(nn.Module):
    def __init__(self):
        super(net, self).__init__()
        self.fc1 = nn.Linear(W_target.size(0), 1)
        self.fc2 = nn.Linear(W_target.size(0), 1)

    def forward(self, x):
        x1 = self.fc1(x)
        y = make_features(x1)
        y_pred = self.fc2(y)
        return y_pred
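
One caveat, from my reading of the code above: make_features unsqueezes its input, so passing fc1's (batch, 1) output through it yields a 3-D tensor that fc2 cannot consume. A sketch of a forward that squeezes first (my adjustment, not from the original example):

    def forward(self, x):
        x1 = self.fc1(x)                  # shape: (batch, 1)
        # make_features expects a 1-D tensor, so squeeze before expanding
        y = make_features(x1.squeeze(1))  # shape: (batch, POLY_DEGREE)
        y_pred = self.fc2(y)              # shape: (batch, 1)
        return y_pred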

Hi matilu, not sure if it's useful, but I made a video series on forward prop/backprop, the chain rule, and the corresponding PyTorch code. There are a few parts, so you can just choose the part(s) that work for you. I'm doing it in the context of RNNs, but it's probably generally applicable.

part 1: rnn introduction, forward prop https://www.youtube.com/watch?v=ppJwiv1qtlE
part 2: backprop concepts https://www.youtube.com/watch?v=cxZFt6DJp90
part 3: why do we want to backprop? https://www.youtube.com/watch?v=pSXu7uXXB9I
part 4: backprop maths, chain rule (sort of like your diagram above, maybe?) https://www.youtube.com/watch?v=0bjONLIU2No
part 5: just defines the python vars gradOutput, gradWeight that I'm using https://www.youtube.com/watch?v=6GOA4LqHyP8
part 6: write out backprop by hand, in python, without using autograd https://www.youtube.com/watch?v=3tfRbELS8Zg
part 7: writing backprop using autograd, one line :) https://www.youtube.com/watch?v=g_sER3OJSjA

Dear SherlockLiao and hughperkins, thank you very much! I will work it out.

Indeed, it worked following SherlockLiao's suggestion, though the net is not stable. (It converged only two times out of eight runs.)

# Define model: 
class net(nn.Module):
    def __init__(self):
        super(net, self).__init__()
        self.fc1 = nn.Linear(W_target.size(0), 1)
        self.fc2 = nn.Linear(2, 1)

    def forward(self, x):
        x1 = self.fc1(x)
        y = torch.cat([x1 ** i for i in range(1, 3)], 1)
        # print(y)
        y_pred = self.fc2(y)
        return y_pred

net1 = net()
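
For reference, here is the training-loop sketch I pair with it (assuming torch.optim is acceptable; the lr=0.01 and the 10000-step cap are just my guesses to tame the instability):

from itertools import count
import torch.optim as optim

optimizer = optim.SGD(net1.parameters(), lr=0.01)

for batch_idx in count(1):
    batch_x, batch_y = get_batch()
    optimizer.zero_grad()
    loss = F.smooth_l1_loss(net1(batch_x), batch_y)
    loss.backward()
    optimizer.step()
    # .item() on recent PyTorch; loss.data[0] on very old versions
    if loss.item() < 1e-3 or batch_idx > 10000:
        break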