Strange behaviour in PyTorch Net

Hi,

I’ve been trying to create my first NN with PyTorch. I want to model a simple univariate linear regression.

However, my results are strange, not to say erroneous or weird. This is the notebook (feel free to modify it)

The NB’s name is (Pytorch NN) Linear Regression

I watched Janani Ravi’s introductory courses on PyTorch (at the same time as the fast.ai v3 course)


In both courses, she uses almost the same net design and hyperparameters as I do, but my results are … I don’t know how to describe them :frowning:

Please help me figure out what is going on here.

Best regards

Jonathan

I would recommend scaling the problem down a bit and trying to overfit a small data snippet (e.g. just 10 samples) to make sure your training code doesn’t have any obvious errors.
If that’s not working out of the box, you could play around with some hyperparameters.
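The overfitting check could look something like this minimal sketch (the data, hyperparameters, and epoch count here are just assumptions for illustration, not taken from your notebook):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# 10 samples from a known linear relation y = 3x + 2 (assumed for the sketch)
x = torch.linspace(-1, 1, 10).unsqueeze(1)
y = 3 * x + 2

model = nn.Linear(1, 1)  # univariate linear regression: one weight, one bias
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(500):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

# After overfitting just 10 clean samples, the loss should be near zero
# and the parameters should approach the true weight (3) and bias (2).
print(loss.item(), model.weight.item(), model.bias.item())
```

If the loss doesn’t go to roughly zero on a toy set like this, the bug is in the training loop itself rather than in the model size or the data.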

Let us know if you get stuck. :wink:

Thanks @ptrblck … I’ll do it.

Here are my notebooks, updated and all of them working. What I realized is that when I try a higher LR, my model seems to stop learning (the plots show that situation)
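That matches the usual gradient-descent picture: if the learning rate is too large relative to the curvature of the loss, each update overshoots the minimum and the loss stops decreasing or blows up. A library-free sketch on a 1-D quadratic loss (the specific numbers are just assumptions for illustration):

```python
# Minimize loss(w) = (w - 3)**2 with plain gradient descent.
# The gradient is 2 * (w - 3), so each step maps the error (w - 3)
# to (1 - 2*lr) * (w - 3): updates contract iff |1 - 2*lr| < 1, i.e. lr < 1.
def descend(lr, steps=50, w=0.0):
    for _ in range(steps):
        w -= lr * 2 * (w - 3)
    return w

print(descend(0.1))  # small LR: converges toward the optimum w = 3
print(descend(1.5))  # too-large LR: every step overshoots, w diverges
```

The same mechanism applies to your net: past a certain LR, the weights bounce around (or diverge) instead of settling, which looks like the model “stopped learning”.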

Do you have any resource where I can learn from the following topics?

  1. How many layers and neurons to use in a certain context?
  2. Which activation function should I use in each layer?
  3. What is the rationale for using a certain activation function on a layer?

Basically, I want to learn how to design a NN.

Best regards my friend

To learn more about these points you could: