Objective function: difference between MacKay's Bayesian framework and L2 regularization in PyTorch

Hi,
I am a beginner in ML and PyTorch. I am currently working through MacKay's paper on the Bayesian framework, "Bayesian Interpolation".

It introduces an objective function F = αE_W + βE_D, where E_W is the sum of squared weights, E_D is the sum of squared errors, and (α, β) are hyperparameters that need to be set.
This objective function looks a lot like L2 regularization in PyTorch, where the ratio α/β seems to play the role of λ (the optimizer's weight_decay parameter).
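To make the comparison concrete, here is a minimal sketch of the two views side by side. The linear model, the random data, and the values of alpha and beta are made up purely for illustration; the point is only that minimizing F = αE_W + βE_D picks out the same weights as minimizing E_D with an L2 penalty of strength α/β, since dividing F by β only rescales the objective.

```python
import torch
import torch.nn as nn

# Toy setup purely for illustration (model, data, alpha, beta are made up).
torch.manual_seed(0)
model = nn.Linear(3, 1)
x, t = torch.randn(16, 3), torch.randn(16, 1)
alpha, beta = 0.1, 1.0

# Version 1: MacKay-style objective, written out explicitly.
E_D = ((model(x) - t) ** 2).sum()                      # sum of squared errors
E_W = sum((p ** 2).sum() for p in model.parameters()) # sum of squared weights
F = alpha * E_W + beta * E_D

# Version 2: PyTorch style -- minimize E_D and let the optimizer add the
# L2 term. SGD's weight_decay=lam adds lam * w to each weight's gradient,
# which is the gradient of (lam / 2) * sum(w**2), so lam = 2 * alpha / beta
# reproduces the minimizer of F (the overall 1/beta factor only rescales
# the effective learning rate). Note that weight_decay also penalizes the
# bias terms by default.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            weight_decay=2 * alpha / beta)
loss = ((model(x) - t) ** 2).sum()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```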

Is anyone familiar with both methods who could point out the difference between them?

Thanks