Hi All,
I have referred to the official PyTorch documentation. When implementing an autoencoder, we usually define a model class that inherits from nn.Module and pass the model's parameters to an optimizer. But is there any way to write a function that returns initialized parameters given a visible size and a hidden size?
Here is some sample code:
W1, b1, W2, b2 = get_initialized_vars(visibleSize, hiddenSize)
optimizer = torch.optim.Adam([W1, b1, W2, b2], lr = learning_rate)
The function get_initialized_vars would return the parameters, which are then passed to the Adam optimizer.
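One way to do this (not from the original post; the initialization scheme and sizes below are my own assumptions) is to wrap plain tensors in torch.nn.Parameter, which sets requires_grad=True so an optimizer can update them. A minimal sketch:

```python
import torch

def get_initialized_vars(visible_size, hidden_size):
    # Wrap tensors in nn.Parameter so the optimizer can update them.
    # Small random weights and zero biases are one common choice;
    # any initialization scheme could be substituted here.
    W1 = torch.nn.Parameter(torch.randn(hidden_size, visible_size) * 0.01)
    b1 = torch.nn.Parameter(torch.zeros(hidden_size))
    W2 = torch.nn.Parameter(torch.randn(visible_size, hidden_size) * 0.01)
    b2 = torch.nn.Parameter(torch.zeros(visible_size))
    return W1, b1, W2, b2

# Usage, matching the snippet above (sizes are placeholders):
W1, b1, W2, b2 = get_initialized_vars(64, 16)
optimizer = torch.optim.Adam([W1, b1, W2, b2], lr=1e-3)
```

In the training loop you would compute the forward pass manually (e.g. `h = torch.sigmoid(x @ W1.t() + b1)`), call `loss.backward()`, then `optimizer.step()`, just as with an nn.Module-based model.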