Keras to PyTorch for Autoencoder

Hi, could anyone please help me convert the Keras code below to PyTorch?

from keras.layers import Input, Dense
from keras.models import Model, Sequential
from keras import regularizers
from sklearn import preprocessing

# input layer
input_layer = Input(shape=(x.shape[1],))

# encoding part
encoded = Dense(100, activation='tanh', activity_regularizer=regularizers.l1(10e-5))(input_layer)
encoded = Dense(50, activation='relu')(encoded)

# decoding part
decoded = Dense(50, activation='tanh')(encoded)
decoded = Dense(100, activation='tanh')(decoded)

# output layer
output_layer = Dense(x.shape[1], activation='relu')(decoded)

autoencoder = Model(input_layer, output_layer)
autoencoder.compile(optimizer="adadelta", loss="mse")
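For reference, here is my rough sketch of what I think the PyTorch equivalent of the model itself would be (the class name AutoEncoder and the encoder/decoder split into two nn.Sequential blocks are my own choices; as far as I know there is no direct PyTorch counterpart of Keras's activity_regularizer, so the L1 penalty has to be added to the loss by hand, which I try in the training loop further down):

import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        # mirrors the Keras encoder: Dense(100, tanh) -> Dense(50, relu)
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 100),
            nn.Tanh(),
            nn.Linear(100, 50),
            nn.ReLU(),
        )
        # mirrors the Keras decoder: Dense(50, tanh) -> Dense(100, tanh) -> Dense(n_features, relu)
        self.decoder = nn.Sequential(
            nn.Linear(50, 50),
            nn.Tanh(),
            nn.Linear(50, 100),
            nn.Tanh(),
            nn.Linear(100, n_features),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))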

x = data.drop(["Class"], axis=1)
y = data["Class"].values

x_scale = preprocessing.MinMaxScaler().fit_transform(x.values)
x_norm, x_fraud = x_scale[y == 0], x_scale[y == 1]

autoencoder.fit(x_norm[0:2000], x_norm[0:2000], batch_size = 256, epochs = 10, shuffle = True, validation_split = 0.20)
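And here is a rough training loop to match the fit call above (batch size 256, 10 epochs, Adadelta, MSE). I've left the 20% validation split out for brevity, and penalizing the whole encoder output with L1 is only an approximation of Keras's per-layer activity regularizer, which applies to the first Dense layer's activations:

import torch
from torch.utils.data import DataLoader, TensorDataset

x_train = torch.tensor(x_norm[0:2000], dtype=torch.float32)
loader = DataLoader(TensorDataset(x_train), batch_size=256, shuffle=True)

model = AutoEncoder(x_train.shape[1])
optimizer = torch.optim.Adadelta(model.parameters())
criterion = torch.nn.MSELoss()

model.train()
for epoch in range(10):
    for (batch,) in loader:
        optimizer.zero_grad()
        encoded = model.encoder(batch)
        output = model.decoder(encoded)
        # hand-rolled L1 activity penalty, same coefficient as regularizers.l1(10e-5)
        loss = criterion(output, batch) + 10e-5 * encoded.abs().sum()
        loss.backward()
        optimizer.step()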

hidden_representation = Sequential()
hidden_representation.add(autoencoder.layers[0])
hidden_representation.add(autoencoder.layers[1])
hidden_representation.add(autoencoder.layers[2])

The part I'm specifically not getting is this last bit, where the first three layers of the trained autoencoder are reused in a new Sequential model. What is it doing, and what would the equivalent be in PyTorch?
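From what I can tell, those three layers are the input layer plus the two encoding Dense layers, so the new Sequential model just maps the input to the 50-dimensional encoding. If that's right, the PyTorch version shouldn't need a second model at all; calling the encoder half of the sketch above directly should do the same thing (hidden_norm and hidden_fraud are just names I made up):

model.eval()
with torch.no_grad():
    # a forward pass through just the encoder yields the hidden representation
    hidden_norm = model.encoder(torch.tensor(x_norm, dtype=torch.float32))
    hidden_fraud = model.encoder(torch.tensor(x_fraud, dtype=torch.float32))

Does that look right?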