PyTorch torch.nn equivalent of TensorFlow (Keras) Dense layers?

The example I cited seems to have two Dense layers, of size 128 and 10 respectively. But in the torch example, the user defines a single Linear with both 128 and 10.
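
To make the comparison concrete, this is roughly what the two snippets look like (the ReLU activation on the first Dense layer is my assumption; the example I cited may differ):

import tensorflow as tf
import torch

# Keras example: two Dense layers, sized 128 and 10
keras_layers = [
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10),
]

# Torch example: a single Linear that takes both numbers at once
torch_layer = torch.nn.Linear(128, 10)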

I then looked at the more detailed definitions.

PyTorch nn.Linear parameters:

CLASS torch.nn.Linear(in_features, out_features, bias=True, device=None, dtype=None)

Keras Dense:

tf.keras.layers.Dense(
    units, activation=None, use_bias=True,
    kernel_initializer='glorot_uniform', bias_initializer='zeros',
    kernel_regularizer=None, bias_regularizer=None,
    activity_regularizer=None, kernel_constraint=None,
    bias_constraint=None, **kwargs
)

In the Keras Dense layer, units is defined as: “Positive integer, dimensionality of the output space.”
The keyword here is output: does that mean this parameter is equivalent to out_features in PyTorch's nn.Linear? If so, what should in_features be?
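
My tentative reading is that Keras infers the input size from the previous layer (or from the first input it sees), so in torch I would have to spell out in_features explicitly. Something like the sketch below, where 784 (a flattened 28x28 input) and the ReLU are my assumptions, not something given in the example:

import torch.nn as nn

# My guess at the mapping (784 = 28*28 flattened input is assumed):
#   Dense(128, activation='relu') -> nn.Linear(in_features=784, out_features=128) + nn.ReLU()
#   Dense(10)                     -> nn.Linear(in_features=128, out_features=10)
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

Is that the right way to think about it?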