Analyzing the inference of a model

So I'm new to PyTorch.
I'm wondering if there is a way to gain information about a model during inference, like which neurons were activated, or a history of what each layer's output was as the data moved through the model.
Is that beyond the scope of what PyTorch can do? If not, is it beyond the scope of what someone with an amateurish knowledge of machine learning can do?

I'm far from experienced in these kinds of things, but I'm really curious to see what is happening under the hood.

Yes, it’s possible to inspect the intermediate activations, e.g. via forward hooks as described here.
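A minimal sketch of the forward-hook approach, using a small toy `nn.Sequential` model purely for illustration (the hook registration works the same way on any `nn.Module`):

```python
import torch
import torch.nn as nn

# Toy model just for demonstration; any nn.Module works the same way.
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 2),
)

# Each layer's output will be stored here, keyed by the submodule name.
activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Detach so the stored tensors don't keep the autograd graph alive.
        activations[name] = output.detach()
    return hook

# Register a forward hook on every submodule (skip the top-level container).
for name, module in model.named_modules():
    if name:
        module.register_forward_hook(make_hook(name))

x = torch.randn(1, 4)
out = model(x)

# activations now holds the output of each layer for this forward pass.
for name, act in activations.items():
    print(name, tuple(act.shape))
```

After the forward pass, `activations["1"]` is the output of the `ReLU`, so `activations["1"] > 0` tells you which of those neurons fired for this input.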