TorchLens: New package for feature extraction and visualization of arbitrary PyTorch models

I recently released a new package, TorchLens, for extracting hidden layer activations and visualizing arbitrary PyTorch models. Here’s the bullet point pitch for anyone interested:

– It’s designed to work on any PyTorch model whatsoever, not just models from a predefined library (but if I’ve missed any edge cases, let me know)
– It can extract activations from, and visualize, not just PyTorch module outputs but also the results of any PyTorch function call involved in the forward pass (a minimal usage sketch follows this list).
– It works not just for static computational graphs, but also for dynamic computational graphs (unlike the torchvision feature_extraction module, which only works for static graphs).
– In addition to layer activations, it gives you extensive metadata about both the overall model and each individual layer. The goal is to give you every last bit of information you could possibly want to know about the model.
– It can identify and visualize recurrent feedback loops in the model; it does this both by finding modules that apply the same parameters multiple times, and by finding adjacent repeated sequences of the same operations.
– The visualization can show hierarchically nested modules, and lets you specify how many levels of nesting to display.
– If there’s “if-then” logic in the forward pass, this is visualized as well.
– In addition to the forward-pass activations, it can also extract the gradients from a backward pass.
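
For concreteness, here is a minimal sketch of the basic workflow, assuming the entry points described in the project docs (log_forward_pass and show_model_graph); the AlexNet model and the specific layer label are purely illustrative, so check the printed output for the labels your own model gets:

```python
# Minimal usage sketch: log a forward pass, read out one layer's saved
# activations, and render the computational graph.
import torch
import torchvision
import torchlens as tl

model = torchvision.models.alexnet()
x = torch.rand(1, 3, 224, 224)

# Run the model once while logging every operation in the forward pass.
# vis_opt='rolled' also draws the graph, collapsing any repeated loops
# into a single node.
model_history = tl.log_forward_pass(model, x, layers_to_save='all', vis_opt='rolled')

# Printing the history lists every logged layer along with its metadata.
print(model_history)

# Pull the saved activation tensor for one layer
# (the label 'conv2d_1_1' is illustrative; use a label from the printout).
conv_out = model_history['conv2d_1_1'].tensor_contents
print(conv_out.shape)

# If you just want the graph rendering without saving activations:
tl.show_model_graph(model, x)
```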

I am trying to make this as useful a tool as possible, so please do let me know if you ever have any wishlist items or grievances :] Hope it’s useful for folks.