To save and reload a model, you should serialize its state_dict, which contains all parameters and buffers, as explained in the serialization docs.
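A minimal sketch of this workflow, assuming a small example model and an illustrative file name ("model.pt"):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)

# Save only the state_dict (parameters and buffers), not the whole module
torch.save(model.state_dict(), "model.pt")

# Recreate the model with the same architecture and restore its state
model2 = nn.Linear(4, 2)
model2.load_state_dict(torch.load("model.pt"))
```

Saving the state_dict rather than the full module keeps the checkpoint decoupled from the code layout, so refactoring the model class doesn't break old files.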
You could append all outputs or predictions to a list after detaching them and just store the list.
Detaching is necessary, as otherwise the whole computation graph would be stored with each output and your memory usage would grow:
outputs = []
for data in loader:
    output = model(data)
    outputs.append(output.detach().cpu())
If you are wrapping the code in a with torch.no_grad() block, as is common during evaluation, no computation graph is created, so you don't need to detach the outputs.
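A runnable sketch of such an evaluation loop, assuming a toy model and a dummy DataLoader in place of your own:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Linear(4, 2)
loader = DataLoader(TensorDataset(torch.randn(8, 4)), batch_size=4)

outputs = []
model.eval()
with torch.no_grad():
    for (data,) in loader:
        # Inside no_grad no graph is built, so .detach() is unnecessary;
        # .cpu() still moves the tensors off the GPU before storing them
        outputs.append(model(data).cpu())

# Concatenate the per-batch outputs into a single tensor
outputs = torch.cat(outputs)
```

Calling model.eval() alongside no_grad also switches layers such as dropout and batchnorm into evaluation mode, which no_grad alone does not do.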