I’m trying to refactor my code, and I’ve discovered that if I swap in
torch.nn.CrossEntropyLoss(), my code crashes with a memory error. Negative log likelihood by itself is fine. I’ve been debugging for hours and I’m out of ideas. Does anyone have suggestions for what might be causing this?
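For context, the change is roughly this (logits and targets below are stand-ins for my actual model output and labels):

import torch
import torch.nn.functional as F

logits = torch.randn(8, 10, requires_grad=True)  # stand-in for model output
targets = torch.randint(0, 10, (8,))             # stand-in for my labels

# what I had before, which runs fine:
loss_nll = F.nll_loss(F.log_softmax(logits, dim=1), targets)

# what I'm swapping in, which leads to the memory error in my training loop:
criterion = torch.nn.CrossEntropyLoss()
loss_ce = criterion(logits, targets)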
I can confirm there’s a memory leak by using the following code, which counts the tensors that haven’t been garbage collected; I watch that count climb:
import gc
import torch

count = 0
for obj in gc.get_objects():
    try:
        if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):
            print(type(obj), obj.size())
            count += 1
    except Exception:
        pass
print(count)
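For reference, I run that check once per iteration, roughly like this (count_live_tensors is just the snippet above wrapped in a function that returns the count; the loop body stands in for my real forward/backward/optimizer step):

import gc
import torch

def count_live_tensors():
    # same logic as above, returning the count instead of printing each tensor
    count = 0
    for obj in gc.get_objects():
        try:
            if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):
                count += 1
        except Exception:
            pass
    return count

for step in range(3):
    # ... forward pass, loss, backward, optimizer step would go here ...
    print(step, count_live_tensors())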