Autograd profiler total_average() raises a TypeError on +=

I had written a layer on top of the autograd profiler to collect timings over custom ranges, and after upgrading to PyTorch 1.2 my usage no longer works.

When a designated timing range begins, I run the following snippet (part of a longer block in my codebase):

import torch
from torch.autograd.profiler import EventList, parse_cpu_trace

records = torch.autograd._disable_profiler()
events_average = EventList(parse_cpu_trace(records)).total_average()
cpu_time = events_average.cpu_time_total / 1000
cuda_time = events_average.cuda_time_total / 1000

When it runs, the .total_average() call inside PyTorch crashes at the line `total_stat += evt` with: TypeError: unsupported operand type(s) for +=: 'FunctionEventAvg' and 'FunctionEvent'. I noticed that FunctionEventAvg no longer has an __add__ override, though it used to. It seems this changed with this commit.
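For context, here is a minimal, PyTorch-free illustration of why that TypeError appears: Python evaluates `a += b` by first trying `a.__iadd__(b)` and then falling back to `a.__add__(b)`; if the left-hand type defines neither, the interpreter raises exactly this kind of error. The class names below are stand-ins, not the real profiler classes.

```python
class Avg:
    """Stand-in for a class with no __iadd__ or __add__ defined."""
    pass

class Event:
    """Stand-in for the right-hand operand."""
    pass

a, e = Avg(), Event()
try:
    a += e  # no __iadd__ and no __add__ on Avg -> TypeError
except TypeError as exc:
    print(exc)  # unsupported operand type(s) for +=: 'Avg' and 'Event'
```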

As a workaround I modified the source to change += to .add, and it seems to work now. Should FunctionEventAvg have an __iadd__ method like it used to, or should the += be changed to .add? Or am I using it wrong?


Thanks for reporting and tracking it down in the commit!
I’ve never used the EventList, but if your use case is a “standard” one (not some kind of hack), would you mind creating a GitHub issue and linking your topic there?

If I’m understanding the APIs correctly, my use case should be a “standard” one. I just filed the GitHub issue: Thanks for your help!


I ran into the same issue, and solved it by adding an __iadd__ method that calls self.add(other).
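A sketch of that fix, using simplified stand-ins rather than the actual torch.autograd.profiler classes: give the averaging class an __iadd__ that delegates to its existing add() method, so `total_stat += evt` works again.

```python
class FunctionEvent:
    """Stand-in for a single profiled event."""
    def __init__(self, cpu_time):
        self.cpu_time = cpu_time

class FunctionEventAvg:
    """Stand-in for the running average accumulator."""
    def __init__(self):
        self.cpu_time_total = 0.0
        self.count = 0

    def add(self, other):
        # Accumulate another event into the running totals.
        self.cpu_time_total += other.cpu_time
        self.count += 1
        return self

    def __iadd__(self, other):
        # Make `avg += evt` behave exactly like `avg.add(evt)`.
        return self.add(other)

total_stat = FunctionEventAvg()
for evt in (FunctionEvent(2.0), FunctionEvent(4.0)):
    total_stat += evt  # no longer raises TypeError
print(total_stat.cpu_time_total)  # 6.0
```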