I had written a layer on top of the autograd profiler to collect timings over custom ranges, and after upgrading to PyTorch 1.2 it no longer works.
When a designated timing range begins, I run the following snippet (excerpted from a longer block in my codebase):
import torch
from torch.autograd.profiler import EventList, parse_cpu_trace

records = torch.autograd._disable_profiler()
events_average = EventList(parse_cpu_trace(records)).total_average()
cpu_time = events_average.cpu_time_total / 1000   # profiler times are in microseconds; convert to ms
cuda_time = events_average.cuda_time_total / 1000
When it runs, the .total_average() call inside PyTorch crashes at the line total_stat += evt with: TypeError: unsupported operand type(s) for +=: 'FunctionEventAvg' and 'FunctionEvent'. I noticed FunctionEventAvg no longer has an __add__ override, though it used to; it seems that changed with this commit.
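For reference, I believe the same error shows up with just the public profiler API, without my custom layer involved. A minimal sketch:

import torch
from torch.autograd.profiler import profile

x = torch.randn(4, 4)
with profile() as prof:
    y = x.matmul(x)   # any op, just so the event list is non-empty
prof.total_average()  # goes through the same total_stat += evt line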
I modified the source to change the += to .add, and it seems to work now. Should FunctionEventAvg have an __iadd__ method like it used to? Or should the += be changed to .add? Am I using it wrong?
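In the meantime, a workaround that avoids editing the installed profiler.py would be to monkey-patch the class; a minimal sketch, assuming FunctionEventAvg.add() mutates the accumulator in place:

from torch.autograd.profiler import FunctionEventAvg

def _iadd(self, other):
    # Delegate to the existing add() and return self so that
    # total_stat += evt keeps the same accumulator object.
    self.add(other)
    return self

FunctionEventAvg.__iadd__ = _iadd

That keeps total_average() working for now without touching the PyTorch source.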
Thanks!