BatchNorm1d calls torch.batch_norm, which does not seem to exist in either the source scripts or the documentation

I analyzed the behavior of BatchNorm1d with scripts and a debugger and came across the following issues:

  1. Calling BatchNorm1d goes to __call__ in nn.Module, which executes self.forward(). That dispatches to forward() in torch.nn.modules.batchnorm, which calls F.batch_norm(), which in turn calls torch.batch_norm (see the sketch after this list).

  2. I tried to inspect the contents of torch.batch_norm, but I cannot locate its source or its documentation. torch.batch_norm also seems to run without stepping into any corresponding script and without raising any error during debugging and execution. Can anyone explain this situation? Where can I see the source? Does it run properly, or is there some issue behind this?
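
Here is the minimal script I used to confirm the chain in 1., calling the functional stage directly with the public F.batch_norm API (eval mode, so both calls see the same running statistics):

import torch
import torch.nn as nn
import torch.nn.functional as F

bn = nn.BatchNorm1d(4)
bn.eval()  # use running stats so both calls below see identical state
x = torch.randn(8, 4)

# nn.Module.__call__ -> BatchNorm1d.forward -> F.batch_norm -> torch.batch_norm
y_module = bn(x)

# Calling the functional stage of the chain directly gives the same result:
y_functional = F.batch_norm(
    x, bn.running_mean, bn.running_var,
    weight=bn.weight, bias=bn.bias,
    training=False, momentum=bn.momentum, eps=bn.eps,
)
print(torch.allclose(y_module, y_functional))  # True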

sekigh


One additional piece of information: when I point the cursor at torch.batch_norm() in the Eclipse debugger, the message "builtin_function_or_method: <built-in method batch_norm of type object at xxxxx>" shows up. Where can I see documentation for it, and what does it return?
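
For reference, a quick check in a plain Python session (independent of Eclipse) shows the same thing and why no source can be displayed:

import inspect
import torch

print(type(torch.batch_norm))   # <class 'builtin_function_or_method'>

try:
    inspect.getsource(torch.batch_norm)
except TypeError as err:
    # Built-ins are compiled into an extension module, so inspect
    # cannot retrieve any Python source for them.
    print(err)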
sekigh

I ran into the same problem. Waiting for an answer as well.

I guess the final call goes into precompiled C++ code. The Python stub file with type annotations looks a lot like a C++ header file, so my guess is that it works somewhat like one, though I don't know the details of the communication between the C++ code and the Python API. I wouldn't mind a link describing it briefly, though.
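
As a quick check of that guess: my assumption is that the compiled binding lives on the torch._C extension module and is re-exported as a top-level torch function by torch/__init__.py (this may vary by version):

import torch
import torch._C

# The compiled extension module is torch._C; the generated operator
# bindings live on torch._C._VariableFunctions, and torch/__init__.py
# re-exports them as top-level torch functions.
print(torch.batch_norm)
print(torch._C._VariableFunctions.batch_norm)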

Thank you for your reply. I cannot locate the C++ code file you mentioned. Could you point me to the locations of that Python file and its associated C++ file, if possible? I would like to look into them. Thank you in advance.

I am also wondering the same thing. If you build from source, running

grep -R "batch_norm" torch/csrc/

in the pytorch repo turns up some results. But I also couldn't find the actual equation that does the normalization.
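
For reference, the equation itself is documented for BatchNorm1d as y = (x - E[x]) / sqrt(Var[x] + eps) * gamma + beta. Here is a small sketch applying it by hand and comparing against the module in training mode (my own Python re-implementation for illustration, not the C++ code path):

import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(8, 4)
bn = nn.BatchNorm1d(4)
bn.train()
y_module = bn(x)

# Per-feature batch statistics; the normalization uses the biased variance.
mean = x.mean(dim=0)
var = x.var(dim=0, unbiased=False)
y_manual = (x - mean) / torch.sqrt(var + bn.eps) * bn.weight + bn.bias

print(torch.allclose(y_module, y_manual, atol=1e-6))  # True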

The CPU implementation of the normalization can be found here. The source file also includes the backward pass, stats updates, etc.
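
As an illustration of the stats update that file performs, the documented rule can be checked from Python; this is a re-statement of the documented behavior, not the C++ source:

import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(8, 4)
bn = nn.BatchNorm1d(4, momentum=0.1)
old_mean = bn.running_mean.clone()
old_var = bn.running_var.clone()

bn(x)  # one training-mode forward pass triggers the stats update

# Documented rule: running = (1 - momentum) * running + momentum * batch,
# where the running variance uses the unbiased batch variance.
new_mean = (1 - bn.momentum) * old_mean + bn.momentum * x.mean(dim=0)
new_var = (1 - bn.momentum) * old_var + bn.momentum * x.var(dim=0, unbiased=True)
print(torch.allclose(bn.running_mean, new_mean))  # True
print(torch.allclose(bn.running_var, new_var))    # True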

Thanks for your answer. How can I debug the CPU code?

You could use a debugger such as gdb.

Can I step from torch.batch_norm into the CPU source code?

Yes, you should be able to set breakpoints via break.
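
For example, assuming a debug build of PyTorch from source, a session along these lines should work; the symbol name at::native::batch_norm_cpu and the script name repro.py are assumptions and may differ across versions:

gdb --args python repro.py
(gdb) break at::native::batch_norm_cpu
(gdb) run

gdb will typically offer to make the breakpoint pending until the torch libraries are loaded; answering y is fine.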