MNIST autograd profiling

(Dhakshiin) #1

Hi,

In the code at https://github.com/pytorch/examples/blob/master/mnist/main.py I want to collect profiling information for each layer of my model.
Is the placement of the profiler context below correct?

    def train(args, model, device, train_loader, optimizer, epoch):
        model.train()
        for batch_idx, (data, target) in enumerate(train_loader):
            data, target = data.to(device), target.to(device)
            optimizer.zero_grad()

            # profiler added around forward + backward
            with torch.autograd.profiler.profile() as prof:
                output = model(data)
                loss = F.nll_loss(output, target)
                loss.backward()
            # print after the `with` block exits, once events are recorded
            print(prof)

            optimizer.step()
            if batch_idx % args.log_interval == 0:
                print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                    epoch, batch_idx * len(data), len(train_loader.dataset),
                    100. * batch_idx / len(train_loader), loss.item()))

Do I have to set `requires_grad=True` anywhere in the code before running the profiler?
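For reference, here is a minimal standalone sketch of the profiling I am attempting, with a hypothetical tiny linear model standing in for the MNIST net (parameters of an `nn.Module` have `requires_grad=True` by default):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for the MNIST model
model = nn.Linear(10, 2)
data = torch.randn(4, 10)
target = torch.tensor([0, 1, 0, 1])

# Parameters created by nn.Module already require gradients,
# so no explicit requires_grad=True is needed before profiling.
with torch.autograd.profiler.profile() as prof:
    output = model(data)
    loss = F.nll_loss(F.log_softmax(output, dim=1), target)
    loss.backward()

# The event table is only available after the context manager exits
print(prof.key_averages().table(sort_by="cpu_time_total"))
```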