Some CPU cores stop working when I increase training epochs

Hi, I’m using the profiling tool VTune Amplifier. I’m interested in parallel programming at both the thread and instruction levels. My server has 16 cores and supports AVX instructions (but not AVX2 or AVX-512).

lscpu gives:

Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 62
Model name: Intel® Xeon® CPU E5-2650 v2 @ 2.60GHz
Stepping: 4
CPU MHz: 1200.433
CPU max MHz: 3400.0000
CPU min MHz: 1200.0000
BogoMIPS: 5201.92
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 20480K
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm cpuid_fault pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts md_clear flush_l1d
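Since the two NUMA nodes interleave even/odd CPU IDs, here is a quick stdlib-only check (Linux-specific `sched_getaffinity`) that could rule out a restricted affinity mask as the reason some cores sit idle; this is just a sketch, not something VTune needs:

```python
import os

# Logical CPUs visible to the OS (should be 16 on this machine)
print("cpu_count:", os.cpu_count())

# CPUs this process is actually allowed to be scheduled on (Linux only);
# a mask smaller than 0-15 would already explain idle cores
allowed = sorted(os.sched_getaffinity(0))
print("allowed CPUs:", allowed)
```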

I’m profiling the resnet18 training code below. I’ve omitted the code that prints the loss and accuracy.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import torchvision.models as models

transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])

#transform_test = transforms.Compose([
#    transforms.ToTensor(),
#    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
#])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform_train)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=128,
                                          shuffle=True, num_workers=0)
#testset = torchvision.datasets.CIFAR10(root='./data', train=False,
#                                       download=True, transform=transform_test)
#testloader = torch.utils.data.DataLoader(testset, batch_size=100,
#                                         shuffle=False, num_workers=2)

# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)

# define network
net = models.resnet18(pretrained=False)

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4)

for epoch in range(15):  # loop over the dataset multiple times
    running_loss = 0.0

    for i, data in enumerate(trainloader, 0):

        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # accumulate loss
        running_loss += loss.item()
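For reference, my assumption is that the 16 AVX worker threads come from PyTorch’s intra-op (OpenMP/MKL) thread pool; I ran the profiles above with the defaults unchanged. A sketch of how that pool could be capped, in case it matters for reproducing my setup (the environment variables must be set before torch is imported):

```python
import os

# Assumption: PyTorch's CPU backends (OpenMP / MKL) honour these
# variables; they must be set *before* `import torch`.
os.environ["OMP_NUM_THREADS"] = "16"
os.environ["MKL_NUM_THREADS"] = "16"

# import torch               # import only after the env vars are set
# torch.set_num_threads(16)  # equivalent runtime knob

print("OMP_NUM_THREADS =", os.environ["OMP_NUM_THREADS"])
```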

In my profiling result, I found that the AVX dynamic code regions (the hotspots in my code) are mostly executed by 16 threads. (48–49 threads are running in total, but 16 of them terminate before training starts, and another 16 execute other code.) I also found something interesting: as I increase the number of training epochs, some CPU cores stop doing work. I attached the result images below via a Google Drive link; the files numbered 1–4 are for 5, 15, 25, and 50 epochs, respectively.

VTune Results

The CPU Utilization metrics are 58.3%, 62.1%, 53%, and 49.4%, respectively. One note: for 50 epochs I profiled twice, because the first run showed an extremely low metric of 31.1%. The result image for that run is in the link above, as the file numbered 5.
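To make those percentages concrete, here is the average number of busy cores they imply (simple arithmetic, assuming VTune averages utilization over all 16 logical CPUs):

```python
n_cores = 16
for epochs, util_pct in [(5, 58.3), (15, 62.1), (25, 53.0), (50, 49.4)]:
    busy_cores = n_cores * util_pct / 100.0
    print(f"{epochs:2d} epochs: ~{busy_cores:.1f} of {n_cores} cores busy on average")
```

So at 50 epochs the run effectively uses fewer than 8 of the 16 cores.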

Could anyone give me some insight into these results?