Strange CPU utilization behavior when creating a new tensor

import torch

while True:
    # Allocate a fresh zero-filled tensor on every iteration
    a = torch.zeros(100000)

This is my test code for monitoring CPU utilization. When I watch it with the top command, I see this process consuming 100% CPU (one full core), which seems reasonable.

However, when I change 100000 to 200000, top shows the process consuming 400% CPU (four full cores). I don't know how zeros is implemented or why it consumes so much CPU.

Hi,

torch.zeros() consumes some CPU because it has to zero out the memory it allocates.
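
As a rough way to see the cost of the zero-fill itself, you could compare torch.zeros() with torch.empty(), which allocates memory without initializing it. This is just an illustrative sketch; the loop count is arbitrary and the timings will vary by machine:

import time
import torch

n = 200000
t0 = time.perf_counter()
for _ in range(1000):
    a = torch.empty(n)   # allocate only, memory left uninitialized
t1 = time.perf_counter()
for _ in range(1000):
    b = torch.zeros(n)   # allocate and write zeros to every element
t2 = time.perf_counter()
print(f"empty: {t1 - t0:.4f}s, zeros: {t2 - t1:.4f}s")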

The difference you see between sizes of 100000 and 200000 is that, for big tensors, PyTorch automatically parallelizes the work across multiple threads to speed up the computation. You can use the torch.set_num_threads() function to control how many threads (and hence cores) are used for big tensors.
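
For example, pinning the computation to a single thread should bring top back to roughly 100% even for the larger size. A minimal sketch (the thread count of 1 is just an illustrative value):

import torch

# Limit intra-op parallelism to one thread before starting the heavy work
torch.set_num_threads(1)

while True:
    a = torch.zeros(200000)  # top should now show ~100% (one core)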