High CPU usage when transforming an image (ndarray) to a tensor


    def __getitem__(self, index):
        datafiles = self.root + self.train_data[index][0]
        image = Image.open(datafiles).convert('RGB')
        transform_ = self.get_compose(max(image.size))
        label = self.train_data[index][1]
        image = transform_(image)

        return image, label
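
For context, `get_compose` is roughly equivalent to the sketch below (this is a simplified stand-in so the snippet above is self-contained; the real function differs in details):

    import torch
    from torchvision import transforms

    # Simplified stand-in for get_compose: resize, crop to 224x224,
    # then convert the PIL image to a float CHW tensor.
    def get_compose(max_side):
        return transforms.Compose([
            transforms.Resize(224),      # shorter side -> 224
            transforms.CenterCrop(224),  # fixed 224x224 crop
            transforms.ToTensor(),       # PIL image -> (3, 224, 224) float tensor
        ])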

When I use this `__getitem__` in my Dataset and run the training script, CPU usage reaches 800+%. But when I remove `image = transform_(image)` and instead return a pre-allocated tensor, as below:

    a = torch.zeros((3, 224, 224))  # class attribute, allocated once

    def __getitem__(self, index):
        datafiles = self.root + self.train_data[index][0]
        image = Image.open(datafiles).convert('RGB')
        transform_ = self.get_compose(max(image.size))
        label = self.train_data[index][1]
        image = self.a  # reuse the shared tensor, no new allocation

        return image, label

CPU usage drops to around 100%. To my surprise, when I assign `image` a newly allocated tensor on every call, CPU usage goes back up to 800+%:

    def __getitem__(self, index):
        datafiles = self.root + self.train_data[index][0]
        image = Image.open(datafiles).convert('RGB')
        transform_ = self.get_compose(max(image.size))
        label = self.train_data[index][1]
        image = torch.zeros((3, 224, 224))  # fresh allocation every call

        return image, label
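
To make the difference concrete: the only change between the last two snippets is whether a tensor is allocated on every call. A standalone comparison of the two variants (function names are mine, just for illustration):

    import torch

    cached = torch.zeros((3, 224, 224))

    def reuse_cached():
        # Corresponds to `image = self.a`: returns a reference to the
        # same tensor every time, no allocation happens.
        return cached

    def allocate_fresh():
        # Corresponds to `image = torch.zeros((3, 224, 224))`: a new
        # 3*224*224 float tensor is allocated and zero-filled per call.
        return torch.zeros((3, 224, 224))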

I use CUDA when running the demo.
If you have any ideas, please reply. Thank you!