I have defined some variables inside the forward pass. The input has 3 channels, and each channel is assigned to a different variable.
One channel is assigned directly: self.channel_1 = self.input[:,0,:,:]
The second and third channels are assigned after an initialization step, as shown below.
self.channel_2 = torch.ones(1,1,256,256).to(self.device)
self.channel_3 = torch.ones(1,1,256,256).to(self.device)
self.channel_2 = self.input[0,1,:,:]
self.channel_3 = self.input[0,2,:,:]
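For reference, here is a minimal, self-contained sketch of the two assignment styles in a hypothetical module (the class name and shapes are assumptions, not from my actual code). It shows that both styles end up binding views of the input, and that the torch.ones buffers created in __init__ are discarded as soon as forward() rebinds the attributes:

```python
import torch

class Net(torch.nn.Module):
    # Hypothetical minimal module reproducing the two assignment styles.
    def __init__(self):
        super().__init__()
        # Style 2: pre-allocated tensors. These allocations are effectively
        # wasted, because forward() rebinds the attributes to slices of x.
        self.channel_2 = torch.ones(1, 1, 256, 256)
        self.channel_3 = torch.ones(1, 1, 256, 256)

    def forward(self, x):
        # Style 1: direct slice -- a view of x, no new memory is allocated.
        self.channel_1 = x[:, 0, :, :]
        # Style 2 after rebinding: also just views of x; the torch.ones
        # buffers from __init__ are dropped on the first forward call.
        self.channel_2 = x[0, 1, :, :]
        self.channel_3 = x[0, 2, :, :]
        return self.channel_1 + self.channel_2 + self.channel_3

net = Net()
x = torch.randn(1, 3, 256, 256, requires_grad=True)
out = net(x)
# The stored slices share storage with x, so keeping them as module
# attributes keeps x (and, if it requires grad, its autograd graph)
# alive between iterations.
print(net.channel_1.data_ptr() == x.data_ptr())
```

Note that because the slices are views, storing them on the module keeps references to the whole input tensor across iterations, which may be relevant to the slowdown.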
As the epochs progress, training slows down.
(1) What is the difference between the two assignment styles above? Could it be the cause of the slowdown?
(2) Can I call `torch.cuda.empty_cache()` at the end of each epoch to speed up the next one?
I am using torch 1.4.0.