I am trying to scale batches of grayscale images into the 0-1 range. At the moment I am using transforms.ToTensor to do this, but it is too slow: after profiling the code I found it accounts for about 80% of the running time. I apply the transform to each image one by one, without a DataLoader, because I am training a DQN with a dynamic data pool. Are there any faster ways to do the same thing? Each image is 1024x1024, so right now it is extremely slow. Thank you very much for your help in advance. I have provided a snippet below.
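For reference, my understanding is that for a single-channel uint8 image, transforms.ToTensor amounts to a dtype conversion plus a divide by 255. A minimal NumPy sketch of that scaling (standalone, not part of my training code):

```python
import numpy as np

# A 1024x1024 grayscale frame as it comes out of the environment (uint8, 0-255).
frame = np.random.randint(0, 256, size=(1024, 1024), dtype=np.uint8)

# Equivalent of ToTensor's value scaling for a single-channel image:
# cast to float32 and map 0-255 into [0, 1].
scaled = frame.astype(np.float32) / 255.0
```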
#Random transition batch is taken from experience replay memory
transitions = self.memory.sample(self.batch_size)
batch_state = []
batch_action = []
batch_reward = []
batch_state_next_state = []
batch_done = []
for t in transitions:
    bs, ba, br, bsns, bd = t
    bs = transform_img_for_model(bs)
    if self.transforms is not None:
        bs = self.transforms(bs)
    batch_state.append(bs)
    batch_action.append(ba)
    batch_reward.append(br)
    bsns = transform_img_for_model(bsns)
    if self.transforms is not None:
        bsns = self.transforms(bsns)
    batch_state_next_state.append(bsns)
    batch_done.append(bd)
# non_blocking= replaces the old async= kwarg, which is a reserved word in Python 3.7+
batch_state = Variable(torch.stack(batch_state).cuda(non_blocking=True), volatile=True)
batch_action = torch.FloatTensor(batch_action).unsqueeze_(0)
batch_action = batch_action.view(batch_action.size(1), -1)
batch_action = Variable(batch_action.cuda(non_blocking=True), volatile=True)
batch_reward = torch.FloatTensor(batch_reward).unsqueeze_(0)
batch_reward = batch_reward.view(batch_reward.size(1), -1)
batch_reward = Variable(batch_reward.cuda(non_blocking=True), volatile=True)
batch_next_state = Variable(torch.stack(batch_state_next_state).cuda(non_blocking=True), volatile=True)
def transform_img_for_model(image_array):
    # Add a channel axis to the HxW grayscale tensor, then tile it
    # to 3 channels so it matches the model's expected input shape.
    image_array_copy = image_array.clone()
    image_array_copy.unsqueeze_(0)
    image_array_copy = image_array_copy.repeat(3, 1, 1)
    return image_array_copy
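In case the shapes are unclear: transform_img_for_model only adds a channel axis and tiles it three times, turning an HxW grayscale frame into a 3xHxW input. The same shape transformation in NumPy terms (a sketch only, with a hypothetical _np-suffixed name to keep it separate from the torch version above):

```python
import numpy as np

def transform_img_for_model_np(image_array):
    # Same shape logic as the torch version: add a leading channel
    # axis, then repeat it 3 times along that axis -> (3, H, W).
    return np.repeat(image_array[np.newaxis, ...], 3, axis=0)

img = np.zeros((1024, 1024), dtype=np.float32)
out = transform_img_for_model_np(img)
```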