I have a tensor of shape torch.Size([1291162, 28, 28, 1]). Since this tensor is so big, I decided to take a batch out of it:
yr = x_train[::6400]
print(yr.shape)
This gives back a tensor of shape torch.Size([202, 28, 28, 1]).
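Note that `x_train[::6400]` is a strided slice, not a contiguous batch: it keeps every 6400th sample. The resulting length is ceil(N / step), which is where 202 comes from. A minimal sketch of that arithmetic (plain Python, no tensors needed):

```python
import math

N, step = 1291162, 6400
num_kept = math.ceil(N / step)  # length of x_train[::step]
print(num_kept)  # 202
```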
In my train module, I do this:

for t in range(2):
    y_pred = model(yr.float())
I want to extract such tensors efficiently in the for loop so they can be fed to the model, i.e. grab a different yr-like batch every time the loop starts and feed it to the model accordingly.
You could try to use

for _tensor in my_tensor.split(202):
    pred = model(_tensor)
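`Tensor.split(202)` chunks the tensor along dim 0 into pieces of size 202 (the last piece may be smaller). A plain-Python analogue of that chunking, as a sketch (the real `split` returns tensor views, this just slices a list):

```python
# Sketch of what tensor.split(batch_size) does along dim 0.
def split_batches(data, batch_size):
    return [data[i:i + batch_size] for i in range(0, len(data), batch_size)]

chunks = split_batches(list(range(10)), 3)
print([len(c) for c in chunks])  # [3, 3, 3, 1] -- last chunk may be smaller
```

Iterating over the chunks inside the training loop gives you a fresh batch each iteration instead of re-feeding the same `yr` every time.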
I had another doubt. I initialize my model as model = Model(28). My input tensor is [202, 1, 28, 28]. Now PyTorch expects the channel size to be on the second dim, but I cannot pass this to my Model because then it errors that sizes cannot be negative.
def conv_layer(ni, nf, kernel_size=3, stride=1):
    return nn.Sequential(
        nn.Conv2d(ni, nf, kernel_size=kernel_size, bias=False,
                  stride=stride, padding=kernel_size // 2),
        nn.BatchNorm2d(nf, momentum=0.01),
        nn.LeakyReLU(negative_slope=0.1, inplace=True)
    )
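A side note on the `padding=kernel_size//2` choice in this layer: with stride 1 it preserves the spatial size ("same" padding). A quick sketch of the Conv2d output-size formula, out = floor((in + 2*padding - kernel) / stride) + 1:

```python
# Conv2d spatial output size for one dimension.
def conv_out(size, kernel, stride, padding):
    return (size + 2 * padding - kernel) // stride + 1

print(conv_out(28, 3, 1, 1))  # 28 -- "same" padding at stride 1
print(conv_out(28, 3, 2, 1))  # 14 -- stride 2 halves the map
```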
If model(28) is done, then ni takes the value 28, and the model treats my tensor as if it had 28 channels, but it has only 1. So how do I pass [202, 1, 28, 28] so that my model treats it as a 1-channel image?
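The usual fix, assuming `Model`'s constructor argument becomes `ni` of the first `conv_layer`: build the model with 1 input channel (`Model(1)`, since `ni` is the channel count, not the image size), and move the channel axis from last to second with `permute`, e.g. `yr.permute(0, 3, 1, 2)` turns NHWC into NCHW. A minimal sketch of just the shape math (in PyTorch the same reindexing is done by `Tensor.permute`):

```python
# Reorder a shape tuple by an axis order, mimicking tensor.permute on shapes.
def permute_shape(shape, order):
    return tuple(shape[i] for i in order)

nhwc = (202, 28, 28, 1)           # batch, height, width, channels
nchw = permute_shape(nhwc, (0, 3, 1, 2))
print(nchw)  # (202, 1, 28, 28) -- what Conv2d expects
```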