Hello guys

Here is my local binary pattern function:

```python
import cv2
import numpy as np
import torch
from skimage.feature import local_binary_pattern

def lbp(x):
    imgUMat = np.float32(x)
    gray = cv2.cvtColor(imgUMat, cv2.COLOR_RGB2GRAY)
    radius = 2
    n_points = 8 * radius          # 16 sampling points
    METHOD = 'uniform'
    lbp = local_binary_pattern(gray, n_points, radius, METHOD)
    lbp = torch.from_numpy(lbp).long()
    return lbp
```

Here is how I call the `lbp` function:

```python
import matplotlib.pyplot as plt

input_img = plt.imread(trn_fnames[31])  # trn_fnames: my list of training image paths
x = lbp(input_img)
```

When I check `x.shape`, I get `torch.Size([600, 600])`.

Sounds good!!!

But my problem is that when I use `transforms.Lambda(lbp)` in my transform pipeline, the output image is `torch.Size([600])`:

```python
from torchvision import datasets, transforms

tfms = transforms.Compose([
    transforms.Lambda(lbp)])

train_ds = datasets.ImageFolder(trn_dir, transform=tfms)

(train_ds[0][0][0]).shape
```

This prints `torch.Size([600])`!!! That is my problem.

**I need torch.Size([600, 600])**
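To show exactly where the single dimension appears, here is a small check on the dataset built above (the comments on indexing are just how 2D tensor indexing works; `train_ds[0]` returns an `(image, label)` pair):

```python
img, label = train_ds[0]   # Lambda(lbp) has been applied to the image here
print(img.shape)           # shape of the full transformed image
print(img[0].shape)        # img[0] selects only the FIRST ROW of a 2D tensor,
                           # and train_ds[0][0][0] is the same as img[0]
```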

I also tried different ways, such as this:

```python
tfms = transforms.Compose([
    transforms.Lambda(lbp),
    transforms.ToPILImage(),
    transforms.Resize((sz, sz))])
```

And I got this error:

`TypeError: pic should be Tensor or ndarray. Got <class 'torch.Tensor'>.`
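The same error can be triggered outside of `Compose`, which may help narrow it down. My guess (unverified) is that `ToPILImage` only treats 3D `C x H x W` tensors as images, so my 2D `[600, 600]` tensor fails its type check even though it is a `torch.Tensor`:

```python
from torchvision import transforms

x = lbp(input_img)                  # torch.Size([600, 600]), dtype torch.int64
pil = transforms.ToPILImage()(x)    # raises the same TypeError for me
```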

I also added `transforms.ToTensor()` at the end of the pipeline, but I still get the same error:

`TypeError: pic should be Tensor or ndarray. Got <class 'torch.Tensor'>.`
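One variant I have not confirmed yet (the extra `Lambda` and the `uint8` cast are my own additions) would be to hand `ToPILImage` a NumPy array instead of a tensor, since the error message says an ndarray is acceptable:

```python
import numpy as np
from torchvision import transforms

tfms = transforms.Compose([
    transforms.Lambda(lbp),
    transforms.Lambda(lambda t: t.numpy().astype(np.uint8)),  # tensor -> 2D uint8 ndarray
    transforms.ToPILImage(),       # a 2D uint8 ndarray maps to a mode 'L' PIL image
    transforms.Resize((sz, sz)),
    transforms.ToTensor(),
])
```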

I'd really appreciate your comments!

Thank you.