What is the difference between Kaggle kernels and Google Colab?

I fell in love with machine learning.
I have one question.
Why does my code run in a Kaggle kernel but not in Google Colab?
I don't know how to make it work.
Please help me.

By the way, I am using timm's EfficientNet.

CQT kernels created, time used = 0.0308 seconds
/usr/local/lib/python3.7/dist-packages/nnAudio/utils.py:326: SyntaxWarning: If fmax is given, n_bins will be ignored
  warnings.warn('If fmax is given, n_bins will be ignored', SyntaxWarning)

ValueError                                Traceback (most recent call last)
in ()
      5 plt.figure(figsize=(16,12))
----> 6 image, label = train_dataset[i]
      8 plt.imshow(image[0])

4 frames
/usr/local/lib/python3.7/dist-packages/albumentations/pytorch/transforms.py in apply(self, img, **params)
     91     def apply(self, img, **params):  # skipcq: PYL-W0613
---> 92         return torch.from_numpy(img.transpose(2, 0, 1))
     94     def apply_to_mask(self, mask, **params):  # skipcq: PYL-W0613

ValueError: axes don't match array

Based on the error message, NumPy isn't able to transpose the array. This can happen if, for example, you are now loading grayscale images where you previously loaded RGB images, so make sure the same image format is used in both setups.
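To illustrate the failure mode: `ToTensorV2` calls `img.transpose(2, 0, 1)`, which requires a 3-D `(H, W, C)` array. A CQT spectrogram (or a grayscale image) is often a 2-D `(H, W)` array, and transposing a 2-D array with three axes raises exactly this `ValueError`. A minimal sketch of the problem and one possible fix (adding an explicit channel axis before the transform runs):

```python
import numpy as np

# A CQT spectrogram or grayscale image is typically 2-D: (freq/height, time/width).
gray = np.random.rand(64, 128).astype(np.float32)

try:
    gray.transpose(2, 0, 1)  # what ToTensorV2 does internally; needs 3 axes
except ValueError as e:
    print(e)  # "axes don't match array"

# Possible fix: give the array an explicit channel axis first.
hwc = np.expand_dims(gray, axis=2)  # (64, 128) -> (64, 128, 1)
chw = hwc.transpose(2, 0, 1)        # (1, 64, 128), the (C, H, W) layout PyTorch expects
print(chw.shape)
```

You could apply `np.expand_dims` in your `Dataset.__getitem__` before passing the image to the albumentations pipeline.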

def get_transforms(*, data):
    if data == 'train':
        return A.Compose([
            # ... training transforms ...
        ])
    elif data == 'valid':
        return A.Compose([
            # ... validation transforms ...
        ])
Is there a solution that would still use this transform? I'm really sorry for the late reply.

If I add the above transform, it works in the Kaggle EDA notebook but not in Google Colab.
Without it, it works in both.
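One common reason the same code behaves differently on Kaggle and Colab is simply that the two environments preinstall different library versions, and `ToTensorV2`'s handling of 2-D (grayscale) input has changed between albumentations releases. A quick diagnostic you could run in both environments and compare (package list is illustrative):

```python
import importlib

# Print the installed version of each relevant package, or note its absence.
for name in ("albumentations", "numpy", "torch", "timm"):
    try:
        mod = importlib.import_module(name)
        print(name, getattr(mod, "__version__", "unknown"))
    except ImportError:
        print(name, "not installed")
```

If the versions differ, pinning the same albumentations version in both notebooks (e.g. `pip install albumentations==<version>`) would rule that out.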