Evaluating Barlow Twins on the CIFAR-10 dataset using the linear evaluation protocol and the available pretrained weights

I use the pre-trained weights provided by FAIR for Barlow Twins, a new self-supervised learning approach. My top-1 validation accuracy stays almost static at around 68%, while my top-5 accuracy (97%) is higher than the official results.

import torch
import torch.nn as nn
import torch.optim as optim

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load the pretrained Barlow Twins ResNet-50 backbone and freeze it
model = torch.hub.load('facebookresearch/barlowtwins:main', 'resnet50')
model.fc = nn.Identity()
model.requires_grad_(False).to(device)

# Linear classifier on top of the 2048-d frozen features
classifier = nn.Linear(2048, 10).to(device)
classifier.weight.data.normal_(mean=0.0, std=0.01)
classifier.bias.data.zero_()

criterion = nn.CrossEntropyLoss().to(device)
optimizer = optim.Adam(classifier.parameters(), lr=1e-3, weight_decay=1e-6)
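
A minimal sketch of the CIFAR-10 data pipeline such a linear probe typically uses; the resize to 224×224 and the ImageNet normalization statistics are assumptions here (the backbone was pretrained on ImageNet), not confirmed settings:

import torch
import torchvision
import torchvision.transforms as T

# Assumption: CIFAR-10 images (32x32) are resized to 224x224 and normalized
# with ImageNet statistics, since the backbone is an ImageNet-pretrained ResNet-50.
normalize = T.Normalize(mean=[0.485, 0.456, 0.406],
                        std=[0.229, 0.224, 0.225])
transform = T.Compose([T.Resize(224), T.ToTensor(), normalize])

train_set = torchvision.datasets.CIFAR10(root='./data', train=True,
                                         download=True, transform=transform)
val_set = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)

train_loader = torch.utils.data.DataLoader(train_set, batch_size=256,
                                           shuffle=True, num_workers=4)
val_loader = torch.utils.data.DataLoader(val_set, batch_size=256,
                                         shuffle=False, num_workers=4)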

and the rest is the training loop …

![baa|690x373](upload://o35oIvJFn8e8dEbEuhV0fkntTjh.png)
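
For reference, a minimal sketch of a standard linear-probe training and evaluation loop on top of the setup above; the epoch count is a placeholder, the backbone is kept frozen in eval mode, and train_loader / val_loader refer to the data-pipeline sketch:

# Sketch of a linear-probe loop: only the classifier is updated, and the frozen
# backbone stays in eval mode so its BatchNorm running statistics are untouched.
model.eval()
for epoch in range(100):  # epoch count is an assumption
    classifier.train()
    for images, targets in train_loader:
        images, targets = images.to(device), targets.to(device)
        with torch.no_grad():
            features = model(images)  # 2048-d frozen features
        loss = criterion(classifier(features), targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Validation: top-1 accuracy of the linear classifier
    classifier.eval()
    correct = total = 0
    with torch.no_grad():
        for images, targets in val_loader:
            images, targets = images.to(device), targets.to(device)
            logits = classifier(model(images))
            correct += (logits.argmax(dim=1) == targets).sum().item()
            total += targets.size(0)
    print(f'epoch {epoch}: top-1 = {100.0 * correct / total:.2f}%')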