Global unstructured pruning: importance_scores appears to have no effect

I am trying to prune a model using custom importance scores. As a test, here is what I did:

import torch
import torchvision
import torchvision.transforms as transforms
from torch.utils.data import Dataset, DataLoader
import torch.nn.utils.prune as prune

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model = torchvision.models.vit_b_32(weights=torchvision.models.ViT_B_32_Weights.IMAGENET1K_V1)
model.to(device)
model.eval()

# Collect every leaf module whose class name starts with "Linear" once,
# then build both the random importance scores (keyed by module name)
# and the list of (module, parameter_name) pairs to prune from it.
linear_modules = {
    k: v
    for k, v in model.named_modules()
    if len(list(v.children())) == 0 and v._get_name().lower().startswith('linear')
}

random_activation = {k: torch.rand(v.weight.size()) for k, v in linear_modules.items()}
parameters_to_prune = [(v, "weight") for v in linear_modules.values()]

prune.global_unstructured(
    parameters_to_prune,
    prune.L1Unstructured,
    importance_scores=random_activation,
    amount=0.3,
)

However, I get the same accuracy as when I pass importance_scores=None. Am I using this incorrectly?
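
To double-check that this is not just an accuracy coincidence, I can compare the masks produced with and without the scores. Below is a rough sketch of that check: the masks_for helper is only for illustration, reuses the imports from the snippet above, and keeps the same name-keyed random scores as my code.

def masks_for(use_scores):
    # Fresh, un-pruned copy of the ViT so the two runs stay independent.
    m = torchvision.models.vit_b_32(weights=torchvision.models.ViT_B_32_Weights.IMAGENET1K_V1)
    mods = {
        k: v
        for k, v in m.named_modules()
        if len(list(v.children())) == 0 and v._get_name().lower().startswith('linear')
    }
    # Same name-keyed random scores as in the snippet above.
    scores = {k: torch.rand(v.weight.size()) for k, v in mods.items()} if use_scores else None
    prune.global_unstructured(
        [(v, "weight") for v in mods.values()],
        prune.L1Unstructured,
        importance_scores=scores,
        amount=0.3,
    )
    # global_unstructured attaches a weight_mask buffer to every pruned module.
    return {k: v.weight_mask for k, v in mods.items()}

with_scores = masks_for(True)
without_scores = masks_for(False)
# If the scores were actually used, at least some of these masks should differ.
print(all(torch.equal(with_scores[k], without_scores[k]) for k in with_scores))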