Normalizing non-image tensors

I have a 1-dimensional NumPy ndarray containing 4 floats, e.g. [a, b, c, d]. I am trying to use the torchvision transforms functions in order to normalize my data. It currently looks like this:

transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(get_mean(), get_std())
])

and in my custom Dataset class, I apply the transform in __getitem__() like so:

 return self.transform(result)

However, I get the following error message:

raise ValueError('pic should be 2/3 dimensional. Got {} dimensions.'.format(pic.ndim))
ValueError: pic should be 2/3 dimensional. Got 1 dimensions.

I understand that torchvision is mainly meant for image tensors, but it seems like there should not be a difference, since both are just tensors of numerical data. Is there any way I can make this work, or do I have to normalize manually? Thank you for your time and assistance.

There isn’t a difference if you’re using an “image tensor”. The error is raised by ToTensor(), which simply expects a 2- or 3-dimensional input, not by Normalize itself.
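Since the shape check lives in ToTensor(), you can skip it entirely and normalize by hand. A minimal sketch, assuming your get_mean()/get_std() return precomputed dataset-wide scalars (the names and values below are placeholders, not from the original post):

```python
import numpy as np
import torch

# Hypothetical stand-ins for get_mean()/get_std() from the original post:
# precomputed statistics over the whole dataset.
MEAN, STD = 0.5, 2.0

def normalize(sample: np.ndarray) -> torch.Tensor:
    # Convert directly with from_numpy, bypassing ToTensor()'s
    # 2/3-dim image check, then standardize elementwise.
    t = torch.from_numpy(sample)
    return (t - MEAN) / STD

out = normalize(np.array([0.5, 1.2, -0.3, 2.0], dtype=np.float32))
```

You would return normalize(result) from __getitem__ instead of self.transform(result).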

so is there no way to feed in a 1-dim tensor?