Using Hugging Face pipeline on the PyTorch MPS device

Hi,
I want to run the pipeline abstraction for the zero-shot-classification task on the MPS device. Here is my code:

import torch
from transformers import pipeline

mps_device = torch.device("mps")
pipe = pipeline('zero-shot-classification', device=mps_device)
seq = "i love watching the office show"
labels = ['negative', 'positive']
pipe(seq, labels)

The error generated is:

RuntimeError: Placeholder storage has not been allocated on MPS device!

My guess is that this happens because seq is on the CPU and not on MPS. How can I fix this?
Is there a way to send seq to the MPS device so that I can pass it to the pipe for inference?

Thanks

Here is the simplest solution I could come up with (I was getting the same error and happened across your question):

import os
# Must be set before torch is imported, so that ops not yet
# implemented on MPS fall back to the CPU instead of erroring out.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch
from transformers import pipeline

# Use MPS when available, otherwise fall back to CPU
mps_device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

pipe = pipeline('zero-shot-classification', device=mps_device)
if torch.backends.mps.is_available():
    print("Loading model into MPS device...")
    pipe.model = pipe.model.to(mps_device)

seq = "i love watching the office show"
labels = ['negative', 'positive']
pipe(seq, labels)
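To answer the original question directly: the pipeline tokenizes the string internally and places the resulting tensors on the model's device, so you normally only need the model on MPS. But if you ever tokenize manually, tensors are moved with `.to(device)`. Here is a minimal sketch (the `inputs` dict below uses made-up token IDs just to stand in for a tokenizer's output, assuming only that torch is installed):

```python
import torch

# Choose MPS when available, otherwise fall back to CPU
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# A tokenizer returns a dict of CPU tensors; these IDs are placeholders
inputs = {
    "input_ids": torch.tensor([[101, 2026, 102]]),
    "attention_mask": torch.tensor([[1, 1, 1]]),
}

# Move every tensor in the dict to the target device
inputs = {k: v.to(device) for k, v in inputs.items()}
print(inputs["input_ids"].device)  # mps:0 on Apple Silicon, cpu otherwise
```

The model and its inputs must end up on the same device; the "Placeholder storage has not been allocated on MPS device!" error is what you see when they don't.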