RuntimeError about local rank when fine-tuning Llama with torchtune

I am trying to fine-tune the Llama 2 model using the torchtune `tune` CLI with the command:

```shell
tune run lora_finetune_single_device --config llama2/7B_lora_single_device epochs=1
```

I get this error:

```
RuntimeError: The local rank is larger than the number of available GPUs.
```

I am on a Mac with an Intel Core i7 and 32 GB of RAM (no NVIDIA GPU). Please advise.
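For context, here is a quick sanity check (a sketch using the standard `torch.cuda` API) that can confirm whether PyTorch sees any CUDA devices on a given machine; on an Intel Mac with no NVIDIA GPU, it should report that no CUDA device is present:

```python
# Check whether PyTorch can see any CUDA GPUs on this machine.
# On hardware without an NVIDIA GPU (e.g. an Intel Mac), is_available()
# returns False and device_count() returns 0.
import torch

print("CUDA available:", torch.cuda.is_available())
print("CUDA device count:", torch.cuda.device_count())
```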