Running Llama 2 locally on my device (MacBook Pro, Intel)

Hello, I am trying to run Llama 2 with the 70B-chat model, but I receive a PyTorch error, or rather a warning:

(textgen) maric@MacBook-Pro-7 llama-main % torchrun --nproc_per_node 8 example_chat_completion.py \
    --ckpt_dir ~/llama-main/llama-2-70b-chat \
    --tokenizer_path ~/llama-main/tokenizer.model \
    --max_seq_len 512 --max_batch_size 6
[2023-10-28 14:44:25,029] torch.distributed.elastic.multiprocessing.redirects: [WARNING] NOTE: Redirects are currently not supported in Windows or MacOs.
[2023-10-28 14:44:25,065] torch.distributed.run: [WARNING]
[2023-10-28 14:44:25,065] torch.distributed.run: [WARNING] *****************************************
[2023-10-28 14:44:25,065] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2023-10-28 14:44:25,065] torch.distributed.run: [WARNING] *****************************************
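For context, here is a minimal sanity check I put together to see whether torch.distributed can initialize at all on this machine. This is just a sketch assuming the CPU-only gloo backend (the Intel Mac has no CUDA device), and the file name sanity_check_dist.py is my own invention:

# sanity_check_dist.py, a minimal check that torch.distributed initializes on macOS
# run with: torchrun --nproc_per_node 2 sanity_check_dist.py
import torch
import torch.distributed as dist

# torchrun sets RANK, WORLD_SIZE, MASTER_ADDR and MASTER_PORT in the environment,
# and init_process_group picks them up; gloo is the CPU backend, since CUDA is
# not available on this Mac.
dist.init_process_group(backend="gloo")

rank = dist.get_rank()
world = dist.get_world_size()
print(f"rank {rank}/{world} up, cuda available: {torch.cuda.is_available()}")

# a tiny all-reduce (default op is SUM) to confirm the worker processes can talk
t = torch.ones(1)
dist.all_reduce(t)
print(f"rank {rank}: all_reduce result {t.item()} (expected {world})")

dist.destroy_process_group()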

Has anyone faced the same issue, and how did you solve it?
I wanted to run Llama 2 locally, but I keep running into error after error.