Is there any official PyTorch tutorial about LLMs?

Hello, I’m new to PyTorch.

LLMs and text generation are hot topics right now. I tried looking for official tutorials and examples covering LLMs, LoRA, GPT… on the PyTorch homepage, but there was nothing.
The PyTorch blog has up-to-date information about LLMs, but it is a ‘blog’, not a tutorial:

Perhaps my search for information was flawed.
Are there any official documents, or plans for LLM and generative-AI tutorials?

Thank you very much.

For a lucid, minimal example I would recommend checking out nanoGPT: GitHub - karpathy/nanoGPT: The simplest, fastest repository for training/finetuning medium-sized GPTs.
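If it helps to see what such a model looks like, here is a minimal sketch of a GPT-style block in plain PyTorch (this is not nanoGPT's actual code; the dimensions and hyperparameters are arbitrary placeholders):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalSelfAttention(nn.Module):
    """Multi-head self-attention with a causal mask, as in GPT."""
    def __init__(self, d_model, n_heads, max_len):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)   # fused Q, K, V projection
        self.proj = nn.Linear(d_model, d_model)
        # lower-triangular mask: position i may only attend to positions <= i
        self.register_buffer("mask", torch.tril(torch.ones(max_len, max_len)))

    def forward(self, x):
        B, T, C = x.shape
        q, k, v = self.qkv(x).split(C, dim=2)
        # reshape to (B, n_heads, T, head_dim)
        q = q.view(B, T, self.n_heads, C // self.n_heads).transpose(1, 2)
        k = k.view(B, T, self.n_heads, C // self.n_heads).transpose(1, 2)
        v = v.view(B, T, self.n_heads, C // self.n_heads).transpose(1, 2)
        att = (q @ k.transpose(-2, -1)) / (k.size(-1) ** 0.5)
        att = att.masked_fill(self.mask[:T, :T] == 0, float("-inf"))
        y = F.softmax(att, dim=-1) @ v
        y = y.transpose(1, 2).contiguous().view(B, T, C)
        return self.proj(y)

class Block(nn.Module):
    """Pre-norm transformer block: attention + MLP, each with a residual."""
    def __init__(self, d_model, n_heads, max_len):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = CausalSelfAttention(d_model, n_heads, max_len)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x):
        x = x + self.attn(self.ln1(x))
        return x + self.mlp(self.ln2(x))

block = Block(d_model=64, n_heads=4, max_len=128)
print(block(torch.randn(1, 16, 64)).shape)  # torch.Size([1, 16, 64])
```

nanoGPT is essentially an embedding layer, a stack of blocks like this, and a linear head, trained with cross-entropy on next-token prediction.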


LLMs are generally handled by the Hugging Face Transformers library, which is built on top of PyTorch. You can access its tutorials here:
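For example, loading a pretrained causal LM and generating text takes only a few lines with Transformers (gpt2 is used here just because it is small and ungated; any causal LM checkpoint works the same way):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# gpt2 is small enough to run on CPU
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("PyTorch is", return_tensors="pt")
# sample up to 30 new tokens with nucleus (top-p) sampling
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```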


Llama 2 is an open-source LLM developed by Meta, and Llama serves as the foundation for many other LLMs. That’s why I’m looking for a PyTorch tutorial that covers:

  • Information about hardware requirements.
  • How to build and train the model.
  • How to prepare the dataset.
  • How to run inference.

By the way, thank you so much. I will give Hugging Face a try.

Regarding Llama 2, you can find the model cards here: meta-llama (Meta Llama 2)

It comes in various sizes (7B, 13B, and 70B parameters). What you’ll probably want to do is take a set of pretrained weights and fine-tune LoRA adapters on top of them, assuming your use case is in English.
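A minimal sketch of that with the Hugging Face peft library, assuming you have been granted access to the gated meta-llama/Llama-2-7b-hf checkpoint (the LoRA hyperparameters below are common starting points, not tuned recommendations):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # gated: requires accepting Meta's license
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

# LoRA: freeze the base weights and train small low-rank adapters
# injected into the attention projections
config = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # Llama attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the base model
```

From there you can train with the usual Trainer or a plain PyTorch loop; only the adapter parameters receive gradients.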

Your GPU’s memory will determine which model you can use.
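As a rough back-of-the-envelope check, the weights alone take about 2 bytes per parameter in fp16 (this ignores activations, the KV cache, and optimizer state):

```python
def weight_memory_gb(n_params_billion, bytes_per_param=2):
    """Approximate VRAM just to hold the weights (2 bytes/param in fp16)."""
    return n_params_billion * 1e9 * bytes_per_param / 1024**3

for size in (7, 13, 70):
    print(f"Llama 2 {size}B: ~{weight_memory_gb(size):.0f} GB in fp16")
# 7B: ~13 GB, 13B: ~24 GB, 70B: ~130 GB -- hence quantization (e.g. 4-bit)
# or multiple GPUs for the larger models
```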