AutoModel Notebooks #12013
base: main
@@ -0,0 +1,288 @@ | ||
{ | ||
"cells": [ | ||
{ | ||
"cell_type": "markdown", | ||
"id": "a45b25c3-08b2-4a7e-b0cd-67293f15c307", | ||
"metadata": {}, | ||
"source": [ | ||
"# Optimizing Hugging Face Models with Parameter Efficient Fine-Tuning (PEFT)\n", | ||
"\n", | ||
"NeMo 2.0 enables users to perform Supervised Fine-Tuning (SFT) and Parameter Efficient Fine-Tuning (PEFT) using Hugging Face (HF) large language models (LLMs). It utilizes HF's auto classes to download and load transformer models, and wraps these models as Lightning modules to execute tasks like SFT and PEFT. The goal of this feature is to provide day-0 support for the models available in HF.\n", | ||
"\n", | ||
"[AutoModel](https://huggingface.co/docs/transformers/en/model_doc/auto) is the generic model class that is instantiated as one of the model classes from the library when created with the from_pretrained() class method. There are many AutoModel classes in HF, each covering a specific group of transformer model architectures. The AutoModel class primarily loads the base transformer model that converts embeddings to hidden states. For example, a specific AutoModel class like AutoModelForCausalLM includes a causal language modeling head on top of the base model.\n", | ||
"\n", | ||
"NeMo 2.0 includes wrapper classes for these HF auto model classes, making them runnable in NeMo pretraining, SFT, and PEFT workflows by converting them into Lightning modules. Due to the large number of AutoModel classes, NeMo 2.0 currently includes only the widely used auto classes.\n", | ||
"\n", | ||
"In this notebook, we will demonstrate a PEFT training example on how to perform PEFT with Hugging Face LLMs to make the models more performant on a specific task. We will focus on the models that can be loaded using the HF's `AutoModelForCausalLM` class.\n", | ||
"\n", | ||
"<font color='red'>NOTE:</font> Due to the limitations of the Jupyter Notebook, the example in this notebook works only on a single GPU. However, if you move the code to a script, you can run it on multiple GPUs. If you are interested in running a multi-GPU example using the Jupyter Notebook, please check the SFT example in NeMo-Run." | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "63a50bad-f356-4076-8c5c-66b4481029dc", | ||
"metadata": {}, | ||
"source": [ | ||
"## Step 1: Import Modules and Prepare the Dataset" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "28e16913-6a08-4ad8-835e-311fbb5af01d", | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"from functools import partial\n", | ||
"\n", | ||
"import fiddle as fdl\n", | ||
"import lightning.pytorch as pl\n", | ||
"from lightning.pytorch.loggers import WandbLogger\n", | ||
"from torch.utils.data import DataLoader\n", | ||
"\n", | ||
"from nemo import lightning as nl\n", | ||
"from nemo.collections import llm\n", | ||
"from nemo.lightning.pytorch.callbacks import JitConfig, JitTransform\n", | ||
"from nemo.lightning import NeMoLogger" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "5cfe3c7d-9d36-47d2-9107-361025d175a0", | ||
"metadata": {}, | ||
"source": [ | ||
"We will use the [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) dataset which is a reading comprehension dataset, consisting of questions and answers pairs. The SquadDataModule in NeMo 2.0 provides the dataloaders for the SQuAD dataset. " | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "3fc6a132-688e-4ad3-94ae-557e57ab77cb", | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"class SquadDataModuleWithPthDataloader(llm.SquadDataModule):\n", | ||
" \"\"\"Creates a squad dataset with a PT dataloader\"\"\"\n", | ||
"\n", | ||
" def _create_dataloader(self, dataset, mode, **kwargs) -> DataLoader:\n", | ||
" return DataLoader(\n", | ||
" dataset,\n", | ||
" num_workers=self.num_workers,\n", | ||
" pin_memory=self.pin_memory,\n", | ||
" persistent_workers=self.persistent_workers,\n", | ||
" collate_fn=dataset.collate_fn,\n", | ||
" batch_size=self.micro_batch_size,\n", | ||
" **kwargs,\n", | ||
" )\n", | ||
"\n", | ||
"\n", | ||
"def squad(tokenizer, mbs=1, gbs=2) -> pl.LightningDataModule:\n", | ||
" \"\"\"Instantiates a SquadDataModuleWithPthDataloader and return it\n", | ||
"\n", | ||
" Args:\n", | ||
" tokenizer (AutoTokenizer): the tokenizer to use\n", | ||
"\n", | ||
" Returns:\n", | ||
" pl.LightningDataModule: the dataset to train with.\n", | ||
" \"\"\"\n", | ||
" return SquadDataModuleWithPthDataloader(\n", | ||
" tokenizer=tokenizer,\n", | ||
" seq_length=512,\n", | ||
" micro_batch_size=mbs,\n", | ||
" global_batch_size=gbs,\n", | ||
" num_workers=0,\n", | ||
" dataset_kwargs={\n", | ||
" \"sanity_check_dist_workers\": False,\n", | ||
" \"get_attention_mask_from_fusion\": True,\n", | ||
" },\n", | ||
" )" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "a23943ee-ffa1-497d-a395-3e4767271341", | ||
"metadata": {}, | ||
"source": [ | ||
"## Step 2: Set Parameters and Start the PEFT with a HF Model\n", | ||
"\n", | ||
"Now, we will set some of the important variables, including the HF model name, maximum steps, number of GPUs, etc. You can find the details of these parameters below.\n", | ||
"- `model_name`: Pre-trained HF model or path of a HF model.\n", | ||
"- `strategy`: Distributed training strategy such as DDP, FSDP, etc. \n", | ||
"- `devices`: Number of GPUs to be used in the training.\n", | ||
"- `max_steps`: Number of steps in the training.\n", | ||
"- `wandb_project`: wandb project.\n", | ||
"- `use_torch_jit`: Enable torch jit or not.\n", | ||
"- `ckpt_folder`: Path for the checkpoins.\n", | ||
"\n", | ||
"All popular models, including Llama, GPT, Gemma, Mistral, Phi, and Qwen, are supported. After running this workflow, please select another HF model and rerun the notebook with that model. Ensure the chosen model fits within your GPU(s) memory." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "3780a047-febb-4d97-a59a-99d8ee036332", | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# In order to use the models like Llama, Gemma, you need to ask for permission on the HF model page and then pass the HF_TOKEN in the next cell.\n", | ||
"# model_name = \"google/gemma-2b\" # HF model name. This can be the path of the downloaded model as well.\n", | ||
"model_name = \"meta-llama/Llama-3.2-1B\" # HF model name. This can be the path of the downloaded model as well.\n", | ||
"strategy = \"auto\" # Distributed training strategy such as DDP, FSDP2, etc.\n", | ||
"max_steps = 100 # Number of steps in the training loop.\n", | ||
"accelerator = \"gpu\"\n", | ||
"wandb_project = None\n", | ||
"use_torch_jit = False # torch jit can be enabled.\n", | ||
"ckpt_folder=\"/opt/checkpoints/automodel_experiments/\" # Path for saving the checkpoint." | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "6966670b-2097-47c0-95f2-edaafab0e33f", | ||
"metadata": {}, | ||
"source": [ | ||
"Some models have gated access. If you are using one of those models, you will need to obtain access first. Then, set your HF Token by running the cell below." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "439a3c6a-8718-4b49-acdb-e7f59db38f59", | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"import os\n", | ||
"os.environ[\"HF_TOKEN\"] ='<HF_TOKEN>'" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "7cd65e5e-93fa-4ea0-b89d-2f48431b725c", | ||
"metadata": {}, | ||
"source": [ | ||
"After setting some parameters, we can start the PEFT training workflow. Although the PEFT workflow with HF models/checkpoints differs slightly from workflows with NeMo models/checkpoints, we still use the same NeMo 2.0 API. The main difference is the model we pass into the fine-tune API.\n" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "d3578630-05b7-4a8c-8b5d-a7d9e847f17b", | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"wandb = None\n", | ||
"if wandb_project is not None:\n", | ||
" model = '_'.join(args.model.split('/')[-2:])\n", | ||
" wandb = WandbLogger(\n", | ||
" project=wandb_project,\n", | ||
" name=f'{model}_dev{devices}_strat_{strategy}',\n", | ||
" )\n", | ||
"\n", | ||
"callbacks = []\n", | ||
"if use_torch_jit:\n", | ||
" jit_config = JitConfig(use_torch=True, torch_kwargs={'dynamic': False}, use_thunder=False)\n", | ||
" callbacks = [JitTransform(jit_config)]\n", | ||
"\n", | ||
"if strategy == 'fsdp2':\n", | ||
" astrategy = nl.FSDP2Strategy(data_parallel_size=devices, tensor_parallel_size=1)\n", | ||
"\n", | ||
"llm.api.finetune(\n", | ||
" model=llm.HFAutoModelForCausalLM(model_name=model_name),\n", | ||
" data=squad(llm.HFAutoModelForCausalLM.configure_tokenizer(model_name), gbs=1),\n", | ||
" trainer=nl.Trainer(\n", | ||
" devices=1,\n", | ||
" max_steps=max_steps,\n", | ||
" accelerator=\"gpu\",\n", | ||
" strategy=strategy,\n", | ||
" log_every_n_steps=1,\n", | ||
" limit_val_batches=0.0,\n", | ||
" num_sanity_val_steps=0,\n", | ||
" accumulate_grad_batches=1,\n", | ||
" gradient_clip_val=1.0,\n", | ||
" use_distributed_sampler=False,\n", | ||
" logger=wandb,\n", | ||
" callbacks=callbacks,\n", | ||
" precision=\"bf16\",\n", | ||
" ),\n", | ||
" optim=fdl.build(llm.adam.pytorch_adam_with_flat_lr(lr=1e-5)),\n", | ||
" log=NeMoLogger(log_dir=ckpt_folder, use_datetime_version=False),\n", | ||
" peft=llm.peft.LoRA(\n", | ||
" target_modules=['*_proj'],\n", | ||
" dim=8,\n", | ||
" ),\n", | ||
")" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"id": "67e6e4d4-8e0c-4507-b386-22c3d63097c1", | ||
"metadata": {}, | ||
"source": [ | ||
"## Step 3: Generate Output with the HF Pipeline\n", | ||
"\n", | ||
"Once the PEFT training is completed, you can generate output using HF's APIs to see the quality of the outputs. Once the PEFT workflow is completed, PEFT checkpoint is saved under the folder defined in `ckpt_folder` variable. After the first run, the new checkpoint will be saved a folder with the name `default/checkpoints/default--None=0.0000-epoch=0-consumed_samples=0/weights/`. If you run this notebook, you will see multiple checkpoints in the same place." | ||
There was a problem hiding this comment. Choose a reason for hiding this commentThe reason will be displayed to describe this comment to others. Learn more. "Once the PEFT training is completed, you can generate output using HF's APIs to see the quality of the outputs. The PEFT checkpoint will be saved in a folder defined by the |
||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "0a118868-8c6e-44ad-9b2d-3be3994a093b", | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"import torch\n", | ||
"import os\n", | ||
"from pathlib import Path\n", | ||
"from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer\n", | ||
"from peft import PeftModel\n", | ||
"\n", | ||
"model_name = \"meta-llama/Llama-3.2-1B\"\n", | ||
"ckpt_folder=\"/opt/checkpoints/automodel_experiments/\"\n", | ||
"peft_checkpoint = Path(ckpt_folder) / \"default/checkpoints/default--None=0.0000-epoch=0-consumed_samples=0/weights/\"\n", | ||
"\n", | ||
"model = AutoModelForCausalLM.from_pretrained(model_name)\n", | ||
"model.load_adapter(peft_checkpoint, adapter_name=\"adapter_1\")\n", | ||
"model.set_adapter(\"adapter_1\")\n", | ||
"\n", | ||
"pipe = pipeline(\n", | ||
" \"text-generation\",\n", | ||
" model=model,\n", | ||
" tokenizer=AutoTokenizer.from_pretrained(model_name),\n", | ||
" torch_dtype=torch.bfloat16,\n", | ||
" device_map=\"auto\",\n", | ||
" device=0,\n", | ||
")\n", | ||
"\n", | ||
"pipe(\"The key to life is\")" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"id": "e98418e8-1d94-4167-be81-5776a5231e68", | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [] | ||
} | ||
], | ||
"metadata": { | ||
"kernelspec": { | ||
"display_name": "Python 3 (ipykernel)", | ||
"language": "python", | ||
"name": "python3" | ||
}, | ||
"language_info": { | ||
"codemirror_mode": { | ||
"name": "ipython", | ||
"version": 3 | ||
}, | ||
"file_extension": ".py", | ||
"mimetype": "text/x-python", | ||
"name": "python", | ||
"nbconvert_exporter": "python", | ||
"pygments_lexer": "ipython3", | ||
"version": "3.12.3" | ||
} | ||
}, | ||
"nbformat": 4, | ||
"nbformat_minor": 5 | ||
} |