add BERT experiment results (rasbt#333)
* add BERT experiment results

* cleanup

* formatting
rasbt authored Aug 23, 2024
1 parent ed25778 commit 5643620
Showing 8 changed files with 626 additions and 116 deletions.
38 changes: 19 additions & 19 deletions ch06/02_bonus_additional-experiments/README.md
@@ -18,7 +18,7 @@ For example,
| 5 | gpt2-small (124M) | pretrained | last | all | longest train ex. (120) | 99.62% | 96.64% | 96.67% | 0.69 min | A100 |
| 6 | gpt2-medium (355M) | pretrained | last | last_block | longest train ex. (120) | 87.50% | 91.28% | 84.67% | 0.75 min | A100 |
| 7 | gpt2-large (774M) | pretrained | last | last_block | longest train ex. (120) | 99.52% | 98.66% | 96.67% | 1.50 min | A100 |
| 8 | gpt2-xl (1558M) | pretrained | last | last_block | longest train ex. (120) | 99.81% | 99.33% | 98.33% | 2.83 min | A100 |
| 8 | gpt2-xl (1558M) | pretrained | last | last_block | longest train ex. (120) | 99.81% | 99.81% | 98.33% | 2.83 min | A100 |
| 9 | gpt2-xl (1558M) | pretrained | last | all | longest train ex. (120) | 100.00% | 98.66% | 98.67% | 8.12 min | A100 |
| 10 | gpt2-small (124M) | random | last | all | longest train ex. (120) | 100.00% | 96.64% | 93.67% | 0.69 min | A100 |
| 11 | gpt2-small (124M) | pretrained | last | LoRA | longest train ex. (120) | 100.00% | 97.32% | 96.67% | 0.75 min | A100 |
@@ -35,23 +35,23 @@ For example,

You can use the following code to reproduce the experiments:

- Row 1: `python additional-experiments.py`
- Row 2: `python additional-experiments.py --trainable_token_pos first`
- Row 3: `python additional-experiments.py --trainable_layers last_layer`
- Row 4: `python additional-experiments.py --trainable_layers last_two_blocks`
- Row 5: `python additional-experiments.py --trainable_layers all`
- Row 6: `python additional-experiments.py --model_size "gpt2-medium (355M)"`
- Row 7: `python additional-experiments.py --model_size "gpt2-large (774M)"`
- Row 8: `python additional-experiments.py --model_size "gpt2-xl (1558M)"`
- Row 9: `python additional-experiments.py --model_size "gpt2-xl (1558M)" --trainable_layers all`
- Row 10: `python additional-experiments.py --weights random --trainable_layers all`
- Row 11: `python additional-experiments.py --trainable_layers lora --lora_rank 16 --lora_alpha 16`
- Row 12: `python additional-experiments.py --trainable_layers lora --lora_rank 16 --lora_alpha 8 --model_size "gpt2-xl (1558M)"`
- Row 13: `python additional-experiments.py --context_length "model_context_length"`
- Row 14: `python additional-experiments.py --no_padding --batch_size 1`
- Row 15: `python additional-experiments.py --no_padding --batch_size 1 --accumulation_steps 8`
- Row 16: `python additional-experiments.py --disable_causal_mask`
- Row 17: `python additional-experiments.py --ignore_index 50256`
- Row 1: `python additional_experiments.py`
- Row 2: `python additional_experiments.py --trainable_token_pos first`
- Row 3: `python additional_experiments.py --trainable_layers last_layer`
- Row 4: `python additional_experiments.py --trainable_layers last_two_blocks`
- Row 5: `python additional_experiments.py --trainable_layers all`
- Row 6: `python additional_experiments.py --model_size "gpt2-medium (355M)"`
- Row 7: `python additional_experiments.py --model_size "gpt2-large (774M)"`
- Row 8: `python additional_experiments.py --model_size "gpt2-xl (1558M)"`
- Row 9: `python additional_experiments.py --model_size "gpt2-xl (1558M)" --trainable_layers all`
- Row 10: `python additional_experiments.py --weights random --trainable_layers all`
- Row 11: `python additional_experiments.py --trainable_layers lora --lora_rank 16 --lora_alpha 16`
- Row 12: `python additional_experiments.py --trainable_layers lora --lora_rank 16 --lora_alpha 8 --model_size "gpt2-xl (1558M)"`
- Row 13: `python additional_experiments.py --context_length "model_context_length"`
- Row 14: `python additional_experiments.py --no_padding --batch_size 1`
- Row 15: `python additional_experiments.py --no_padding --batch_size 1 --accumulation_steps 8`
- Row 16: `python additional_experiments.py --disable_causal_mask`
- Row 17: `python additional_experiments.py --ignore_index 50256`
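
If you want to run several of these configurations back to back, a small driver script can help. Below is a minimal sketch (not part of the repository) that assumes `additional_experiments.py` sits in the current directory and simply invokes it with the flags listed above; extend the list with any other row's flags.

```python
# Hypothetical convenience script, not included in the repo.
import subprocess

experiment_flags = [
    [],                                         # Row 1: default settings
    ["--trainable_token_pos", "first"],         # Row 2
    ["--trainable_layers", "last_two_blocks"],  # Row 4
    ["--trainable_layers", "all"],              # Row 5
]

for flags in experiment_flags:
    cmd = ["python", "additional_experiments.py", *flags]
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)  # stop early if a run fails
```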

I've kept the LLM and dataset small on purpose, so you can run the training on a regular laptop like a MacBook Air M3 in about 15 minutes (for the default setting) in case you don't have access to a GPU.

@@ -60,7 +60,7 @@ I've kept the LLM and dataset small on purpose, so you can run the training on a
### Interpretation

1. **Training the Last vs. First Output Token Position (Row 1 vs. 2)**: Training the last output token position results in substantially better performance compared to the first. This improvement is expected due to the causal self-attention mask (a short illustration follows this list).
2. **Training the Last Transformer Block vs. Last Layer (Row 1 vs. 3)**: Training the entire last transformer block also results in substantially better results than training only the last layer.
2. **Training the Last Transformer Block vs. Last Layer (Row 1 vs. 3)**: Training the entire last transformer block also yields substantially better results than training only the last layer.
3. **Training the Last vs. the Last Two Transformer Blocks (Row 1 vs. 4)**: Training the last two transformer blocks instead of only the last block results in a noticeable 3.33% accuracy boost.
4. **Training the Last Transformer Block vs. All Layers (Row 1 vs. 5)**: Training all layers shows a modest improvement of ~2% over training just the last transformer block, but it takes almost three times as long to train. Also, it does not perform as well as training only the last two of the 12 transformer blocks.
5. **Using Larger Pretrained Models (Row 1 vs. 6, and Row 1 vs. 7 and 8)**: Employing a 3x larger pretrained model leads to worse results. However, using a 5x larger model improves performance compared to the initial model, as anticipated. Similarly, the 12x larger model improves the predictive performance even further. (Perhaps the medium model was not pretrained as well, or the particular finetuning configuration does not work as well for this model.)
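
Regarding interpretation point 1 above: the advantage of the last output token follows from the causal attention mask, since only the last position attends to the entire input sequence, whereas the first position only ever sees itself. A standalone PyTorch illustration (not taken from the repository):

```python
import torch

seq_len = 6
# Lower-triangular causal mask: position i may only attend to positions <= i
causal_mask = torch.tril(torch.ones(seq_len, seq_len)).bool()

print("Positions visible to the first token:", causal_mask[0].sum().item())   # 1
print("Positions visible to the last token: ", causal_mask[-1].sum().item())  # 6
```
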
108 changes: 44 additions & 64 deletions ch06/03_bonus_imdb-classification/README.md
@@ -17,111 +17,91 @@ The code uses the 50k movie reviews from IMDb ([dataset source](https://ai
Run the following code to create the `train.csv`, `validation.csv`, and `test.csv` datasets:

```bash
python download-prepare-dataset.py
python download_prepare_dataset.py
```
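
Once the script finishes, a quick way to confirm that the three CSV files were written is to inspect their shapes and column names (a small sanity-check snippet, assuming pandas is installed; it makes no assumption about the column layout):

```python
import pandas as pd

for split in ("train", "validation", "test"):
    df = pd.read_csv(f"{split}.csv")
    print(f"{split}.csv: {df.shape[0]} rows, columns = {list(df.columns)}")
```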


 
## Step 3: Run Models

The 124M GPT-2 model used in the main chapter, starting from the pretrained weights and only training the last transformer block plus output layers:
The 124M GPT-2 model used in the main chapter, starting with pretrained weights, and finetuning all weights:

```bash
python train-gpt.py
python train_gpt.py --trainable_layers "all" --num_epochs 1
```

```
Ep 1 (Step 000000): Train loss 2.829, Val loss 3.433
Ep 1 (Step 000050): Train loss 1.440, Val loss 1.669
Ep 1 (Step 000100): Train loss 0.879, Val loss 1.037
Ep 1 (Step 000150): Train loss 0.838, Val loss 0.866
Ep 1 (Step 000000): Train loss 3.706, Val loss 3.853
Ep 1 (Step 000050): Train loss 0.682, Val loss 0.706
...
Ep 1 (Step 004300): Train loss 0.174, Val loss 0.202
Ep 1 (Step 004350): Train loss 0.309, Val loss 0.190
Training accuracy: 88.75% | Validation accuracy: 91.25%
Ep 2 (Step 004400): Train loss 0.263, Val loss 0.205
Ep 2 (Step 004450): Train loss 0.226, Val loss 0.188
...
Ep 2 (Step 008650): Train loss 0.189, Val loss 0.171
Ep 2 (Step 008700): Train loss 0.225, Val loss 0.179
Training accuracy: 85.00% | Validation accuracy: 90.62%
Ep 3 (Step 008750): Train loss 0.206, Val loss 0.187
Ep 3 (Step 008800): Train loss 0.198, Val loss 0.172
...
Training accuracy: 96.88% | Validation accuracy: 90.62%
Training completed in 18.62 minutes.
Ep 1 (Step 004300): Train loss 0.199, Val loss 0.285
Ep 1 (Step 004350): Train loss 0.188, Val loss 0.208
Training accuracy: 95.62% | Validation accuracy: 95.00%
Training completed in 9.48 minutes.
Evaluating on the full datasets ...
Training accuracy: 93.66%
Validation accuracy: 90.02%
Test accuracy: 89.96%
Training accuracy: 95.64%
Validation accuracy: 92.32%
Test accuracy: 91.88%
```
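
The `--trainable_layers` flag controls which parameters receive gradient updates during finetuning. The script's internals are not reproduced here, but the general pattern can be sketched as follows (a generic PyTorch illustration; the attribute names `trf_blocks` and `out_head` are hypothetical, not necessarily the repository's):

```python
import torch.nn as nn

def set_trainable(model: nn.Module, mode: str = "all") -> None:
    # Freeze everything first, then selectively unfreeze.
    for param in model.parameters():
        param.requires_grad = False

    if mode == "all":
        for param in model.parameters():
            param.requires_grad = True
    elif mode == "last_block":
        # Hypothetical attribute names; a real model may organize its
        # transformer blocks and classification head differently.
        for param in model.trf_blocks[-1].parameters():
            param.requires_grad = True
        for param in model.out_head.parameters():
            param.requires_grad = True
    else:
        raise ValueError(f"Unknown mode: {mode}")
```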


<br>

---

A 66M parameter encoder-style [DistilBERT](https://arxiv.org/abs/1910.01108) model (distilled down from a 340M parameter BERT model), starting from the pretrained weights and only training the last transformer block plus output layers:
<br>

A 340M parameter encoder-style [BERT](https://arxiv.org/abs/1810.04805) model:

```bash
python train-bert-hf.py
python train_bert_hf.py --trainable_layers "all" --num_epochs 1 --model "bert"
```

```
Ep 1 (Step 000000): Train loss 0.693, Val loss 0.697
Ep 1 (Step 000050): Train loss 0.532, Val loss 0.596
Ep 1 (Step 000100): Train loss 0.431, Val loss 0.446
...
Ep 1 (Step 004300): Train loss 0.234, Val loss 0.351
Ep 1 (Step 004350): Train loss 0.190, Val loss 0.222
Training accuracy: 88.75% | Validation accuracy: 88.12%
Ep 2 (Step 004400): Train loss 0.258, Val loss 0.270
Ep 2 (Step 004450): Train loss 0.204, Val loss 0.295
Ep 1 (Step 000000): Train loss 0.848, Val loss 0.775
Ep 1 (Step 000050): Train loss 0.655, Val loss 0.682
...
Ep 2 (Step 008650): Train loss 0.088, Val loss 0.246
Ep 2 (Step 008700): Train loss 0.084, Val loss 0.247
Training accuracy: 98.75% | Validation accuracy: 90.62%
Ep 3 (Step 008750): Train loss 0.067, Val loss 0.209
Ep 3 (Step 008800): Train loss 0.059, Val loss 0.256
...
Ep 3 (Step 013050): Train loss 0.068, Val loss 0.280
Ep 3 (Step 013100): Train loss 0.064, Val loss 0.306
Training accuracy: 99.38% | Validation accuracy: 87.50%
Training completed in 16.70 minutes.
Ep 1 (Step 004300): Train loss 0.146, Val loss 0.318
Ep 1 (Step 004350): Train loss 0.204, Val loss 0.217
Training accuracy: 92.50% | Validation accuracy: 88.75%
Training completed in 7.65 minutes.
Evaluating on the full datasets ...
Training accuracy: 98.87%
Validation accuracy: 90.98%
Test accuracy: 90.81%
Training accuracy: 94.35%
Validation accuracy: 90.74%
Test accuracy: 90.89%
```
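
For context, encoder-style models like BERT attach a classification head on top of the pooled `[CLS]` representation instead of a causal language-modeling head. A minimal, self-contained sketch of how such a classifier is typically instantiated with Hugging Face `transformers` (independent of `train_bert_hf.py`; the `bert-large-uncased` checkpoint, at roughly 340M parameters, is an assumption):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-large-uncased"  # assumed ~340M-parameter checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

inputs = tokenizer("A surprisingly touching film.", return_tensors="pt")
logits = model(**inputs).logits  # shape [1, 2]: one score per sentiment class
print("Predicted class:", logits.argmax(dim=-1).item())
```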

---
<br>

A 355M parameter encoder-style [RoBERTa](https://arxiv.org/abs/1907.11692) model, starting from the pretrained weights and only training the last transformer block plus output layers:
---

<br>

```bash
python train-bert-hf.py --bert_model roberta
```
A 66M parameter encoder-style [DistilBERT](https://arxiv.org/abs/1910.01108) model (distilled down from a 340M parameter BERT model), starting from the pretrained weights and finetuning all weights:

---

A scikit-learn Logistic Regression model as a baseline.

```bash
python train-sklearn-logreg.py
python train_bert_hf.py --trainable_layers "all" --num_epochs 1 --model "distilbert"
```

```
Dummy classifier:
Training Accuracy: 50.01%
Validation Accuracy: 50.14%
Test Accuracy: 49.91%
Ep 1 (Step 000000): Train loss 0.693, Val loss 0.688
Ep 1 (Step 000050): Train loss 0.452, Val loss 0.460
...
Ep 1 (Step 004300): Train loss 0.179, Val loss 0.272
Ep 1 (Step 004350): Train loss 0.199, Val loss 0.182
Training accuracy: 95.62% | Validation accuracy: 91.25%
Training completed in 4.26 minutes.
Evaluating on the full datasets ...
Logistic regression classifier:
Training Accuracy: 99.80%
Validation Accuracy: 88.60%
Test Accuracy: 88.84%
Training accuracy: 95.30%
Validation accuracy: 91.12%
Test accuracy: 91.40%
```
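
For comparison, the scikit-learn logistic regression baseline mentioned above is conceptually a bag-of-words (TF-IDF) representation fed into `LogisticRegression`. A hypothetical sketch of such a baseline (not the repository's `train-sklearn-logreg.py`; the `text` and `label` column names are assumptions, adjust to the actual CSV headers):

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_df = pd.read_csv("train.csv")
test_df = pd.read_csv("test.csv")

# Column names are assumptions; adjust to the actual CSV headers.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_df["text"], train_df["label"])
print("Test accuracy:", clf.score(test_df["text"], test_df["label"]))
```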
