From 1180273cb23e7e51d64f8fc208c4ccc7d3b3a987 Mon Sep 17 00:00:00 2001
From: lccurious
Date: Sat, 24 Aug 2024 21:07:54 +0800
Subject: [PATCH] fix bm

---
 _posts/2024-02-25-Extend-LLMs-Context-Window.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/_posts/2024-02-25-Extend-LLMs-Context-Window.md b/_posts/2024-02-25-Extend-LLMs-Context-Window.md
index 66c09cd..dbc9724 100644
--- a/_posts/2024-02-25-Extend-LLMs-Context-Window.md
+++ b/_posts/2024-02-25-Extend-LLMs-Context-Window.md
@@ -214,7 +214,7 @@ A explanation can be:
 
 > Consequently, the model tends to dump unnecessary attention values to specific tokens.
 
-> 📌 Extensive research has been done on applying LLMs to lengthy texts, with three main areas of focus: **Length Extrapolation, Context Window Extension, **and **Improving LLMs’s Utilization of Long Text.** While seemingly related, it’s worth nothing that progress in one direction does’t necessarily lead to progress in the other.
+> 📌 Extensive research has been done on applying LLMs to lengthy texts, with three main areas of focus: **Length Extrapolation, Context Window Extension,** and **Improving LLMs’ Utilization of Long Text.** While seemingly related, it’s worth noting that progress in one direction doesn’t necessarily lead to progress in the other.
 > This paper does not expand the attention window size of LLMs or enhance the model’s memory and usage on long texts.
 {: .block-tip }
 