From 2833ccfe75a87c416651423e2df16073673a1c27 Mon Sep 17 00:00:00 2001
From: "Ajay Gonepuri (GONAPCORP)" <98868227+HKABIG@users.noreply.github.com>
Date: Sun, 29 Oct 2023 15:11:38 +0530
Subject: [PATCH] fix(docs): correct typos (#662)

* docs: fix typos

* Update CHANGELOG.md

* Update README.md

* Update README.md

* Update index.md
---
 clients/vim/doc/tabby.txt                                 | 2 +-
 website/blog/2023-09-30-stream-laziness-in-tabby/index.md | 2 +-
 website/blog/2023-10-14-seed-round-release-0-3-0-RAG.md   | 2 +-
 website/blog/2023-10-21-incremental-decoding/index.md     | 6 +++---
 website/docs/installation/hugging-face/index.md           | 2 +-
 5 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/clients/vim/doc/tabby.txt b/clients/vim/doc/tabby.txt
index f8638410520..adaee68dc33 100644
--- a/clients/vim/doc/tabby.txt
+++ b/clients/vim/doc/tabby.txt
@@ -78,6 +78,6 @@ Keybindings~
 `` if no completion is shown.
 
 		Trigger completion if not shown. Dismiss the current
-		compleiton if shown.
+		completion if shown.
 
 vim:tw=78:ts=8:noet:ft=help:norl:
diff --git a/website/blog/2023-09-30-stream-laziness-in-tabby/index.md b/website/blog/2023-09-30-stream-laziness-in-tabby/index.md
index b19b91fbf5f..bc70f5e5a7a 100644
--- a/website/blog/2023-09-30-stream-laziness-in-tabby/index.md
+++ b/website/blog/2023-09-30-stream-laziness-in-tabby/index.md
@@ -80,7 +80,7 @@ This is where the concept of stream laziness comes into play. We should perform
 
 ![Cancellation](./cancellation.png)
 
-## How to handle canellation?
+## How to handle cancellation?
 
 The core idea is straightforward: on the server side, we need to listen to the `close` event and check if the connection is still valid before pulling data from the LLM stream.
 
diff --git a/website/blog/2023-10-14-seed-round-release-0-3-0-RAG.md b/website/blog/2023-10-14-seed-round-release-0-3-0-RAG.md
index ebfceb71dc8..ecf54f497ee 100644
--- a/website/blog/2023-10-14-seed-round-release-0-3-0-RAG.md
+++ b/website/blog/2023-10-14-seed-round-release-0-3-0-RAG.md
@@ -15,7 +15,7 @@ Today, Tabby stands out as the most popular and user-friendly solution to enable
 
 ## Release v0.3.0 - Retrieval Augmented Code Completion 🎁
 
-Tabby also comes to a [v0.3.0 release](https://github.com/TabbyML/tabby/releases/tag/v0.3.0), with the support of retrieval-augmented code completion enabled by default. Enhanced by repo-level retrieval, Tabby gets smarter at your codebase and will quickly reference to a related funcion / code example from another file in your repository.
+Tabby also comes to a [v0.3.0 release](https://github.com/TabbyML/tabby/releases/tag/v0.3.0), with the support of retrieval-augmented code completion enabled by default. Enhanced by repo-level retrieval, Tabby gets smarter at your codebase and will quickly reference to a related function / code example from another file in your repository.
 
 A blog series detailing the technical designs of retrieval-augmented code completion will be published soon. Stay tuned!🔔
 
diff --git a/website/blog/2023-10-21-incremental-decoding/index.md b/website/blog/2023-10-21-incremental-decoding/index.md
index 088e57b755e..26e9dce37df 100644
--- a/website/blog/2023-10-21-incremental-decoding/index.md
+++ b/website/blog/2023-10-21-incremental-decoding/index.md
@@ -7,7 +7,7 @@ image: ./twitter-decoding.png
 ---
 
 # Decode the Decoding in Tabby
 
-In the context of the Transformer model, which is widely used across LLMs, ***decoding*** refers to the process of generating an output sequence from an encoded input. Tabby recenty [implemented ***incremental decoding***](https://github.com/TabbyML/tabby/pull/491) as part of the greedy search. This blog will explain our thoughts behind this 🛠️💡.
+In the context of the Transformer model, which is widely used across LLMs, ***decoding*** refers to the process of generating an output sequence from an encoded input. Tabby recently [implemented ***incremental decoding***](https://github.com/TabbyML/tabby/pull/491) as part of the greedy search. This blog will explain our thoughts behind this 🛠️💡.
 
 ## Common Decoding Methods
 
@@ -21,7 +21,7 @@ numbers = [1, 2, 3, 4, 5]
 evens = [x for x in numbers
 ```
 
-To simplify the scenario, we assume that the language model maintains a probaility distribution as shown below,
+To simplify the scenario, we assume that the language model maintains a probability distribution as shown below,
 
 ![probability](./probability.png)
 
@@ -67,6 +67,6 @@ In the case above, the final decoded string would be `" he llo"` with an awkward
 Incremental decoding: ......, 207, 211 -> "......[ hello]" ✅
 ```
 
-For interested folks, you can refer to Tabby's exact implementation in `IncrementalDecoding` funcion in [`creates/tabby-inference/src/decoding.rs`](https://github.com/TabbyML/tabby/pull/491).
+For interested folks, you can refer to Tabby's exact implementation in `IncrementalDecoding` function in [`creates/tabby-inference/src/decoding.rs`](https://github.com/TabbyML/tabby/pull/491).
 
 Have you found our new decoding methods effective? Share your thoughts with us in our [Slack](https://join.slack.com/t/tabbyml/shared_invite/zt-22thejc0z-7ePKeWNCHPX31pEtnT4oYQ) channel 🌍😊!
diff --git a/website/docs/installation/hugging-face/index.md b/website/docs/installation/hugging-face/index.md
index e445f3b1659..702c9f2f543 100644
--- a/website/docs/installation/hugging-face/index.md
+++ b/website/docs/installation/hugging-face/index.md
@@ -11,7 +11,7 @@ This tutorial is now also available on [Hugging Face](https://huggingface.co/doc
 
 ## Your first Tabby Space
 
-In this section, you will learn how to deploy a Tabby Space and use it for yourself or your orgnization.
+In this section, you will learn how to deploy a Tabby Space and use it for yourself or your organization.
 
 ### Deploy Tabby on Spaces