diff --git a/README.md b/README.md index 044077098..b9f7e32e5 100644 --- a/README.md +++ b/README.md @@ -6,7 +6,7 @@
> _Where there is no guidance, a model fails, but in an abundance of instructions there is safety._ -_\- GPT 11:14_ +_\- GPT 11:14_ Guidance is a language that enables you to control modern language models more easily and efficiently. Guidance programs allow you to interleave generation, prompting, and logical control into a single continuous flow that matches how the language model actually processes the text. Simple output structures like [Chain of Thought](https://arxiv.org/abs/2201.11903) and its many variants (e.g. [ART](https://arxiv.org/abs/2303.09014), [Auto-CoT](https://arxiv.org/abs/2210.03493), etc.) have been shown to improve LLM performance. The advent of more powerful LLMs like [GPT-4](https://arxiv.org/abs/2303.12712) allows for even richer structure, and `guidance` makes that structure easier and cheaper. @@ -19,49 +19,11 @@ Features: - [x] Support for role-based chat models (e.g. [ChatGPT](https://beta.openai.com/docs/guides/chat)). - [x] Easy integration with HuggingFace models, including [guidance acceleration](notebooks/guidance_acceleration.ipynb) for speedups over standard prompting, [token healing](notebooks/token_healing.ipynb) to optimize prompt boundaries, and [regex pattern guides](notebooks/pattern_guides.ipynb) to enforce formats. -# Install +## Install ```python pip install guidance ``` - - - ## Rich output structure example ([notebook](notebooks/anachronism.ipynb)) @@ -208,9 +170,9 @@ out = create_plan( This prompt/program is a bit more complicated, but we are basically going through 3 steps: 1. Generate a few options for how to accomplish the goal. Note that we generate with `n=5`, such that each option is a separate generation (and is not impacted by the other options). We set `temperature=1` to encourage diversity. 2. Generate pros and cons for each option, and select the best one. We set `temperature=0` to encourage the model to be more precise. -3. Generate a plan for the best option, and ask the model to elaborate on it. 
Notice that steps 1 and 2 were `hidden`, and thus GPT-4 does not see them. This is a simple way to make the model focus on the current step. +3. Generate a plan for the best option, and ask the model to elaborate on it. Notice that steps 1 and 2 were `hidden`, which means GPT-4 does not see them when generating content that comes later (in this case, when generating the plan). This is a simple way to make the model focus on the current step. -Since steps 1 and 2 are hidden, they do not appear on the generated output, but we can print them: +Since steps 1 and 2 are hidden, they do not appear in the generated output (except briefly while streaming), but we can print the variables that these steps generated: ```python print('\n'.join(['Option %d: %s' % (i, x) for i, x in enumerate(out['options'])])) ```
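The three-step flow above can be sketched in plain Python with a stubbed generator standing in for the model calls. This is an illustrative sketch only, not the `guidance` API: the `generate` helper, `goal`, and the canned return values are all hypothetical, and the point is just the control flow (sample options independently, weigh them, then build the final prompt from only the chosen option, keeping steps 1 and 2 hidden):

```python
def generate(prompt, n=1, temperature=0.0):
    # Stand-in for an LLM call; returns canned text so the sketch is runnable.
    return [f"{prompt} -> sample {i}" for i in range(n)]

goal = "read more books"

# Step 1: sample several candidate options as separate generations,
# with a high temperature to encourage diversity.
options = generate(f"Options for: {goal}", n=5, temperature=1.0)

# Step 2: weigh pros and cons for each option (temperature 0 for precision)
# and pick the best one. The choice here is a stand-in for the model's pick.
pros_and_cons = [generate(f"Pros/cons of: {opt}")[0] for opt in options]
best = options[0]

# Step 3: only the chosen option reaches the final prompt; the text produced
# in steps 1 and 2 never appears in it, mirroring the `hidden` blocks.
plan = generate(f"Plan for: {best}")[0]

# The hidden intermediate results are still available as variables:
print('\n'.join('Option %d: %s' % (i, x) for i, x in enumerate(options)))
```

The key design point the sketch mirrors is that hidden steps still run and still produce variables; they are only excluded from the context the model sees in later steps.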