diff --git a/README.md b/README.md index 182c80a1..23e80c95 100644 --- a/README.md +++ b/README.md @@ -2,16 +2,14 @@ LLMs will continue to change the way we build software systems. They are not only useful as coding assistants, providing snippets of code, explanations, and code transformations, but they can also help replace components that could previously be achieved only with rule-based systems. Whether LLMs are used as coding assistants or software components, reliability remains an important concern. LLMs have a textual interface and the structure of useful prompts is not captured formally. Programming frameworks do not enforce or validate such structures since they are not specified in a machine-consumable way. The purpose of the Prompt Declaration Language (PDL) is to allow developers to specify the structure of prompts and to enforce it, while providing a unified programming framework for composing LLMs with rule-based systems. -PDL is based on the premise that interactions between users, LLMs and rule-based systems form a *document*. Consider for example the interactions between a user and a chatbot. At each interaction, the exchanges form a document that gets longer and longer. Similarly, chaining models together or using tools for specific tasks result in outputs that together form a document. PDL allows users to specify the shape and contents of such documents in a declarative way (in YAML or JSON), and is agnostic of any programming language. Because of its document-oriented nature, it can be used to easily express a variety of data generation tasks (inference, data synthesis, data generation for model training, etc...). Moreover, PDL programs themselves are structured data (YAML) as opposed to traditional code, so they make good targets for LLM generation as well. - +PDL is based on the premise that interactions between users, LLMs, and rule-based systems form a *document*. Consider, for example, the interactions between a user and a chatbot. 
At each interaction, the exchanges form a document that gets longer and longer. Similarly, chaining models together or using tools for specific tasks results in outputs that together form a document. PDL allows users to specify the shape and contents of such documents in a declarative way (in YAML), and is agnostic of any programming language. Because of its document-oriented nature, it can be used to easily express a variety of data generation tasks (inference, data synthesis, data generation for model training, etc.). PDL provides the following features: - Ability to use any LLM locally or remotely via [LiteLLM](https://www.litellm.ai/), including [IBM's Watsonx](https://www.ibm.com/watsonx) -- Ability to templatize not only prompts for one LLM call, but also composition of LLMs with tools (code and APIs). Templates can encompass tasks of larger granularity than a single LLM call (unlike many prompt programming languages) +- Ability to templatize not only prompts for one LLM call, but also the composition of LLMs with tools (code and APIs). Templates can encompass tasks of larger granularity than a single LLM call - Control structures: variable definitions and use, conditionals, loops, functions -- Ability to read from files, including JSON data -- Ability to call out to code. At the moment only Python is supported, but this could be any other programming language in principle -- Ability to call out to REST APIs with Python code +- Ability to read from files and stdin, including JSON data +- Ability to call out to code and call REST APIs (Python) - Type checking input and output of model calls - Python SDK - Support for chat APIs and chat templates @@ -24,21 +22,21 @@ See below for installation notes, followed by an [overview](#overview) of the la ## Interpreter Installation -The interpreter has been tested with Python version **3.12**. +The interpreter has been tested with Python versions **3.11 and 3.12**. 
To install the requirements for `pdl`, execute the command: ``` -pip3 install prompt-declaration-language +pip install prompt-declaration-language ``` To install the dependencies for development of PDL and to run all the examples, execute the command: ``` -pip3 install 'prompt-declaration-language[all]' +pip install 'prompt-declaration-language[dev]' +pip install 'prompt-declaration-language[examples]' +pip install 'prompt-declaration-language[docs]' ``` - - In order to run the examples that use foundation models hosted on [Watsonx](https://www.ibm.com/watsonx) via LiteLLM, you need a Watsonx account (a free plan is available) and to set up the following environment variables: - `WATSONX_URL`, the API URL (set to `https://{region}.ml.cloud.ibm.com`) of your Watsonx instance - `WATSONX_APIKEY`, the API key (see information on [key creation](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui#create_user_key)) @@ -49,12 +47,28 @@ For more information, see [documentation](https://docs.litellm.ai/docs/providers To run the interpreter: ``` -pdl <path/to/example.yaml> +pdl <path/to/example.pdl> ``` The folder `examples` contains many examples of PDL programs. Several of these examples have been adapted from the LMQL [paper](https://arxiv.org/abs/2212.06094) by Beurer-Kellner et al. The examples cover a variety of prompting patterns such as CoT, RAG, ReAct, and tool use. -We highly recommend using VSCode to edit PDL YAML files. This project has been configured so that every YAML file is associated with the PDL grammar JSONSchema (see [settings](https://github.com/IBM/prompt-declaration-language/blob/main/.vscode/settings.json) and [schema](https://github.com/IBM/prompt-declaration-language/blob/main/pdl-schema.json)). This enables the editor to display error messages when the yaml deviates from the PDL syntax and grammar. It also provides code completion. You can set up your own VSCode PDL projects similarly using this settings and schema files. 
The PDL interpreter also provides similar error messages. +We highly recommend using VSCode to edit PDL YAML files. This project has been configured so that every YAML file is associated with the PDL grammar JSONSchema (see [settings](https://github.com/IBM/prompt-declaration-language/blob/main/.vscode/settings.json) and [schema](https://github.com/IBM/prompt-declaration-language/blob/main/pdl-schema.json)). This enables the editor to display error messages when the YAML deviates from the PDL syntax and grammar. It also provides code completion. You can set up your own VSCode PDL projects similarly using the following `.vscode/settings.json` file: + +``` +{ + "yaml.schemas": { + "https://ibm.github.io/prompt-declaration-language/dist/pdl-schema.json": "*.pdl" + }, + "files.associations": { + "*.pdl": "yaml" + } +} +``` + +The interpreter executes Python code specified in PDL code blocks. To sandbox the interpreter for safe execution, +you can use the `--sandbox` flag, which runs the interpreter in a Docker container. Without this flag, the interpreter +and all code are executed locally. To use the `--sandbox` flag, you need to have a Docker daemon running, such as +[Rancher Desktop](https://rancherdesktop.io). The interpreter prints out a log by default in the file `log.txt`. This log contains the details of inputs and outputs to every block in the program. It is useful to examine this file when the program is behaving differently than expected. The log displays the exact prompts submitted to models by LiteLLM (after applying chat templates), which can be useful for debugging. @@ -233,7 +247,7 @@ The function `deserializeOffsetMap` takes a string as input and returns a map. I The `@SuppressWarnings("unchecked")` annotation is used to suppress the warning that the type of the parsed map is not checked. This is because the Jackson library is used to parse the input string into a map, but the specific type of the map is not known at compile time. 
Therefore, the warning is suppressed to avoid potential issues. ``` -Notice that in PDL variables are used to templatize any entity in the document, not just textual prompts to LLMs. We can add a block to this document to evaluate the quality of the output using a similarity metric with respect to our [ground truth](https://github.com/IBM/prompt-declaration-language/blob/main/examples/code/ground_truth.txt). See [file](https://github.com/IBM/prompt-declaration-language/blob/main/examples/code/code-eval.yaml): +Notice that in PDL, variables are used to templatize any entity in the document, not just textual prompts to LLMs. We can add a block to this document to evaluate the quality of the output using a similarity metric with respect to our [ground truth](https://github.com/IBM/prompt-declaration-language/blob/main/examples/code/ground_truth.txt). See [file](https://github.com/IBM/prompt-declaration-language/blob/main/examples/code/code-eval.pdl): ```yaml description: Code explanation example @@ -368,7 +382,7 @@ PDL has a Live Document visualizer to help in program understanding given an exe To produce an execution trace consumable by the Live Document, you can run the interpreter with the `--trace` argument: ``` -pdl --trace <my-example.yaml> +pdl --trace <my-example.pdl> ``` This produces an additional file named `my-example_trace.json` that can be uploaded to the [Live Document](https://ibm.github.io/prompt-declaration-language/viewer/) visualizer tool. 
Clicking on different parts of the Live Document will show the PDL code that produced that part @@ -379,7 +393,7 @@ This is similar to a spreadsheet for tabular data, where data is in the forefron ## Additional Notes -When using Granite models on Watsonx, we use the following defaults for model parameters: +When using Granite models on Watsonx, we use the following defaults for model parameters (except `granite-20b-code-instruct-r1.1`): - `decoding_method`: `greedy` - `max_new_tokens`: 1024 - `min_new_tokens`: 1 diff --git a/docs/README.md b/docs/README.md index f52fa689..7bd69f6c 100644 --- a/docs/README.md +++ b/docs/README.md @@ -7,16 +7,14 @@ hide: LLMs will continue to change the way we build software systems. They are not only useful as coding assistants, providing snippets of code, explanations, and code transformations, but they can also help replace components that could previously be achieved only with rule-based systems. Whether LLMs are used as coding assistants or software components, reliability remains an important concern. LLMs have a textual interface and the structure of useful prompts is not captured formally. Programming frameworks do not enforce or validate such structures since they are not specified in a machine-consumable way. The purpose of the Prompt Declaration Language (PDL) is to allow developers to specify the structure of prompts and to enforce it, while providing a unified programming framework for composing LLMs with rule-based systems. -PDL is based on the premise that interactions between users, LLMs and rule-based systems form a *document*. Consider for example the interactions between a user and a chatbot. At each interaction, the exchanges form a document that gets longer and longer. Similarly, chaining models together or using tools for specific tasks result in outputs that together form a document. 
PDL allows users to specify the shape and contents of such documents in a declarative way (in YAML or JSON), and is agnostic of any programming language. Because of its document-oriented nature, it can be used to easily express a variety of data generation tasks (inference, data synthesis, data generation for model training, etc...). Moreover, PDL programs themselves are structured data (YAML) as opposed to traditional code, so they make good targets for LLM generation as well. - +PDL is based on the premise that interactions between users, LLMs, and rule-based systems form a *document*. Consider, for example, the interactions between a user and a chatbot. At each interaction, the exchanges form a document that gets longer and longer. Similarly, chaining models together or using tools for specific tasks results in outputs that together form a document. PDL allows users to specify the shape and contents of such documents in a declarative way (in YAML), and is agnostic of any programming language. Because of its document-oriented nature, it can be used to easily express a variety of data generation tasks (inference, data synthesis, data generation for model training, etc.). PDL provides the following features: - Ability to use any LLM locally or remotely via [LiteLLM](https://www.litellm.ai/), including [IBM's Watsonx](https://www.ibm.com/watsonx) -- Ability to templatize not only prompts for one LLM call, but also composition of LLMs with tools (code and APIs). Templates can encompass tasks of larger granularity than a single LLM call (unlike many prompt programming languages) +- Ability to templatize not only prompts for one LLM call, but also the composition of LLMs with tools (code and APIs). Templates can encompass tasks of larger granularity than a single LLM call - Control structures: variable definitions and use, conditionals, loops, functions -- Ability to read from files, including JSON data -- Ability to call out to code. 
At the moment only Python is supported, but this could be any other programming language in principle -- Ability to call out to REST APIs with Python code +- Ability to read from files and stdin, including JSON data +- Ability to call out to code and call REST APIs (Python) - Type checking input and output of model calls - Python SDK - Support for chat APIs and chat templates @@ -29,21 +27,21 @@ See below for installation notes, followed by an [overview](#overview) of the la ## Interpreter Installation -The interpreter has been tested with Python version **3.12**. +The interpreter has been tested with Python versions **3.11 and 3.12**. To install the requirements for `pdl`, execute the command: ``` -pip3 install prompt-declaration-language +pip install prompt-declaration-language ``` To install the dependencies for development of PDL and to run all the examples, execute the command: ``` -pip3 install 'prompt-declaration-language[all]' +pip install 'prompt-declaration-language[dev]' +pip install 'prompt-declaration-language[examples]' +pip install 'prompt-declaration-language[docs]' ``` - - In order to run the examples that use foundation models hosted on [Watsonx](https://www.ibm.com/watsonx) via LiteLLM, you need a Watsonx account (a free plan is available) and to set up the following environment variables: - `WATSONX_URL`, the API URL (set to `https://{region}.ml.cloud.ibm.com`) of your Watsonx instance - `WATSONX_APIKEY`, the API key (see information on [key creation](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui#create_user_key)) @@ -54,12 +52,28 @@ For more information, see [documentation](https://docs.litellm.ai/docs/providers To run the interpreter: ``` -pdl <path/to/example.yaml> +pdl <path/to/example.pdl> ``` The folder `examples` contains many examples of PDL programs. Several of these examples have been adapted from the LMQL [paper](https://arxiv.org/abs/2212.06094) by Beurer-Kellner et al. 
The examples cover a variety of prompting patterns such as CoT, RAG, ReAct, and tool use. -We highly recommend using VSCode to edit PDL YAML files. This project has been configured so that every YAML file is associated with the PDL grammar JSONSchema (see [settings](https://github.com/IBM/prompt-declaration-language/blob/main/.vscode/settings.json) and [schema](https://github.com/IBM/prompt-declaration-language/blob/main/pdl-schema.json)). This enables the editor to display error messages when the yaml deviates from the PDL syntax and grammar. It also provides code completion. You can set up your own VSCode PDL projects similarly using this settings and schema files. The PDL interpreter also provides similar error messages. +We highly recommend using VSCode to edit PDL YAML files. This project has been configured so that every YAML file is associated with the PDL grammar JSONSchema (see [settings](https://github.com/IBM/prompt-declaration-language/blob/main/.vscode/settings.json) and [schema](https://github.com/IBM/prompt-declaration-language/blob/main/pdl-schema.json)). This enables the editor to display error messages when the YAML deviates from the PDL syntax and grammar. It also provides code completion. You can set up your own VSCode PDL projects similarly using the following `.vscode/settings.json` file: + +``` +{ + "yaml.schemas": { + "https://ibm.github.io/prompt-declaration-language/dist/pdl-schema.json": "*.pdl" + }, + "files.associations": { + "*.pdl": "yaml" + } +} +``` + +The interpreter executes Python code specified in PDL code blocks. To sandbox the interpreter for safe execution, +you can use the `--sandbox` flag, which runs the interpreter in a Docker container. Without this flag, the interpreter +and all code are executed locally. To use the `--sandbox` flag, you need to have a Docker daemon running, such as +[Rancher Desktop](https://rancherdesktop.io). The interpreter prints out a log by default in the file `log.txt`. 
This log contains the details of inputs and outputs to every block in the program. It is useful to examine this file when the program is behaving differently than expected. The log displays the exact prompts submitted to models by LiteLLM (after applying chat templates), which can be useful for debugging. @@ -238,7 +252,7 @@ The function `deserializeOffsetMap` takes a string as input and returns a map. I The `@SuppressWarnings("unchecked")` annotation is used to suppress the warning that the type of the parsed map is not checked. This is because the Jackson library is used to parse the input string into a map, but the specific type of the map is not known at compile time. Therefore, the warning is suppressed to avoid potential issues. ``` -Notice that in PDL variables are used to templatize any entity in the document, not just textual prompts to LLMs. We can add a block to this document to evaluate the quality of the output using a similarity metric with respect to our [ground truth](https://github.com/IBM/prompt-declaration-language/blob/main/examples/code/ground_truth.txt). See [file](https://github.com/IBM/prompt-declaration-language/blob/main/examples/code/code-eval.yaml): +Notice that in PDL variables are used to templatize any entity in the document, not just textual prompts to LLMs. We can add a block to this document to evaluate the quality of the output using a similarity metric with respect to our [ground truth](https://github.com/IBM/prompt-declaration-language/blob/main/examples/code/ground_truth.txt). 
See [file](https://github.com/IBM/prompt-declaration-language/blob/main/examples/code/code-eval.pdl): ```yaml description: Code explanation example @@ -373,7 +387,7 @@ PDL has a Live Document visualizer to help in program understanding given an exe To produce an execution trace consumable by the Live Document, you can run the interpreter with the `--trace` argument: ``` -pdl --trace <my-example.yaml> +pdl --trace <my-example.pdl> ``` This produces an additional file named `my-example_trace.json` that can be uploaded to the [Live Document](https://ibm.github.io/prompt-declaration-language/viewer/) visualizer tool. Clicking on different parts of the Live Document will show the PDL code that produced that part @@ -384,7 +398,7 @@ This is similar to a spreadsheet for tabular data, where data is in the forefron ## Additional Notes -When using Granite models on Watsonx, we use the following defaults for model parameters: +When using Granite models on Watsonx, we use the following defaults for model parameters (except `granite-20b-code-instruct-r1.1`): - `decoding_method`: `greedy` - `max_new_tokens`: 1024 - `min_new_tokens`: 1
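
For readers new to PDL, a minimal `.pdl` program of the kind these READMEs describe might look roughly as follows. This is an illustrative sketch only: the block keywords and the model id are assumptions based on the prose above, not taken from this diff; the actual grammar is defined by the `pdl-schema.json` file referenced earlier.

```yaml
# Illustrative PDL sketch -- keywords and model id are assumptions,
# not taken from this diff; see pdl-schema.json for the real grammar.
description: Minimal model-call example
text:
- "What is the capital of France?\n"      # literal prompt text appended to the document
- model: watsonx/ibm/granite-13b-chat-v2  # hypothetical model id, called via LiteLLM
  parameters:
    decoding_method: greedy               # defaults listed under "Additional Notes"
    max_new_tokens: 1024
    min_new_tokens: 1
```

Under the document model described above, the literal string and the model's response are concatenated, in order, to form the program's output document.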