docs(typescript/quickstart): first draft of classification example
Jack Hopkins committed Feb 14, 2024
1 parent 948baa9 commit c0468b1
Showing 4 changed files with 186 additions and 124 deletions.
2 changes: 1 addition & 1 deletion fern/docs/pages/python/quickstart.mdx
@@ -4,7 +4,7 @@ description: Here you'll find information to get started quickly using Tanuki.
---


Here you'll find information to get started with Tanuki.py as well as our API Reference.
Here you'll learn how to get started with Tanuki.py as well as our API Reference.

## Setup

2 changes: 1 addition & 1 deletion fern/docs/pages/sdks.mdx
@@ -7,7 +7,7 @@ We provide official open-source SDKs (client libraries) for Typescript and Pytho

We regularly update our SDKs, and adhere to [semantic versioning](https://semver.org/) (semver) principles, so we won't make breaking changes in minor or patch releases.

Both SDKs are implemented in lock-step, so both clients have the same features and functionality in with the same major and minor version numbers.
Both SDKs are implemented in lock-step, so both clients have the same features and functionality with the same major and minor version numbers.

## Official SDKs

217 changes: 158 additions & 59 deletions fern/docs/pages/typescript/quickstart.mdx
@@ -3,71 +3,152 @@ title: Quickstart
description: Here you'll find information to get started quickly using Tanuki.
---

<Callout intent="success">
Integrate LLMs into your workflow in under 5 minutes.
</Callout>

## Quickstart

Welcome to our documentation! Here you'll find information to get started as well as our API Reference.
Here you'll learn how to get started with Tanuki.ts with a basic classification example.

## Setup

To get started:
1. Install the python or Typescript package
<CodeBlocks>
<CodeBlock title="Typescript">
```bash
npm install tanuki.ts
```
</CodeBlock>
<CodeBlock title="Python">
```bash
pip3 install tanuki.py
```
</CodeBlock>
</CodeBlocks>
2. Add the API and authentication keys. To set the API key for the default OpenAI models
### Install the package
<CodeBlock title="Install">
```bash
npm install tanuki.ts
npm install ts-patch --save-dev
npx ts-patch install
```
</CodeBlock>

### Authenticate with your model provider

Add the API and authentication keys by setting environment variables.

You can do this by adding the following to your `.bashrc` or `.zshrc` file, or by running the following commands in your terminal.

<CodeBlocks>
<CodeBlock title="OpenAI setup">
```bash
export OPENAI_API_KEY=sk-...
```
</CodeBlock>
<CodeBlock title="Amazon Bedrock">
```bash
export BEDROCK_API_KEY=sk-...
```
</CodeBlock>
<CodeBlock title="Together AI">
```bash
export TOGETHER_API_KEY=sk-...
```
</CodeBlock>
<CodeBlock title="Anyscale">
```bash
export ANYSCALE_API_KEY=sk-...
```
</CodeBlock>
</CodeBlocks>
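Before moving on, you can sanity-check that a key is actually visible to your Node process. This is an illustrative helper (not part of Tanuki), assuming the OpenAI variable name from the setup above:

```typescript
// Illustrative helper (not part of Tanuki): report whether a provider key
// is set in the current environment before any patched function runs.
function hasProviderKey(name: string): boolean {
  const value = process.env[name];
  return typeof value === "string" && value.length > 0;
}

console.log("OpenAI key configured:", hasProviderKey("OPENAI_API_KEY"));
```

If this prints `false`, re-export the key in the shell that launches your app.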

## Create the function
3. Create a python function stub decorated with `@tanuki.patch` including type hints and a docstring.
4. (Optional) Create another function decorated with `@tanuki.align` containing normal `assert` statements declaring the expected behaviour of your patched function with different inputs. When executing the function, the function annotated with `align` must also be called.
5. (Optional) Configure the model you want to use the function for. By default GPT-4 is used but if you want to use any other models supported in our stack, then configure them in the `@tanuki.patch` operator. You can find out exactly how to configure OpenAI, Amazon Bedrock and Together AI models in the [models](placeholder_url) section.
The patched function can now be called as normal in the rest of your code.
## Add Tanuki to your build system

## Default

Here is what the whole script for a simple classification function would look like:
Next, we need to add the Tanuki type transformer as a plugin to your `tsconfig.json` file.

<CodeBlocks>
<CodeBlock title="Python">
```python
import tanuki
from typing import Literal, Optional

@tanuki.patch
def classify_sentiment(msg: str) -> Optional[Literal['Good', 'Bad']]:
"""Classifies a message from the user into Good, Bad or None."""

@tanuki.align
def align_classify_sentiment():
assert classify_sentiment("I love you") == 'Good'
assert classify_sentiment("I hate you") == 'Bad'
assert not classify_sentiment("People from Phoenix are called Phoenicians")

align_classify_sentiment()
print(classify_sentiment("I like you")) # Good
print(classify_sentiment("Apples might be red")) # None
```
</CodeBlock>
<CodeBlock title="Typescript">
```bash
This will allow Tanuki to be aware of your patched functions and types at runtime, as these types are erased when transpiling into Javascript.

```json
{
"compilerOptions": {
"plugins": [
{
"transform": "tanuki.ts/tanukiTransformer"
}
]
}
}
```
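If the transformer silently fails to run, a quick check of your config can save debugging time. The helper below is purely illustrative (not part of Tanuki) and assumes a plain-JSON `tsconfig.json` (note that `JSON.parse` will reject the comments tsconfig otherwise allows):

```typescript
// Hypothetical helper: returns true if a tsconfig registers the Tanuki
// transformer under compilerOptions.plugins. Assumes plain JSON input
// (JSON.parse does not accept comments or trailing commas).
function hasTanukiTransformer(tsconfigText: string): boolean {
  const config = JSON.parse(tsconfigText);
  const plugins: Array<{ transform?: string }> = config?.compilerOptions?.plugins ?? [];
  return plugins.some((p) => p.transform === "tanuki.ts/tanukiTransformer");
}

const example = `{"compilerOptions":{"plugins":[{"transform":"tanuki.ts/tanukiTransformer"}]}}`;
console.log(hasTanukiTransformer(example)); // → true
```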

### Next.js Setup

As Next.js has its own internal build system, you must explicitly add the transformer to your `package.json` file instead by adding the following scripts.

```json
{
"scripts": {
"predev": "tanuki-type-compiler",
"prebuild": "tanuki-type-compiler",
"prestart": "tanuki-type-compiler"
}
}
```

## Create a function

Create a Typescript class, and inside it create a static method decorated with `patch` containing type hints and a docstring.

<CodeBlock title="Patch">
```typescript
import { patch, Tanuki } from "tanuki.ts";

class ClassifierSentiment {
static classifySentiment = patch<"Good" | "Bad" | null, string>()
`Classifies a message from the user into Good, Bad or null.`;
}
```
</CodeBlock>
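Because the patched method is an ordinary async function with a typed return value, its result flows straight into normal control flow. The sketch below stubs the classifier so the snippet stays self-contained; in your app you would call the patched classifier from the class above instead (the stub's keyword logic is purely illustrative):

```typescript
// Sketch: consuming the typed result downstream. The stub stands in for the
// patched classifier so this compiles and runs without a configured runtime.
type Sentiment = "Good" | "Bad" | null;

async function classifySentimentStub(msg: string): Promise<Sentiment> {
  if (msg.includes("love")) return "Good";
  if (msg.includes("hate")) return "Bad";
  return null;
}

async function routeMessage(msg: string): Promise<string> {
  const sentiment = await classifySentimentStub(msg);
  switch (sentiment) {
    case "Good":
      return "Thanks for the kind words!";
    case "Bad":
      return "Sorry to hear that.";
    default:
      return "Noted.";
  }
}
```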

## Align the function (Optional)

Create a block using `Tanuki.align` containing Jest-like `expect` statements to calibrate the expected behaviour of your patched function with different inputs.

<Callout intent="info">
See the [align](placeholder_url) section for more information on how to use the `align` operator.
</Callout>

<CodeBlock title="Align">
```typescript
import { Tanuki } from "tanuki.ts";

Tanuki.align(async (it) => {
it("declares how classifySentiment should behave", async (expect) => {
await expect(ClassifierSentiment.classifySentiment("I love you")).toEqual('Good');
await expect(ClassifierSentiment.classifySentiment("I hate you")).toEqual('Bad');
await expect(ClassifierSentiment.classifySentiment("People from Phoenix are called Phoenicians")).toBeNull();
})
})
```
</CodeBlock>

## Configure the Tanuki runtime

By default, Tanuki uses GPT-4 as the teacher model.

You can configure your Tanuki runtime either by:
- passing in the desired parameters to the `patch<>` operator.
- globally setting Tanuki environment variables.

<Callout intent="info">
See the [configuration](placeholder_url) API reference for more configuration options.
</Callout>

Here is an example of how to configure the Tanuki runtime to use the Llama model from Bedrock as the teacher model.

<CodeBlock title="Typescript">
```typescript
class ClassifierSentiment {
static classifySentiment = patch<"Good" | "Bad" | null, string>({
teacherModels: ["llama_70b_chat_aws"],
generationParams: {
"max_new_tokens": 10,
}
})`Classifies a message from the user into Good, Bad or null.`;
}
```
</CodeBlock>

## Putting it all together

<CodeBlock title="Typescript">
```typescript
class ClassifierSentiment {
static classifySentiment = patch<"Good" | "Bad" | null, string>()
`Classifies a message from the user into Good, Bad or null.`;
@@ -82,15 +163,33 @@ Here is what the whole script for a simple classification function would look
})

console.log(await ClassifierSentiment.classifySentiment("I like you")) // Good
console.log(await ClassifierSentiment.classifySentiment("Apples might be red")) // Null
console.log(await ClassifierSentiment.classifySentiment("Apples might be red")) // null
```
</CodeBlock>
</CodeBlocks>
## Next steps
If you want to find out how to create more complex functions and aligns, check out the [Functions](placeholder_url) or the [Aligns](placeholder_url) section.
If you want to find out how to use different teacher and student models, check out the [Models](placeholder_url) section.
If you want to find out how Tanuki distills larger models down to smaller models, check out the [Distillation](placeholder_url) section.
If you want to see a whole array of example functions with Tanuki, check out the [Examples](placeholder_url) section.
</CodeBlock>


## Next steps
<br />
<Cards>
<Card
title="Learn how to create more complex functions and aligns"
icon="fa-book"
href="/api-reference/python"
/>
<Card
title="Learn how to use different teacher and student models"
icon="fa-graduation-cap"
href="/api-reference/python"
/>
<Card
title="Learn how Tanuki distills larger models down to smaller models"
icon="fa-flask"
href="/api-reference/python"
/>
<Card
title="See examples of what else you can do with Tanuki"
icon="fa-lightbulb"
href="/api-reference/python"
/>
</Cards>
<br />
89 changes: 26 additions & 63 deletions fern/docs/pages/welcome.mdx
@@ -7,21 +7,33 @@ description: Here you'll find information to get started, as well as a sample us
{/* Start building beautiful documentation in under 5 minutes. */}
{/* </Callout> */}

Tanuki is a framework for developing powerful, fast and reliable LLM applications.
Tanuki is a framework for developing powerful LLM applications without prompt engineering.

## Featuring
- Integrates in less than 5 minutes
- Reduces LLM cost and latency by up to 90% through automatic distillation
- Declarative API for aligning LLM behavior to your specifications
- Typed and validated input and outputs
- Support for leading LLM providers such as OpenAI, Bedrock, and more
- SDKs in Python and Typescript
## Features
- Integrate in less than 5 minutes
- Optimize cost and latency by up to 90% through automatic distillation
- Align LLM behavior to your specifications declaratively
- Strongly typed and validated inputs and outputs
- Supports leading LLM providers such as OpenAI, Bedrock, and more
- Open-source SDKs in Python and Typescript

## Use cases
## Principles

Tanuki is
Tanuki is designed to help you build LLM applications using software engineering best practices.

It is designed with simplicity in mind,
It is built on the following principles:
- **Ergonomic**: Use idiomatic patterns and best practices in your language of choice for cleaner and more maintainable code.
- **Logic as Code**: Define your LLM features as typed functions, rather than prompt engineering.
- **Test-Driven**: Declare the behavior of your LLM functions with unit-tests.
- **Convention over Configuration**: Tanuki is designed to work out of the box with sensible defaults, but is also highly configurable.
- **Open Source**: Tanuki is open-source and community-driven.

## When to use Tanuki

- You are building features that require reasoning, understanding, or decision-making over complex or unstructured data.
- You need lower latency and lower cost without sacrificing accuracy.
- You want to align LLM behavior to your specifications declaratively.
- You want to use outputs from an LLM in the rest of your application.


Welcome to our documentation! Here you'll find information to get started as well as our API Reference.
@@ -56,61 +68,12 @@ Welcome to our documentation! Here you'll find information to get started as wel
/>
</Cards>

## Getting started

To get started, customize the links, content, and theme to match your brand!

## SDKs

In addition to documentation, Fern can generate SDKs (client libraries) in popular programming languages. That way, you can offer users an experience such as:

<CodeBlocks>
<CodeBlock title="Node">
```bash
npm install your-organization
# or
yarn add your-organization
```
</CodeBlock>
<CodeBlock title="Python">
```bash
pip install your-organization
```
</CodeBlock>
<CodeBlock title="Go">
```bash
go get -u github.com/your-organization/go
```
</CodeBlock>
<CodeBlock title="Java">
```bash
<dependency>
<groupId>com.your-organization</groupId>
<artifactId>your-organization</artifactId>
<version>2.0.0</version>
</dependency>
# or
implementation("com.your-organization.java:sdk:2.0.0")
```
</CodeBlock>
<CodeBlock title="Ruby">
```bash
gem install your-organization
```
</CodeBlock>
<CodeBlock title="C#">
```bash
nuget install your-organization.net
```
</CodeBlock>
</CodeBlocks>

## Need help?

We have three channels to assist:

1. Join the [Discord](https://discord.com/invite/JkkXumPzcG) where you can get help from Fern developers and community members.
1. Join the [Discord](https://discord.com/invite/JkkXumPzcG) where you can get help from Tanuki developers and community members.

2. Open an issue on [GitHub](https://github.com/fern-api/docs-starter/issues/new) and we'll get back to you promptly.
2. Open a Github issue on [Tanuki.ts](https://github.com/Tanuki/tanuki.ts/issues/new) or [Tanuki.py](https://github.com/Tanuki/tanuki.py/issues/new) and we'll get back to you promptly.

3. Email us at [support@buildwithfern.com](mailto:support@buildwithfern.com) and we'll do our best to reply within 48 hours.
3. Email us at [support@tanuki.land](mailto:support@tanuki.land) and we'll do our best to reply within 48 hours.
