Merge branch 'main' into reformat-code-samples-2
trentfowlercohere authored Jan 23, 2025
2 parents 0ccdd93 + 60d68b8 commit 6975ea7
Showing 17 changed files with 704 additions and 457 deletions.
16 changes: 8 additions & 8 deletions fern/pages/text-generation/structured-outputs.mdx
@@ -14,7 +14,7 @@ updatedAt: "Tue Jun 11 2024 02:43:00 GMT+0000 (Coordinated Universal Time)"

## Overview

Structured Outputs is a feature for forcing the LLM's output to follow a specified format 100% of the time. This increases the reliability of LLM in enterprise applications where downstream applications expect the LLM output to be correctly formatted. By forcing the model to follow a structured schema, hallucinated fields and entries in structured data can be reliably eliminated.
Structured Outputs is a feature that forces the LLM’s response to strictly follow a schema specified by the user. When Structured Outputs is turned on, the LLM will generate structured data that follows the desired schema, provided by the user, 100% of the time. This increases the reliability of LLMs in enterprise applications where downstream applications expect the LLM output to be correctly formatted. With Structured Outputs, hallucinated fields and entries in structured data can be reliably eliminated.

Compatible models:
- Command R 08 2024
@@ -75,8 +75,8 @@ By setting the `response_format` type to `"json_object"` in the Chat API, the ou
When using `{ "type": "json_object" }`, your `message` should always explicitly instruct the model to generate a JSON (e.g., _"Generate a JSON ..."_). Otherwise the model may end up getting stuck generating an infinite stream of characters and eventually run out of context length.
</Info>
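As a sketch of the request shape described above: the `response_format` field follows the description in the text, while the prompt wording and the sample response string are invented for illustration.

```python
import json

# Hypothetical request parameters for JSON mode. Only the
# "response_format" value comes from the docs above; the prompt
# text is illustrative.
request = {
    "message": "Generate a JSON of a bookstore, with 'name' and 'city' fields.",
    "response_format": {"type": "json_object"},
}

# Because JSON mode guarantees syntactically valid JSON, the response
# text can be parsed directly. This string is a made-up example of
# what the model might return.
response_text = '{"name": "The Reading Room", "city": "Toronto"}'
bookstore = json.loads(response_text)
print(bookstore["city"])  # Toronto
```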

<Note title="Experimental">
This feature is currently not supported in RAG mode.
<Note title="Note">
This feature is currently not supported in [RAG](https://docs.cohere.com/v1/docs/retrieval-augmented-generation-rag) mode.
</Note>

#### JSON Schema mode
@@ -125,15 +125,11 @@ In this schema, we defined three keys ("title," "author," "publication_year") an
```

<Info title="Important">
Note: Each schema provided will incur a latency overhead required for processing the schema. This is only applicable for the first few requests.
</Info>
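To make the schema discussed above concrete, here is a hedged sketch: the three keys ("title", "author", "publication_year") come from the text, while the type constraints and the sample output are illustrative assumptions.

```python
# A sketch of a schema with the three keys described above. The type
# constraints are illustrative assumptions, not taken from the docs.
book_schema = {
    "type": "object",
    "required": ["title", "author", "publication_year"],
    "properties": {
        "title": {"type": "string"},
        "author": {"type": "string"},
        "publication_year": {"type": "integer"},
    },
}

# A hypothetical model output, checked against the required keys.
sample_output = {
    "title": "Frankenstein",
    "author": "Mary Shelley",
    "publication_year": 1818,
}
missing = [k for k in book_schema["required"] if k not in sample_output]
print(missing)  # []
```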

## Specifying a schema

### Generating nested objects

The model can be configured to output objects with up to 5 levels of nesting. When a `schema` is specified, there are no limitations on the levels of nesting.
In JSON Schema mode, there are no limitations on the levels of nesting. However, in JSON mode (no schema specified), nesting is limited to 5 levels.
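The nesting limits above can be illustrated with a small helper that counts object-nesting levels in a schema. The helper and the two-level example schema are entirely illustrative, not part of any SDK.

```python
# Illustrative helper: count how many levels of "object" nesting a
# JSON schema contains. Not a Cohere utility.
def object_depth(schema, depth=0):
    if schema.get("type") != "object":
        return depth
    props = schema.get("properties", {})
    if not props:
        return depth + 1
    return max(object_depth(s, depth + 1) for s in props.values())


# A hypothetical two-level schema: an object containing an object.
nested_schema = {
    "type": "object",
    "properties": {
        "author": {
            "type": "object",
            "properties": {"name": {"type": "string"}},
        }
    },
}
print(object_depth(nested_schema))  # 2
```

In JSON mode a schema like this would need to stay at depth 5 or less; in JSON Schema mode the depth is unrestricted.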

### Schema constraints

@@ -164,3 +160,7 @@ We do not support the entirety of the [JSON Schema specification](https://json-s
- Others:
- `uniqueItems`
- `additionalProperties`
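As a sketch, a schema could be screened for unsupported keywords before it is sent. The helper below is illustrative, not a Cohere utility, and its blocklist covers only the two keywords visible in the list tail shown here.

```python
# Keywords taken from the "Others" list above; the full unsupported
# set is longer than what this excerpt shows.
UNSUPPORTED = {"uniqueItems", "additionalProperties"}


def find_unsupported(schema):
    # Recursively collect unsupported keywords used anywhere in the schema.
    found = set(schema) & UNSUPPORTED
    for value in schema.values():
        if isinstance(value, dict):
            found |= find_unsupported(value)
    return found


# A hypothetical schema that uses both unsupported keywords.
schema = {
    "type": "object",
    "additionalProperties": False,
    "properties": {"tags": {"type": "array", "uniqueItems": True}},
}
print(sorted(find_unsupported(schema)))  # ['additionalProperties', 'uniqueItems']
```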

<Info title="Important">
Note: Using Structured Outputs (in JSON Schema mode) will incur a latency overhead required for processing the structured schema. This increase in latency only applies for the first few requests, since the schema is cached afterwards.
</Info>
12 changes: 0 additions & 12 deletions fern/pages/text-generation/tools/tool-use.mdx
@@ -323,18 +323,6 @@ outputs = [{"number": 2343}] # Not Great
outputs = [{"sum": 2343}] # Better
```
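The naming advice above can be sketched as a tool function whose output key describes the value it holds. The function itself is a hypothetical example.

```python
# Hypothetical tool: the descriptive key "sum" tells the model what
# the value means, where a generic key like "number" would not.
def add_numbers(a: int, b: int) -> dict:
    return {"sum": a + b}


outputs = [add_numbers(1000, 1343)]
print(outputs)  # [{'sum': 2343}]
```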

## What's Next?

Here, we'll preview some of the functionality we plan on adding in the coming months.

### Cohere-hosted Tools

The model can currently handle any tool provided by the developer. That having been said, Cohere has implemented some pre-defined tools that users can leverage out-of-the-box.

Specifically we're going to roll out a **Python interpreter** tool and a **Web search** tool.

Please [reach out](mailto:[email protected]) to join the beta.

## Getting started

Check out [this notebook](https://github.com/cohere-ai/cohere-developer-experience/blob/main/notebooks/agents/Vanilla_Tool_Use.ipynb) for a worked-out example.
7 changes: 4 additions & 3 deletions fern/pages/tutorials/build-things-with-cohere.mdx
@@ -38,7 +38,8 @@ Next, we'll import the `cohere` library and create a client to be used throughou
```python PYTHON
import cohere

co = cohere.Client(api_key="YOUR_COHERE_API_KEY") # Get your API key here: https://dashboard.cohere.com/api-keys
# Get your API key here: https://dashboard.cohere.com/api-keys
co = cohere.Client(api_key="YOUR_COHERE_API_KEY")
```

# Accessing Cohere from Other Platforms
@@ -93,8 +94,8 @@ For further information, read this documentation on [Cohere on Azure](/docs/cohe
import cohere

co = cohere.Client(
api_key="...",
base_url="...",
    api_key="...",
    base_url="...",
)
```

@@ -30,7 +30,8 @@ To get started, first we need to install the `cohere` library and create a Coher

import cohere

co = cohere.Client("COHERE_API_KEY") # Get your API key: https://dashboard.cohere.com/api-keys
# Get your API key: https://dashboard.cohere.com/api-keys
co = cohere.Client("COHERE_API_KEY")
```

## Creating a custom preamble
@@ -48,15 +49,14 @@ In the example below, the preamble provides context for the assistant's task (ta
message = "I'm joining a new startup called Co1t today. Could you help me write a short introduction message to my teammates."

# Create a custom preamble
preamble="""## Task and Context
preamble = """## Task and Context
You are an assistant who assists new employees of Co1t with their first week.
## Style Guide
Try to speak in rhymes as much as possible. Be professional."""

# Generate the response
response = co.chat(message=message,
preamble=preamble)
response = co.chat(message=message, preamble=preamble)

print(response.text)
```
@@ -107,12 +107,11 @@ Here, we are also adding a custom preamble for generating concise response, just
message = "I'm joining a new startup called Co1t today. Could you help me write a short introduction message to my teammates."

# Create a custom preamble
preamble="""## Task & Context
preamble = """## Task & Context
Generate concise responses, with a maximum of one sentence.

# Generate the response
response = co.chat(message=message,
preamble=preamble)
response = co.chat(message=message, preamble=preamble)

print(response.text)
```
@@ -136,9 +135,11 @@ Looking at the response, we see that the model is able to get the context from t
message = "Make it more upbeat and conversational."

# Generate the response with the current chat history as the context
response = co.chat(message=message,
preamble=preamble,
chat_history=response.chat_history)
response = co.chat(
message=message,
preamble=preamble,
chat_history=response.chat_history,
)

print(response.text)
```
@@ -157,12 +158,16 @@ You can continue doing this for any number of turns by passing the most recent `

```python PYTHON
# Add the user message
message = "Thanks. Could you create another one for my DM to my manager."
message = (
"Thanks. Could you create another one for my DM to my manager."
)

# Generate the response with the current chat history as the context
response = co.chat(message=message,
preamble=preamble,
chat_history=response.chat_history)
response = co.chat(
message=message,
preamble=preamble,
chat_history=response.chat_history,
)

print(response.text)
```
@@ -178,8 +183,8 @@ To look at the current chat history, you can print the `response.chat_history` o
```python PYTHON
# View the chat history
for turn in response.chat_history:
print("Role:",turn.role)
print("Message:",turn.message,"\n")
print("Role:", turn.role)
print("Message:", turn.message, "\n")
```
