diff --git a/fern/pages/v2/tool-use/tool-use-citations.mdx b/fern/pages/v2/tool-use/tool-use-citations.mdx
new file mode 100644
index 00000000..93481ebc
--- /dev/null
+++ b/fern/pages/v2/tool-use/tool-use-citations.mdx
@@ -0,0 +1,31 @@
+---
+title: "Citations for tool use"
+slug: "v2/docs/tool-use-citations"
+
+hidden: false
+description: >-
+ TBD
+image: "../../../assets/images/4a5325a-cohere_meta_image.jpg"
+keywords: "Cohere, text generation, LLMs, generative AI"
+
+createdAt: "Thu Feb 29 2024 18:05:29 GMT+0000 (Coordinated Universal Time)"
+updatedAt: "Tue Jun 18 2024 07:20:15 GMT+0000 (Coordinated Universal Time)"
+---
+
+## Accessing citations
+[[TODO - describe citations and how to access them, what do they contain]]
+
+### Non-streaming
+[[TODO - show how to access citations from the response object in a non-streaming scenario]]
+
+### Streaming
+[[TODO - show how to access citations from the response object in a streaming scenario]]
+
+## Citation modes
+[[TODO - describe citation modes - fast, accurate, etc]]
+
+### Accurate citations
+[[TODO - show code example of accurate citations]]
+
+### Fast citations
+[[TODO - show code example of fast citations]]
diff --git a/fern/pages/v2/tool-use/tool-use-faqs.mdx b/fern/pages/v2/tool-use/tool-use-faqs.mdx
new file mode 100644
index 00000000..feadaa67
--- /dev/null
+++ b/fern/pages/v2/tool-use/tool-use-faqs.mdx
@@ -0,0 +1,14 @@
+---
+title: "Tool use - FAQs"
+slug: "v2/docs/tool-use-faqs"
+
+hidden: false
+description: >-
+ TBD
+image: "../../../assets/images/4a5325a-cohere_meta_image.jpg"
+keywords: "Cohere, text generation, LLMs, generative AI"
+
+createdAt: "Thu Feb 29 2024 18:05:29 GMT+0000 (Coordinated Universal Time)"
+updatedAt: "Tue Jun 18 2024 07:20:15 GMT+0000 (Coordinated Universal Time)"
+---
+[[TODO - FAQs]]
\ No newline at end of file
diff --git a/fern/pages/v2/tool-use/tool-use-multi-step.mdx b/fern/pages/v2/tool-use/tool-use-multi-step.mdx
new file mode 100644
index 00000000..c8390259
--- /dev/null
+++ b/fern/pages/v2/tool-use/tool-use-multi-step.mdx
@@ -0,0 +1,27 @@
+---
+title: "Multi-step tool use (agents)"
+slug: "v2/docs/tool-use-multi-step"
+
+hidden: false
+description: >-
+ TBD
+image: "../../../assets/images/4a5325a-cohere_meta_image.jpg"
+keywords: "Cohere, text generation, LLMs, generative AI"
+
+createdAt: "Thu Feb 29 2024 18:05:29 GMT+0000 (Coordinated Universal Time)"
+updatedAt: "Tue Jun 18 2024 07:20:15 GMT+0000 (Coordinated Universal Time)"
+---
+## Overview
+[[TODO - describe multi-step tool use (agents)]]
+
+## State management
+[[TODO - describe how the messages list construction for multi-step differs from single-step]]
+
+## Multi-step reasoning
+[[TODO - example multi-step tool use wrt multi-step reasoning]]
+
+## Self-correction
+[[TODO - example multi-step tool use wrt self-correction]]
+
+## Multi-step, parallel tool use
+[[TODO - example of multi-step, parallel tool use]]
\ No newline at end of file
diff --git a/fern/pages/v2/tool-use/tool-use-overview.mdx b/fern/pages/v2/tool-use/tool-use-overview.mdx
new file mode 100644
index 00000000..4a771ae2
--- /dev/null
+++ b/fern/pages/v2/tool-use/tool-use-overview.mdx
@@ -0,0 +1,244 @@
+---
+title: "Tool use - basic usage"
+slug: "v2/docs/tool-use-overview"
+
+hidden: false
+description: >-
+ TBD
+image: "../../../assets/images/4a5325a-cohere_meta_image.jpg"
+keywords: "Cohere, text generation, LLMs, generative AI"
+
+createdAt: "Thu Feb 29 2024 18:05:29 GMT+0000 (Coordinated Universal Time)"
+updatedAt: "Tue Jun 18 2024 07:20:15 GMT+0000 (Coordinated Universal Time)"
+---
+
+## Overview
+
+Tool use is a technique that allows developers to connect Cohere’s Command R family of models to external tools such as search engines, APIs, functions, and databases.
+
+This unlocks a richer set of behaviors: leveraging data stored in tools, taking actions through APIs, interacting with a vector database, querying a search engine, and more. It is particularly valuable for enterprise developers, since much enterprise data lives in external sources.
+
+The Chat endpoint comes with built-in tool use capabilities such as function calling, multi-step reasoning, and citation generation.
+
+
+
+## Setup
+
+First, import the Cohere library and create a client.
+
+```python PYTHON
+# ! pip install -U cohere
+import cohere
+
+co = cohere.ClientV2("COHERE_API_KEY") # Get your free API key here: https://dashboard.cohere.com/api-keys
+```
+
+## Tool definition
+
+The prerequisite, or Step 0, of any tool use workflow is defining the tools. We can break this down into two steps:
+
+- Creating the tool
+- Defining the tool schema
+
+
+
+### Creating the tool
+
+A tool can be any function that you create or any external service that returns an object for a given input. Some examples: a web search engine, an email service, an SQL database, a vector database, a weather data service, a sports data service, or even another LLM.
+
+In this example, we define a `get_weather` function that returns the temperature for a given location (the query). You can implement any logic here, but to keep the example simple, we hardcode the return value to be the same for all queries.
+
+```python PYTHON
+def get_weather(location):
+ # Implement any logic here
+ return [{"temperature": "20C"}]
+ # Return a list of objects e.g. [{"url": "abc.com", "text": "..."}, {"url": "xyz.com", "text": "..."}]
+
+functions_map = {"get_weather": get_weather}
+```
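+
+As a quick check, you can run the tool directly through the `functions_map` dictionary; with the hardcoded logic above, it returns the same result for any location:
+
+```python PYTHON
+# Call the tool directly to verify that it works as expected
+print(functions_map["get_weather"]("Toronto"))
+# [{'temperature': '20C'}]
+```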
+
+### Defining the tool schema
+
+We also need to define the tool schemas in a format that can be passed to the Chat endpoint. The schema follows the JSON Schema specification and must contain the following fields:
+- `name`: the name of the tool.
+- `description`: a description of what the tool is and what it is used for.
+- `parameters`: the parameters that the tool accepts, described as a JSON Schema object with the following fields:
+    - `type`: the type of the parameters object (typically `object`).
+    - `properties`: one entry per parameter, keyed by the parameter name, each containing:
+        - `type`: the type of the parameter.
+        - `description`: a description of what the parameter is and what it is used for.
+    - `required`: a list of the parameter names (from `properties`) that are required.
+
+This schema informs the LLM about what the tool does, and the LLM decides whether to use a particular tool based on the information it contains. Therefore, the clearer and more descriptive the schema, the more likely the LLM is to make the right tool call decisions.
+
+```python PYTHON
+tools = [
+ {
+ "type": "function",
+ "function": {
+ "name": "get_weather",
+ "description": "gets the weather of a given location",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "location": {
+ "type": "string",
+                        "description": "the location to get weather, example: San Francisco, CA",
+ }
+ },
+ "required": ["location"],
+ },
+ },
+ },
+]
+```
+
+## Tool use workflow
+
+We can think of a tool use system as consisting of four components:
+
+- The user
+- The application
+- The LLM
+- The tools
+
+At its most basic, these four components interact in a workflow through four steps:
+
+- Step 1: **Get user message**: The LLM gets the user message (via the application).
+- Step 2: **Generate tool calls**: The LLM decides which tools to call (if any) and generates the tool calls.
+- Step 3: **Get tool results**: The application executes the tools, and the results are sent to the LLM.
+- Step 4: **Generate response and citations**: The LLM generates the response and citations, which are passed back to the user.
+
+
+
+### Step 1: Get user message
+In the first step, we get the user's message and put it in the `messages` list with the `role` set to `user`.
+
+```python PYTHON
+messages = [{"role": "user", "content": "What's the weather in Toronto?"}]
+```
+
+
+Optional: If you want to define a system message, you can add it to the `messages` list with the `role` set to `system`.
+
+```python PYTHON
+system_message = """## Task & Context
+You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging.
+
+## Style Guide
+Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.
+"""
+
+messages = [
+ {"role": "system", "content": system_message},
+ {"role": "user", "content": "What's the weather in Toronto?"},
+]
+```
+
+
+
+### Step 2: Generate tool calls
+
+Next, we call the Chat endpoint to generate the list of tool calls. This is done by passing the parameters `model`, `messages`, and `tools` to the Chat endpoint.
+
+If the model determines that tools are required, the endpoint sends back the tool calls to be made. Specifically, the response contains two pieces of information:
+- `tool_plan`: the model's reflection on the next steps it should take, given the user query.
+- `tool_calls`: a list of tool calls to be made (if any), together with the tool call IDs.
+
+We then add this information to the `messages` list with the `role` set to `assistant`.
+
+```python PYTHON
+response = co.chat(
+ model="command-r-plus-08-2024",
+ messages=messages,
+ tools=tools
+)
+
+if response.message.tool_calls:
+ messages.append(
+ {
+ "role": "assistant",
+ "tool_calls": response.message.tool_calls,
+ "tool_plan": response.message.tool_plan,
+ }
+ )
+ print(response.message.tool_calls)
+```
+
+```mdx wordWrap
+[ToolCallV2(id='get_weather_776n8ctsgycn', type='function', function=ToolCallV2Function(name='get_weather', arguments='{"location":"Toronto"}'))]
+```
+
+### Step 3: Get tool results
+In this step, we perform the function calling: we run the necessary tools based on the tool call payloads returned by the endpoint.
+
+For each tool call, we build a `tool_content` list, where each item has `type` set to `document` and a `document` field containing the tool result (as a JSON string).
+
+We then add this information to the `messages` list with the `role` set to `tool`, together with the tool call IDs that were generated in the previous step.
+
+```python PYTHON
+import json
+
+if response.message.tool_calls:
+ for tc in response.message.tool_calls:
+ tool_result = functions_map[tc.function.name](
+ **json.loads(tc.function.arguments)
+ )
+ tool_content = []
+ for data in tool_result:
+ tool_content.append({"type": "document", "document": {"data": json.dumps(data)}})
+ # Optional: add an "id" field in the "document" object, otherwise IDs are auto-generated
+ messages.append(
+ {"role": "tool", "tool_call_id": tc.id, "content": tool_content}
+ )
+```
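+
+To make the conversation state explicit, here is a rough sketch of what the `messages` list now contains, omitting the optional system message (the tool plan text and tool call ID below are placeholders; the actual values vary between runs):
+
+```python PYTHON
+# Illustrative snapshot of the state built up over Steps 1-3 (not meant to be run)
+messages_snapshot = [
+    {"role": "user", "content": "What's the weather in Toronto?"},
+    {
+        "role": "assistant",
+        "tool_plan": "I will search for the weather in Toronto.",  # placeholder text
+        "tool_calls": [...],  # the tool call objects returned in Step 2
+    },
+    {
+        "role": "tool",
+        "tool_call_id": "get_weather_776n8ctsgycn",  # the ID generated in Step 2
+        "content": [
+            {
+                "type": "document",
+                "document": {"data": '{"temperature": "20C"}'},
+            }
+        ],
+    },
+]
+```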
+
+### Step 4: Generate response and citations
+By this time, the tool calls have been executed and their results appended to the `messages` list.
+
+In this step, we call the Chat endpoint to generate the response to the user, again passing the parameters `model`, `messages` (which has now been updated with information from the tool calling and tool execution steps), and `tools`.
+
+The model generates a response to the user, grounded in the information provided by the tool.
+
+It also generates fine-grained citations, which are included out of the box with the Command family of models. Each citation contains the start and end indices of the cited span, the text of that span, and the sources (the tool outputs) it is grounded in. Here, the model cites the specific span in its response ("20C") that it takes from the tool result.
+
+```python PYTHON
+response = co.chat(
+    model="command-r-plus-08-2024",
+    messages=messages,
+    tools=tools
+)
+print(response.message.content[0].text)
+
+for citation in response.message.citations:
+    print(citation)
+```
+```mdx wordWrap
+It is 20C in Toronto.
+```
+```mdx wordWrap
+start=6 end=9 text='20C' sources=[ToolSource(type='tool', id='get_weather_776n8ctsgycn:0', tool_output={'temperature': '20C'})]
+```
+
+## Parallel tool calling
+[[TODO - demonstrate example of parallel tool calling]]
+
+## Directly answering
+[[TODO - describe the scenario where the LLM decides not to use tools but instead directly answers the user]]
+
+## Forcing tool usage
+[[TODO - describe the tool choice parameter]]
+
+## Response object
+
+### Tool calling step
+[[TODO - describe the response object for tool calling step]]
+
+### Response generation step
+[[TODO - describe the response object for response generation step]]
+
+## State management
+[[TODO - describe the state management via the messages list - single and multi turn. show examples]]
+
+### Single turn
+[[TODO - describe state management wrt single turn scenarios]]
+
+### Multi turn
+[[TODO - describe state management wrt multi turn scenarios]]
diff --git a/fern/pages/v2/tool-use/tool-use-parameter-types.mdx b/fern/pages/v2/tool-use/tool-use-parameter-types.mdx
new file mode 100644
index 00000000..f3f1c71d
--- /dev/null
+++ b/fern/pages/v2/tool-use/tool-use-parameter-types.mdx
@@ -0,0 +1,16 @@
+---
+title: "Tool use parameter types"
+slug: "v2/docs/tool-use-parameter-types"
+
+hidden: false
+description: >-
+ TBD
+image: "../../../assets/images/4a5325a-cohere_meta_image.jpg"
+keywords: "Cohere, text generation, LLMs, generative AI"
+
+createdAt: "Thu Feb 29 2024 18:05:29 GMT+0000 (Coordinated Universal Time)"
+updatedAt: "Tue Jun 18 2024 07:20:15 GMT+0000 (Coordinated Universal Time)"
+---
+
+### Parameter types
+[[TODO - copy and paste this page - https://docs.cohere.com/docs/parameter-types-in-tool-use]]
\ No newline at end of file
diff --git a/fern/pages/v2/tool-use/tool-use-streaming.mdx b/fern/pages/v2/tool-use/tool-use-streaming.mdx
new file mode 100644
index 00000000..0480a13c
--- /dev/null
+++ b/fern/pages/v2/tool-use/tool-use-streaming.mdx
@@ -0,0 +1,21 @@
+---
+title: "Streaming for tool use"
+slug: "v2/docs/tool-use-streaming"
+
+hidden: false
+description: >-
+ TBD
+image: "../../../assets/images/4a5325a-cohere_meta_image.jpg"
+keywords: "Cohere, text generation, LLMs, generative AI"
+
+createdAt: "Thu Feb 29 2024 18:05:29 GMT+0000 (Coordinated Universal Time)"
+updatedAt: "Tue Jun 18 2024 07:20:15 GMT+0000 (Coordinated Universal Time)"
+---
+## Overview
+[[TODO - describe streaming in the tool use context - e.g. event types]]
+
+## Tool calling step
+[[TODO - describe handling streaming objects for the tool calling step]]
+
+## Response generation step
+[[TODO - describe handling streaming objects for the response generation step]]
\ No newline at end of file
diff --git a/fern/pages/v2/tool-use/tool-use-structured-outputs.mdx b/fern/pages/v2/tool-use/tool-use-structured-outputs.mdx
new file mode 100644
index 00000000..8df3be57
--- /dev/null
+++ b/fern/pages/v2/tool-use/tool-use-structured-outputs.mdx
@@ -0,0 +1,14 @@
+---
+title: "Structured outputs for tool use"
+slug: "v2/docs/tool-use-structured-outputs"
+
+hidden: false
+description: >-
+ TBD
+image: "../../../assets/images/4a5325a-cohere_meta_image.jpg"
+keywords: "Cohere, text generation, LLMs, generative AI"
+
+createdAt: "Thu Feb 29 2024 18:05:29 GMT+0000 (Coordinated Universal Time)"
+updatedAt: "Tue Jun 18 2024 07:20:15 GMT+0000 (Coordinated Universal Time)"
+---
+TBD
\ No newline at end of file
diff --git a/fern/pages/v2/tool-use/tool-use-tool-definition.mdx b/fern/pages/v2/tool-use/tool-use-tool-definition.mdx
new file mode 100644
index 00000000..a654ba48
--- /dev/null
+++ b/fern/pages/v2/tool-use/tool-use-tool-definition.mdx
@@ -0,0 +1,35 @@
+---
+title: "Tool use - tool definition"
+slug: "v2/docs/tool-use-definition"
+
+hidden: false
+description: >-
+ TBD
+image: "../../../assets/images/4a5325a-cohere_meta_image.jpg"
+keywords: "Cohere, text generation, LLMs, generative AI"
+
+createdAt: "Thu Feb 29 2024 18:05:29 GMT+0000 (Coordinated Universal Time)"
+updatedAt: "Tue Jun 18 2024 07:20:15 GMT+0000 (Coordinated Universal Time)"
+---
+## Overview
+[[TODO - elaborate from the basic usage section - which gave the simplest possible way to define a tool]]
+
+## Tool creation
+
+### Example: Custom functions
+[[TODO - an example of a tool with basic working logic - e.g. sales database]]
+
+### Example: External services
+[[TODO - an example of a tool with basic working logic - e.g. web search]]
+
+## Tool schema
+
+### JSON schema
+[[TODO - overview of the JSON schema]]
+
+### Tool schema prompting
+[[TODO - how to write good tool schema e.g. descriptions, etc]]
+
+## Structured outputs
+
+[[TODO - how to use the strict_tools parameter]]
\ No newline at end of file
diff --git a/fern/v2.yml b/fern/v2.yml
index cd433169..7f1c50b7 100644
--- a/fern/v2.yml
+++ b/fern/v2.yml
@@ -80,16 +80,33 @@ navigation:
- page: Retrieval Augmented Generation (RAG)
path: pages/v2/text-generation/retrieval-augmented-generation-rag.mdx
- section: Tool Use
- path: pages/v2/text-generation/tools.mdx
contents:
- - page: Tool Use
- path: pages/v2/text-generation/tools/tool-use.mdx
- - page: Multi-step Tool Use (Agents)
- path: pages/v2/text-generation/tools/multi-step-tool-use.mdx
- - page: Implementing a Multi-Step Agent with Langchain
- path: pages/v2/text-generation/tools/implementing-a-multi-step-agent-with-langchain.mdx
- - page: Parameter Types in Tool Use
- path: pages/v2/text-generation/tools/parameter-types-in-tool-use.mdx
+ - page: Basic usage
+ path: pages/v2/tool-use/tool-use-overview.mdx
+ - page: Multi-step (agents)
+ path: pages/v2/tool-use/tool-use-multi-step.mdx
+ - page: Tool definition
+ path: pages/v2/tool-use/tool-use-tool-definition.mdx
+ - page: Parameter types
+ path: pages/v2/tool-use/tool-use-parameter-types.mdx
+ - page: Streaming
+ path: pages/v2/tool-use/tool-use-streaming.mdx
+ - page: Citations
+ path: pages/v2/tool-use/tool-use-citations.mdx
+ - page: FAQs
+ path: pages/v2/tool-use/tool-use-faqs.mdx
+
+ # - section: Tool Use
+ # path: pages/v2/text-generation/tools.mdx
+ # contents:
+ # - page: Tool Use
+ # path: pages/v2/text-generation/tools/tool-use.mdx
+ # - page: Multi-step Tool Use (Agents)
+ # path: pages/v2/text-generation/tools/multi-step-tool-use.mdx
+ # - page: Implementing a Multi-Step Agent with Langchain
+ # path: pages/v2/text-generation/tools/implementing-a-multi-step-agent-with-langchain.mdx
+ # - page: Parameter Types in Tool Use
+ # path: pages/v2/text-generation/tools/parameter-types-in-tool-use.mdx
- page: Tokens and Tokenizers
path: pages/v2/text-generation/tokens-and-tokenizers.mdx
- section: Prompt Engineering