chore: Update .codegen.json with commit hash of codegen and openapi spec #489

Open · wants to merge 3 commits into `main`
2 changes: 1 addition & 1 deletion .codegen.json
@@ -1 +1 @@
{ "engineHash": "22f85cc", "specHash": "f20ba3f", "version": "1.12.0" }
{ "engineHash": "5c674a3", "specHash": "06fc5f7", "version": "1.12.0" }
4 changes: 2 additions & 2 deletions README.md
@@ -2,7 +2,7 @@
<img src="https://github.com/box/sdks/blob/master/images/box-dev-logo.png" alt= “box-dev-logo” width="30%" height="50%">
</p>

-# Box Python SDK GENERATED
+# Box Python SDK Gen

[![Project Status](http://opensource.box.com/badges/active.svg)](http://opensource.box.com/badges)
![build](https://github.com/box/box-python-sdk-gen/actions/workflows/build.yml/badge.svg)
@@ -28,7 +28,7 @@ Embrace the new generation of Box SDKs and unlock the full potential of the Box
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->

-- [Box Python SDK GENERATED](#box-python-sdk-generated)
+- [Box Python SDK Gen](#box-python-sdk-gen)
- [Table of contents](#table-of-contents)
- [Installing](#installing)
- [Getting Started](#getting-started)
30 changes: 13 additions & 17 deletions box_sdk_gen/managers/ai.py
@@ -226,23 +226,19 @@ def create_ai_ask(
extra_headers: Optional[Dict[str, Optional[str]]] = None
) -> Optional[AiResponseFull]:
"""
-Sends an AI request to supported LLMs and returns an answer specifically focused on the user's question given the provided context.
-:param mode: The mode specifies if this request is for a single or multiple items. If you select `single_item_qa` the `items` array can have one element only. Selecting `multiple_item_qa` allows you to provide up to 25 items.
-:type mode: CreateAiAskMode
-:param prompt: The prompt provided by the client to be answered by the LLM. The prompt's length is limited to 10000 characters.
-:type prompt: str
-:param items: The items to be processed by the LLM, often files.
-
-**Note**: Box AI handles documents with text representations up to 1MB in size, or a maximum of 25 files, whichever comes first.
-If the file size exceeds 1MB, the first 1MB of text representation will be processed.
-If you set `mode` parameter to `single_item_qa`, the `items` array can have one element only.
-:type items: List[AiItemAsk]
-:param dialogue_history: The history of prompts and answers previously passed to the LLM. This provides additional context to the LLM in generating the response., defaults to None
-:type dialogue_history: Optional[List[AiDialogueHistory]], optional
-:param include_citations: A flag to indicate whether citations should be returned., defaults to None
-:type include_citations: Optional[bool], optional
-:param extra_headers: Extra headers that will be included in the HTTP request., defaults to None
-:type extra_headers: Optional[Dict[str, Optional[str]]], optional
+Sends an AI request to supported LLMs and returns an answer specifically focused on the user's question given the provided context.
+:param mode: Box AI handles text documents with text representations up to 1MB in size, or a maximum of 25 files, whichever comes first. If the text file size exceeds 1MB, the first 1MB of text representation will be processed. Box AI handles image documents with a resolution of 1024 x 1024 pixels, with a maximum of 5 images or 5 pages for multi-page images. If the number of image or image pages exceeds 5, the first 5 images or pages will be processed. If you set mode parameter to `single_item_qa`, the items array can have one element only. Currently Box AI does not support multi-modal requests. If both images and text are sent Box AI will only process the text.
+:type mode: CreateAiAskMode
+:param prompt: The prompt provided by the client to be answered by the LLM. The prompt's length is limited to 10000 characters.
+:type prompt: str
+:param items: The items to be processed by the LLM, often files.
+:type items: List[AiItemAsk]
+:param dialogue_history: The history of prompts and answers previously passed to the LLM. This provides additional context to the LLM in generating the response., defaults to None
+:type dialogue_history: Optional[List[AiDialogueHistory]], optional
+:param include_citations: A flag to indicate whether citations should be returned., defaults to None
+:type include_citations: Optional[bool], optional
+:param extra_headers: Extra headers that will be included in the HTTP request., defaults to None
+:type extra_headers: Optional[Dict[str, Optional[str]]], optional
"""
if extra_headers is None:
extra_headers = {}
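
For context, here is a minimal usage sketch of `create_ai_ask` as documented in the hunk above. It is not part of this PR; the developer-token auth, the top-level imports, and `AiItemAskTypeField` are assumptions based on the generated SDK's conventions, and the file ID is a placeholder.

```python
# Hypothetical usage sketch for create_ai_ask; not part of this PR.
from box_sdk_gen import (
    AiItemAsk,
    AiItemAskTypeField,
    BoxClient,
    BoxDeveloperTokenAuth,
    CreateAiAskMode,
)

# Assumption: a developer token is available; any supported auth method would work.
auth = BoxDeveloperTokenAuth(token="DEVELOPER_TOKEN_GOES_HERE")
client = BoxClient(auth=auth)

# single_item_qa: the items list may contain exactly one element.
response = client.ai.create_ai_ask(
    mode=CreateAiAskMode.SINGLE_ITEM_QA,
    prompt="Summarize the key points of this document.",
    items=[
        AiItemAsk(
            id="1234567890",  # placeholder file ID
            type=AiItemAskTypeField.FILE,
        )
    ],
)

# The method returns Optional[AiResponseFull], so guard before reading the answer.
if response is not None:
    print(response.answer)
```
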
24 changes: 10 additions & 14 deletions box_sdk_gen/schemas/ai_ask.py
@@ -37,20 +37,16 @@ def __init__(
**kwargs
):
"""
-:param mode: The mode specifies if this request is for a single or multiple items. If you select `single_item_qa` the `items` array can have one element only. Selecting `multiple_item_qa` allows you to provide up to 25 items.
-:type mode: AiAskModeField
-:param prompt: The prompt provided by the client to be answered by the LLM. The prompt's length is limited to 10000 characters.
-:type prompt: str
-:param items: The items to be processed by the LLM, often files.
-
-**Note**: Box AI handles documents with text representations up to 1MB in size, or a maximum of 25 files, whichever comes first.
-If the file size exceeds 1MB, the first 1MB of text representation will be processed.
-If you set `mode` parameter to `single_item_qa`, the `items` array can have one element only.
-:type items: List[AiItemAsk]
-:param dialogue_history: The history of prompts and answers previously passed to the LLM. This provides additional context to the LLM in generating the response., defaults to None
-:type dialogue_history: Optional[List[AiDialogueHistory]], optional
-:param include_citations: A flag to indicate whether citations should be returned., defaults to None
-:type include_citations: Optional[bool], optional
+:param mode: Box AI handles text documents with text representations up to 1MB in size, or a maximum of 25 files, whichever comes first. If the text file size exceeds 1MB, the first 1MB of text representation will be processed. Box AI handles image documents with a resolution of 1024 x 1024 pixels, with a maximum of 5 images or 5 pages for multi-page images. If the number of image or image pages exceeds 5, the first 5 images or pages will be processed. If you set mode parameter to `single_item_qa`, the items array can have one element only. Currently Box AI does not support multi-modal requests. If both images and text are sent Box AI will only process the text.
+:type mode: AiAskModeField
+:param prompt: The prompt provided by the client to be answered by the LLM. The prompt's length is limited to 10000 characters.
+:type prompt: str
+:param items: The items to be processed by the LLM, often files.
+:type items: List[AiItemAsk]
+:param dialogue_history: The history of prompts and answers previously passed to the LLM. This provides additional context to the LLM in generating the response., defaults to None
+:type dialogue_history: Optional[List[AiDialogueHistory]], optional
+:param include_citations: A flag to indicate whether citations should be returned., defaults to None
+:type include_citations: Optional[bool], optional
"""
super().__init__(**kwargs)
self.mode = mode
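
Since the hunk above only shows the docstring of `AiAsk.__init__`, here is a hedged sketch of constructing the request schema directly, including the optional `dialogue_history` and `include_citations` fields. It is not part of this PR; the top-level imports, the `AiAskModeField` enum members, and the `AiDialogueHistory` field names are assumptions inferred from the docstring wording.

```python
# Hypothetical sketch building the AiAsk request schema directly; not part of this PR.
from box_sdk_gen import (
    AiAsk,
    AiAskModeField,
    AiDialogueHistory,
    AiItemAsk,
    AiItemAskTypeField,
)

# Assumption: AiDialogueHistory carries a previous prompt/answer pair, matching
# the "history of prompts and answers" wording in the docstring above.
history = [
    AiDialogueHistory(
        prompt="What do these files describe?",
        answer="They describe the quarterly sales results.",
    )
]

# multiple_item_qa allows up to 25 items; include_citations requests citations.
ask = AiAsk(
    mode=AiAskModeField.MULTIPLE_ITEM_QA,
    prompt="Compare the revenue figures across these documents.",
    items=[
        AiItemAsk(id="1111111111", type=AiItemAskTypeField.FILE),  # placeholder IDs
        AiItemAsk(id="2222222222", type=AiItemAskTypeField.FILE),
    ],
    dialogue_history=history,
    include_citations=True,
)
```
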
4 changes: 2 additions & 2 deletions docs/ai.md
@@ -35,11 +35,11 @@ client.ai.create_ai_ask(
### Arguments

- mode `CreateAiAskMode`
-  - The mode specifies if this request is for a single or multiple items. If you select `single_item_qa` the `items` array can have one element only. Selecting `multiple_item_qa` allows you to provide up to 25 items.
+  - Box AI handles text documents with text representations up to 1MB in size, or a maximum of 25 files, whichever comes first. If the text file size exceeds 1MB, the first 1MB of text representation will be processed. Box AI handles image documents with a resolution of 1024 x 1024 pixels, with a maximum of 5 images or 5 pages for multi-page images. If the number of image or image pages exceeds 5, the first 5 images or pages will be processed. If you set mode parameter to `single_item_qa`, the items array can have one element only. Currently Box AI does not support multi-modal requests. If both images and text are sent Box AI will only process the text.
- prompt `str`
- The prompt provided by the client to be answered by the LLM. The prompt's length is limited to 10000 characters.
- items `List[AiItemAsk]`
-  - The items to be processed by the LLM, often files. **Note**: Box AI handles documents with text representations up to 1MB in size, or a maximum of 25 files, whichever comes first. If the file size exceeds 1MB, the first 1MB of text representation will be processed. If you set `mode` parameter to `single_item_qa`, the `items` array can have one element only.
+  - The items to be processed by the LLM, often files.
- dialogue_history `Optional[List[AiDialogueHistory]]`
- The history of prompts and answers previously passed to the LLM. This provides additional context to the LLM in generating the response.
- include_citations `Optional[bool]`
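
To illustrate the multiple-item mode and the citations flag described in the argument list above, a short sketch of a multi-item call, under the same assumptions as the earlier sketches; treating `response.citations` as an attribute of `AiResponseFull` is also an assumption.

```python
# Hypothetical multi-item call with citations; not part of this PR.
from box_sdk_gen import (
    AiItemAsk,
    AiItemAskTypeField,
    BoxClient,
    BoxDeveloperTokenAuth,
    CreateAiAskMode,
)

client = BoxClient(auth=BoxDeveloperTokenAuth(token="DEVELOPER_TOKEN_GOES_HERE"))

# multiple_item_qa accepts up to 25 items.
response = client.ai.create_ai_ask(
    mode=CreateAiAskMode.MULTIPLE_ITEM_QA,
    prompt="Which of these contracts expires first?",
    items=[
        AiItemAsk(id="1111111111", type=AiItemAskTypeField.FILE),  # placeholder IDs
        AiItemAsk(id="2222222222", type=AiItemAskTypeField.FILE),
    ],
    include_citations=True,
)
if response is not None:
    print(response.answer)
    print(response.citations)  # assumption: the full response surfaces citations
```
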