AI Assistant docs updates for 8.16 (#4326)
* initial structure change to add settings and search connectors

* enterprise search requirement added

* search connectors explained

* fixed image link

* extra comma removed from json example

* ai settings moved to the end

* tbd content added

* screenshots deleted

* some you-cans removed

* connectors distinction included

* minor updates

* AI Assistant icon added

* AI Assistant icon added

* AI Assistant icon added

* search connectors setup added

* passive voice and to_do_this update

* reindex method changes cancelled

* override search connector indices list

* reindex method removed

* Update docs/en/observability/observability-ai-assistant.asciidoc

Co-authored-by: Arianna Laudazzi <[email protected]>


* missing link added

(cherry picked from commit 20cc3dc)

# Conflicts:
#	docs/en/observability/observability-ai-assistant.asciidoc
eedugon authored and mergify[bot] committed Nov 7, 2024
1 parent 74b6b27 commit 302805b
Showing 4 changed files with 88 additions and 55 deletions.
Binary file not shown.
1 change: 1 addition & 0 deletions docs/en/observability/images/icons/ai-assistant-bw.svg
1 change: 1 addition & 0 deletions docs/en/observability/images/icons/ai-assistant.svg
141 changes: 86 additions & 55 deletions docs/en/observability/observability-ai-assistant.asciidoc
The AI assistant requires the following:

* {stack} version 8.9 and later.
* An https://www.elastic.co/pricing[Enterprise subscription].
* An account with a third-party generative AI provider that preferably supports function calling.
If your AI provider does not support function calling, you can configure AI Assistant settings under **Stack Management** to simulate function calling, but this might affect performance.
+
Refer to the {kibana-ref}/action-types.html[connector documentation] for your provider to learn about supported and default models.

* An {enterprise-search-ref}/server.html[Enterprise Search] server if {enterprise-search-ref}/connectors.html[search connectors] are used to populate external data into the knowledge base.
* The knowledge base requires a 4 GB {ml} node.

[IMPORTANT]
To set up the AI Assistant:
any knowledge base articles you created before 8.12 will have to be reindexed or upgraded before they can be used.
Knowledge base articles created before 8.12 use ELSER v1.
In 8.12, knowledge base articles must use ELSER v2.
Options include:

* Clear all old knowledge base articles manually and reindex them.
* Upgrade all knowledge base articles indexed with ELSER v1 to ELSER v2 using a https://github.com/elastic/elasticsearch-labs/blob/main/notebooks/model-upgrades/upgrading-index-to-use-elser.ipynb[Python script].
The AI Assistant uses {ml-docs}/ml-nlp-elser.html[ELSER], Elastic's semantic search model.
NOTE: Your AI provider may collect telemetry when using the AI Assistant. Contact your AI provider for information on how data is collected.
Add data to the knowledge base with one or more of the following methods:
* <<obs-ai-kb-ui>>, available on the <<obs-ai-settings>> page.
* <<obs-ai-search-connectors>>
You can also add information to the knowledge base by asking the AI Assistant to remember something while chatting (for example, "remember this for next time"). The assistant will create a summary of the information and add it to the knowledge base.
[discrete]
[[obs-ai-kb-ui]]
=== Use the knowledge base UI
To add external data to the knowledge base in {kib}:
To add external data to the knowledge base in {kib}:
----
{
"id": "a_unique_human_readable_id",
"text": "Contents of item",
"text": "Contents of item"
}
----
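To confirm that an entry was stored, you can search the assistant's internal knowledge base index from *Management* -> *Dev Tools*. This is a minimal sketch, not an official procedure: `.kibana-observability-ai-assistant-kb-*` is an internal system index whose name and mappings may change between versions, and you should replace the query text with a keyword from your own entry.

[source,console]
----
GET .kibana-observability-ai-assistant-kb-*/_search
{
  "size": 1,
  "query": {
    "match": { "text": "a_unique_human_readable_id" }
  }
}
----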
[discrete]
[[obs-ai-search-connectors]]
=== Use search connectors
[TIP]
====
The {enterprise-search-ref}/connectors.html[search connectors] described in this section differ from the {kibana-ref}/action-types.html[Stack Management -> Connectors] configured during the <<obs-ai-set-up, AI Assistant setup>>.
Search connectors are only needed to import external data into the knowledge base of the AI Assistant, while the stack connector to the LLM is required for the AI Assistant to work.
====
{enterprise-search-ref}/connectors.html[Connectors] allow you to index content from external sources, making it available to the AI Assistant. This can greatly improve the relevance of the AI Assistant's responses. Data can be integrated from sources such as GitHub, Confluence, Google Drive, Jira, AWS S3, Microsoft Teams, Slack, and more.

These connectors are managed under *Search* -> *Content* -> *Connectors* in {kib}. They are outside the {observability} solution and require an {enterprise-search-ref}/server.html[Enterprise Search] server connected to the Elastic Stack.

By default, the AI Assistant queries all search connector indices. To override this behavior and customize which indices are queried, adjust the *Search connector index pattern* setting on the <<obs-ai-settings>> page. This allows precise control over which data sources are included in the AI Assistant's knowledge base.
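For example, assuming a connector attached to an index named `search-github` (a hypothetical name — use the index you attached to your own connector), a pattern like the following would limit the assistant's knowledge base queries to that connector's data:

[source,text]
----
search-github*
----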
To create a connector and make its content available to the AI Assistant knowledge base, follow these steps:
. In the {kib} UI, go to *Search* -> *Content* -> *Connectors* and follow the instructions to create a new connector.
+
[NOTE]
====
If your {kib} space doesn't include the *Search* solution, create the connector from a different space or change your space *Solution view* setting to `Classic`.
====
+
For example, if you create a {enterprise-search-ref}/connectors-github.html[GitHub native connector], you have to set a name, attach it to a new or existing index, add your personal access token, and include the list of repositories to synchronize.
+
Learn more about configuring and {enterprise-search-ref}/connectors-usage.html[using connectors] in the Enterprise Search documentation.
+
. Create a pipeline and process the data with ELSER.
+
To create the embeddings needed by the AI Assistant (weights and tokens into a sparse vector field), you have to create an *ML Inference Pipeline*:
+
.. Open the previously created connector and select the *Pipelines* tab.
.. Select the *Copy and customize* button in the *Unlock your custom pipelines* box.
.. Select the *Add Inference Pipeline* button in the *Machine Learning Inference Pipelines* box.
.. Select the *ELSER (Elastic Learned Sparse EncodeR)* ML model to add the necessary embeddings to the data.
.. Select the fields that need to be evaluated as part of the inference pipeline.
.. Test and save the inference pipeline and the overall pipeline.
. Sync the data.
+
Once the pipeline is set up, perform a *Full Content Sync* of the connector. The inference pipeline will process the data as follows:
+
* As data comes in, ELSER is applied to the data, and embeddings (weights and tokens into a sparse vector field) are added to capture semantic meaning and context of the data.
* When you look at the documents that are ingested, you can see how the weights and tokens are added to the `predicted_value` field in the documents.
. Check that the AI Assistant can use the index (optional).
+
Ask the AI Assistant something related to the indexed data.
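You can also inspect a synced document directly to verify that the inference pipeline ran. A minimal sketch, assuming a connector index named `search-github` (hypothetical): the ELSER weights and tokens appear in a `predicted_value` field nested inside the pipeline's inference output, so returning the `ml.inference` object of one document is usually enough to confirm the embeddings exist. The exact field path depends on how the inference pipeline was configured.

[source,console]
----
GET search-github/_search
{
  "size": 1,
  "_source": ["ml.inference"]
}
----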
[discrete]
[[obs-ai-interact]]
== Interact with the AI Assistant
Chat with the AI Assistant or interact with contextual insights located throughout {observability}.
Check the following sections for more on interacting with the AI Assistant.
TIP: After every answer the LLM provides, let us know if the answer was helpful.
Your feedback helps us improve the AI Assistant!
[[obs-ai-chat]]
=== Chat with the assistant
Select the *AI Assistant* icon (image:images/icons/ai-assistant.svg[AI Assistant icon]) in the upper-right corner of any {observability} application to start the chat:
This opens the AI Assistant flyout, where you can ask the assistant questions about your instance:
beta::[]
The AI Assistant uses functions to include relevant context in the chat conversation through text, data, and visual components. Both you and the AI Assistant can suggest functions. You can also edit the AI Assistant's function suggestions and inspect function responses.
Main functions:
[horizontal]
`alerts`:: Get alerts for {observability}.
Clicking a prompt generates a message specific to that log entry:
[role="screenshot"]
image::images/obs-ai-logs.gif[Observability AI assistant example, 75%]
Continue a conversation from a contextual prompt by clicking *Start chat* to open the AI Assistant chat.
[discrete]
[[obs-ai-connector]]
=== Add the AI Assistant connector to alerting workflows
Use the {kibana-ref}/obs-ai-assistant-action-type.html[Observability AI Assistant connector] to add AI-generated insights and custom actions to your alerting workflows as follows:
. <<create-alerts-rules,Create (or edit) an alerting rule>> and specify the conditions that must be met for the alert to fire.
. Under **Actions**, select the **Observability AI Assistant** connector type.
and also include other active alerts that may be related.
As a last step, you can ask the assistant to trigger an action,
such as sending the report (or any other message) to a Slack webhook.
NOTE: Currently only Slack, email, Jira, PagerDuty, or webhook actions are supported.
Additional actions will be added in the future.
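As an illustration, the message passed to the Observability AI Assistant connector might contain instructions like the following sketch (the Slack connector name is hypothetical, and the exact wording is up to you):

[source,text]
----
When an alert fires, create a report that explains the likely root
cause, list any other active alerts that may be related, and send
the report to the "ops-notifications" Slack webhook.
----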
When the alert fires, contextual details about the event—such as when the alert fired,
The Observability AI Assistant connector is called when the alert fires and when it recovers.
To learn more about alerting, actions, and connectors, refer to <<create-alerts>>.
[discrete]
[[obs-ai-settings]]
== AI Assistant Settings
You can access the AI Assistant Settings page:
* From *{stack-manage-app}* -> *Kibana* -> *AI Assistants* -> *Elastic AI Assistant for Observability*.
* From the *More actions* menu inside the AI Assistant window.
The AI Assistant Settings page contains the following tabs:
* *Settings*: Configures the main AI Assistant settings, which are explained directly within the interface.
* *Knowledge base*: Manages <<obs-ai-kb-ui,knowledge base entries>>.
* *Search Connectors*: Provides a link to the {kib} *Search* -> *Content* -> *Connectors* UI for connector configuration.
[discrete]
[[obs-ai-known-issues]]
== Known issues
Most LLMs have a set number of tokens they can manage in a single conversation.
When you reach the token limit, the LLM will throw an error, and Elastic will display a "Token limit reached" error in Kibana.
The exact number of tokens that the LLM can support depends on the LLM provider and model you're using.
If you use an OpenAI connector, monitor token usage in the **OpenAI Token Usage** dashboard.
For more information, refer to the {kibana-ref}/openai-action-type.html#openai-connector-token-dashboard[OpenAI Connector documentation].
