From 97076e1fb504166d1293bcf17aac9c04a43e493d Mon Sep 17 00:00:00 2001
From: Jeffrey Ip
Date: Thu, 28 Dec 2023 00:53:37 +0800
Subject: [PATCH] Fix docs

---
 docs/docs/confident-ai-introduction.mdx | 2 +-
 docs/docs/evaluation-test-cases.mdx     | 2 +-
 docs/docs/getting-started.mdx           | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/docs/confident-ai-introduction.mdx b/docs/docs/confident-ai-introduction.mdx
index 7cc3a0bcb..bd6f69221 100644
--- a/docs/docs/confident-ai-introduction.mdx
+++ b/docs/docs/confident-ai-introduction.mdx
@@ -32,7 +32,7 @@ Continuous evaluation refers to the process of evaluating LLM applications in no
 />
 
-Everything in `deepeval` is already automatically integrated with Confident AI, including `deepeval`'s [custom metrics](evaluation-metrics#custom-metrics). To start using Confident AI with `deepeval`, simply login in the CLI:
+Everything in `deepeval` is already automatically integrated with Confident AI, including `deepeval`'s [custom metrics](metrics-custom). To start using Confident AI with `deepeval`, simply log in via the CLI:
 
 ```
 deepeval login
 ```
diff --git a/docs/docs/evaluation-test-cases.mdx b/docs/docs/evaluation-test-cases.mdx
index be1ff5c98..e21f8e3c8 100644
--- a/docs/docs/evaluation-test-cases.mdx
+++ b/docs/docs/evaluation-test-cases.mdx
@@ -82,7 +82,7 @@ test_case = LLMTestCase(
 
 An expected output is literally what you would want the ideal output to be. Note that this parameter is **optional** depending on the metric you want to evaluate.
 
-The expected output doesn't have to exactly match the actual output in order for your test case to pass since `deepeval` uses a variety of methods to evaluate non-deterministic LLM outputs. We'll go into more details [in the metrics section.](evaluation-metrics)
+The expected output doesn't have to exactly match the actual output in order for your test case to pass since `deepeval` uses a variety of methods to evaluate non-deterministic LLM outputs. We'll go into more detail [in the metrics section.](metrics-introduction)
 
 ```python
 # A hypothetical LLM application example
diff --git a/docs/docs/getting-started.mdx b/docs/docs/getting-started.mdx
index 811802a6a..a34f15793 100644
--- a/docs/docs/getting-started.mdx
+++ b/docs/docs/getting-started.mdx
@@ -34,7 +34,7 @@ In your newly created virtual environement, run:
 pip install -U deepeval
 ```
 
-You can also keep track of all evaluation results by logging into our [in all one evaluation platform](https://confident-ai.com), and use Confident AI's [proprietary LLM evaluation agent](evaluation-metrics#judgementalgpt) for evaluation:
+You can also keep track of all evaluation results by logging into our [all-in-one evaluation platform](https://confident-ai.com), and use Confident AI's [proprietary LLM evaluation agent](metrics-judgemental) for evaluation:
 
 ```console
 deepeval login
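
For reference, the `expected_output` parameter discussed in the second hunk is used along these lines. This is a minimal sketch with hypothetical input/output values; `expected_output` is optional and only consulted by metrics that compare against a reference answer:

```python
from deepeval.test_case import LLMTestCase

# Hypothetical example: expected_output need not match
# actual_output exactly for the test case to pass.
test_case = LLMTestCase(
    input="What if these shoes don't fit?",
    actual_output="We offer a 30-day full refund at no extra cost.",
    expected_output="You're eligible for a full refund within 30 days.",
)
```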