Commit 97076e1 ("Fix docs")
penguine-ip committed Dec 27, 2023 · 1 parent 87aa422
Showing 3 changed files with 3 additions and 3 deletions.
docs/docs/confident-ai-introduction.mdx (2 changes: 1 addition & 1 deletion)

````diff
@@ -32,7 +32,7 @@ Continuous evaluation refers to the process of evaluating LLM applications in no
   />
 </div>
 
-Everything in `deepeval` is already automatically integrated with Confident AI, including `deepeval`'s [custom metrics](evaluation-metrics#custom-metrics). To start using Confident AI with `deepeval`, simply login in the CLI:
+Everything in `deepeval` is already automatically integrated with Confident AI, including `deepeval`'s [custom metrics](metrics-custom). To start using Confident AI with `deepeval`, simply login in the CLI:
 
 ```
 deepeval login
````
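The changed line above points at `deepeval`'s custom-metrics docs. As a loose illustration of the idea only (plain Python, not deepeval's actual `BaseMetric` interface; the class and method names here are hypothetical), a custom metric is just an object that scores an output and compares the score against a threshold:

```python
class ExactMatchMetric:
    """Hypothetical custom metric: passes when the actual output
    equals the expected output, ignoring case and whitespace.
    (Illustrative stand-in, not deepeval's real base class.)"""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold
        self.score = None  # populated by measure()

    def measure(self, actual_output: str, expected_output: str) -> float:
        # Score 1.0 on an exact (normalized) match, else 0.0.
        match = actual_output.strip().lower() == expected_output.strip().lower()
        self.score = 1.0 if match else 0.0
        return self.score

    def is_successful(self) -> bool:
        # A metric "passes" when its score meets the threshold.
        return self.score is not None and self.score >= self.threshold


metric = ExactMatchMetric()
metric.measure("Paris", "paris")
```

The same shape (a `measure` method that records a score, plus a pass/fail check) is what lets an evaluation platform aggregate results across arbitrary user-defined metrics.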
docs/docs/evaluation-test-cases.mdx (2 changes: 1 addition & 1 deletion)

````diff
@@ -82,7 +82,7 @@ test_case = LLMTestCase(
 
 An expected output is literally what you would want the ideal output to be. Note that this parameter is **optional** depending on the metric you want to evaluate.
 
-The expected output doesn't have to exactly match the actual output in order for your test case to pass since `deepeval` uses a variety of methods to evaluate non-deterministic LLM outputs. We'll go into more details [in the metrics section.](evaluation-metrics)
+The expected output doesn't have to exactly match the actual output in order for your test case to pass since `deepeval` uses a variety of methods to evaluate non-deterministic LLM outputs. We'll go into more details [in the metrics section.](metrics-introduction)
 
 ```python
 # A hypothetical LLM application example
````
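The hunk above references a `LLMTestCase` with an optional `expected_output`. As a rough sketch of that shape (a stand-in dataclass for illustration; deepeval's real class lives in its own package and may differ), a test case is just a record of the input, the output the app actually produced, and optionally the ideal output:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class LLMTestCase:
    """Minimal stand-in mirroring the fields discussed in the diff above.
    (Illustrative only; not deepeval's actual class.)"""

    input: str
    actual_output: str
    # Optional, as the surrounding docs note: some metrics don't need it.
    expected_output: Optional[str] = None


test_case = LLMTestCase(
    input="What is the capital of France?",
    actual_output="Paris is the capital of France.",
    expected_output="Paris",
)
```

Because `expected_output` defaults to `None`, metrics that don't compare against a reference answer can still consume the same test-case object.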
docs/docs/getting-started.mdx (2 changes: 1 addition & 1 deletion)

````diff
@@ -34,7 +34,7 @@ In your newly created virtual environement, run:
 pip install -U deepeval
 ```
 
-You can also keep track of all evaluation results by logging into our [in all one evaluation platform](https://confident-ai.com), and use Confident AI's [proprietary LLM evaluation agent](evaluation-metrics#judgementalgpt) for evaluation:
+You can also keep track of all evaluation results by logging into our [in all one evaluation platform](https://confident-ai.com), and use Confident AI's [proprietary LLM evaluation agent](metrics-judgemental) for evaluation:
 
 ```console
 deepeval login
````