diff --git a/docs/docs/metrics-introduction.mdx b/docs/docs/metrics-introduction.mdx
index 7b2b62ca6..92a85dcdc 100644
--- a/docs/docs/metrics-introduction.mdx
+++ b/docs/docs/metrics-introduction.mdx
@@ -31,7 +31,7 @@ All of `deepeval`'s default metrics output a score between 0-1, and require a `m
 Our suggestion is to begin with custom LLM evaluated metrics (which frequently surpass and offer more versatility than leading NLP models), and gradually transition to `deepeval`'s default metrics when feasible. We recommend using default metrics as an optimization to your evaluation workflow because they are more cost-effective.
 :::
 
-## Executing a Metric
+## Measuring a Metric
 
 All metrics in `deepeval`, including [custom metrics that you create](metrics-custom):