From 6b4e6e9ea7cfb192f3cae9f97ac5528e128560a7 Mon Sep 17 00:00:00 2001
From: Jeffrey Ip <143328635+penguine-ip@users.noreply.github.com>
Date: Fri, 12 Jan 2024 09:55:01 -0800
Subject: [PATCH] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 1e4afff12..62fcfc68a 100644
--- a/README.md
+++ b/README.md
@@ -120,7 +120,7 @@ deepeval test run test_chatbot.py
 
 Alternatively, you can evaluate without Pytest, which is more suited for a notebook environment.
 
 ```python
-from deepeval import evalate
+from deepeval import evaluate
 from deepeval.metrics import HallucinationMetric
 from deepeval.test_case import LLMTestCase
@@ -135,7 +135,7 @@ test_case = LLMTestCase(
     actual_output=actual_output,
     context=context
 )
-evalate([test_case], [hallucination_metric])
+evaluate([test_case], [hallucination_metric])
 ```
 
 ## Evaluting a Dataset / Test Cases in Bulk