diff --git a/README.md b/README.md
index b7810b1b1..caaab076b 100644
--- a/README.md
+++ b/README.md
@@ -18,7 +18,7 @@
-**DeepEval** is a simple-to-use, open-source evaluation framework for LLM applications. It is similar to Pytest but specialized for unit testing LLM applications. DeepEval evaluates performance based on metrics such as factual consistency, accuracy, answer relevancy, etc., using LLMs and various other NLP models. It's a production-ready alternative to RAGAS .
+**DeepEval** is a simple-to-use, open-source evaluation framework for LLM applications. It is similar to Pytest but specialized for unit testing LLM applications. DeepEval evaluates performance based on metrics such as hallucination, answer relevancy, RAGAS, etc., using LLMs and various other NLP models **locally on your machine**.
Whether your application is implemented via RAG or fine-tuning, LangChain or LlamaIndex, DeepEval has you covered. With it, you can easily determine the optimal hyperparameters to improve your RAG pipeline, prevent prompt drifting, or even transition from OpenAI to hosting your own Llama2 with confidence.
@@ -26,14 +26,27 @@ Whether your application is implemented via RAG or fine-tuning, LangChain or Lla
# Features
-- Large variety of ready-to-use evaluation metrics, ranging from LLM evaluated (G-Eval) to metrics computed via statistical methods or NLP models.
+- Large variety of ready-to-use evaluation metrics powered by LLMs, statistical methods, or NLP models that run **locally on your machine**:
+ - Hallucination
+ - Answer Relevancy
+ - RAGAS
+ - G-Eval
+ - Toxicity
+ - Bias
+ - etc.
- Easily create your own custom metrics that are automatically integrated with DeepEval's ecosystem by inheriting DeepEval's base metric class (see the sketch below).
-- Evaluate your entire dataset in bulk using fewer than 20 lines of Python code.
-- [Integrated with Confident AI](https://confident-ai.com) for instant observability into evaluation results and hyperparameter comparisons (such as prompt templates and model version used).
+- Evaluate your entire dataset in bulk, in parallel, using fewer than 20 lines of Python code.
+- [Automatically integrated with Confident AI](https://app.confident-ai.com) for continuous evaluation throughout the lifetime of your LLM (app):
+  - log evaluation results and analyze metric passes / fails
+  - compare and pick the optimal hyperparameters (e.g. prompt templates, chunk size, models used) based on evaluation results
+  - debug evaluation results via LLM traces
+  - manage evaluation test cases / datasets in one place
+  - track events to identify live LLM responses in production
+  - add production events to existing evaluation datasets to strengthen evals over time
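+
+As mentioned above, custom metrics only need to subclass `BaseMetric`; here's a minimal sketch (the length-based scoring is purely illustrative):
+
+```python
+from deepeval.metrics import BaseMetric
+from deepeval.test_case import LLMTestCase
+
+class LengthMetric(BaseMetric):
+    # Illustrative only: passes if the output is at least `minimum_score` characters long
+    def __init__(self, minimum_score: int = 10):
+        self.minimum_score = minimum_score
+
+    def measure(self, test_case: LLMTestCase) -> float:
+        self.score = len(test_case.actual_output)
+        self.success = self.score >= self.minimum_score
+        return self.score
+
+    def is_successful(self) -> bool:
+        return self.success
+
+    @property
+    def __name__(self):
+        return "Length"
+```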
-# Getting Started 🚀
+# 🚀 Getting Started 🚀
Let's pretend your LLM application is a customer support chatbot; here's how DeepEval can help test what you've built.
@@ -43,9 +56,9 @@ Let's pretend your LLM application is a customer support chatbot; here's how Dee
pip install -U deepeval
```
-## [Optional] Create an account
+## Create an account (highly recommended)
-Creating an account on our platform will allow you to log test results, enabling easy tracking of changes and performances over iterations. This step is optional, and you can run test cases even without logging in, but we highly recommend giving it a try.
+Although optional, creating an account on our platform will allow you to log test results, enabling easy tracking of changes and performance over iterations. You can still run test cases without logging in, but we highly recommend giving it a try.
To login, run:
@@ -67,9 +80,9 @@ Open `test_chatbot.py` and write your first test case using DeepEval:
```python
import pytest
+from deepeval import assert_test
from deepeval.metrics import HallucinationMetric
from deepeval.test_case import LLMTestCase
-from deepeval.evaluator import assert_test
def test_case():
input = "What if these shoes don't fit?"
@@ -98,9 +111,61 @@ deepeval test run test_chatbot.py
-# View results on our platform
+## Evaluating a Dataset / Test Cases in Bulk
-We offer a [free web platform](https://app.confident-ai.com) for you to log and view all test results from DeepEval test runs. Our platform allows you to quickly draw insights on how your metrics are improving with each test run and to determine the optimal parameters (such as prompt templates, models, retrieval pipeline) for your specific LLM application.
+In DeepEval, a dataset is simply a collection of test cases. Here is how you can evaluate things in bulk:
+
+```python
+import pytest
+from deepeval import assert_test
+from deepeval.metrics import HallucinationMetric, AnswerRelevancyMetric
+from deepeval.test_case import LLMTestCase
+from deepeval.dataset import EvaluationDataset
+
+first_test_case = LLMTestCase(input="...", actual_output="...", context=["..."])
+second_test_case = LLMTestCase(input="...", actual_output="...", context=["..."])
+
+dataset = EvaluationDataset(test_cases=[first_test_case, second_test_case])
+
+@pytest.mark.parametrize(
+ "test_case",
+ dataset,
+)
+def test_customer_chatbot(test_case: LLMTestCase):
+ hallucination_metric = HallucinationMetric(minimum_score=0.3)
+ answer_relevancy_metric = AnswerRelevancyMetric(minimum_score=0.5)
+ assert_test(test_case, [hallucination_metric, answer_relevancy_metric])
+```
+
+```bash
+# Run this in the CLI. You can also add the optional -n flag to run tests in parallel
+deepeval test run test_chatbot.py -n 4
+```
+
+
+
+Alternatively, although we recommend using `deepeval test run`, you can evaluate a dataset/test cases without using pytest:
+
+```python
+from deepeval import evaluate
+...
+
+evaluate(dataset, [hallucination_metric])
+# or
+dataset.evaluate([hallucination_metric])
+```
+
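+Datasets can also be pushed to and pulled from Confident AI by alias. Here's a minimal sketch of the new `push` / `pull` methods (the alias below is a placeholder, and both calls require `deepeval login`):
+
+```python
+from deepeval.dataset import EvaluationDataset
+
+dataset = EvaluationDataset(test_cases=[first_test_case, second_test_case])
+dataset.push(alias="My Dataset")  # uploads test cases as goldens
+
+new_dataset = EvaluationDataset()
+new_dataset.pull(alias="My Dataset")  # downloads goldens as test cases
+```
+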
+# View results on Confident AI
+
+We offer a [free web platform](https://app.confident-ai.com) for you to:
+
+1. Log and view all test results / metrics data from DeepEval's test runs.
+2. Debug evaluation results via LLM traces.
+3. Compare and pick the optimal hyperparameters (prompt templates, models, chunk size, etc.).
+4. Create, manage, and centralize your evaluation datasets.
+5. Track events in production and augment your evaluation dataset for continuous evaluation.
+
+Everything on Confident AI, including how to use it, is documented [here](https://docs.confident-ai.com/docs/confident-ai-introduction).
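+
+For example, tracking a live event in production (point 5 above) is a single call to the new `track` function; all argument values below are placeholders:
+
+```python
+import deepeval
+
+deepeval.track(
+    event_name="Chatbot",
+    model="gpt-4",
+    input="What if these shoes don't fit?",
+    output="We offer a 30-day full refund at no extra cost.",
+    distinct_id="user-1234",  # optional: identify the end user
+)
+```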
To begin, login from the CLI:
@@ -118,7 +183,7 @@ deepeval test run test_chatbot.py
You should see a link displayed in the CLI once the test has finished running. Paste it into your browser to view the results!
-![ok](https://d2lsxfc3p6r9rv.cloudfront.net/test-summary.png)
+![Test cases on Confident AI](https://d2lsxfc3p6r9rv.cloudfront.net/confident-test-cases.png)
@@ -133,9 +198,9 @@ Please read [CONTRIBUTING.md](https://github.com/confident-ai/deepeval/blob/main
Features:
- [x] Implement G-Eval
-- [ ] Referenceless Evaluation
-- [ ] Production Evaluation & Logging
-- [ ] Evaluation Dataset Creation
+- [x] Referenceless Evaluation
+- [x] Production Evaluation & Logging
+- [x] Evaluation Dataset Creation
Integrations:
diff --git a/deepeval/__init__.py b/deepeval/__init__.py
index 8bc2f0cc2..6dddcb609 100644
--- a/deepeval/__init__.py
+++ b/deepeval/__init__.py
@@ -6,8 +6,16 @@
from ._version import __version__
from .decorators.hyperparameters import set_hyperparameters
-
-__all__ = ["set_hyperparameters"]
+from deepeval.event import track
+from deepeval.evaluate import evaluate, run_test, assert_test
+
+__all__ = [
+ "set_hyperparameters",
+ "track",
+ "evaluate",
+ "run_test",
+ "assert_test",
+]
def compare_versions(version1, version2):
diff --git a/deepeval/_version.py b/deepeval/_version.py
index 0be336dac..e9913d66a 100644
--- a/deepeval/_version.py
+++ b/deepeval/_version.py
@@ -1 +1 @@
-__version__: str = "0.20.24"
+__version__: str = "0.20.29"
diff --git a/deepeval/api.py b/deepeval/api.py
index b9c703192..ece563393 100644
--- a/deepeval/api.py
+++ b/deepeval/api.py
@@ -18,8 +18,9 @@
class Endpoints(Enum):
- CREATE_DATASET_ENDPOINT = "/v1/dataset"
- CREATE_TEST_RUN_ENDPOINT = "/v1/test-run"
+ DATASET_ENDPOINT = "/v1/dataset"
+ TEST_RUN_ENDPOINT = "/v1/test-run"
+ EVENT_ENDPOINT = "/v1/event"
class Api:
@@ -132,7 +133,6 @@ def _api_request(
data=None,
):
"""Generic HTTP request method with error handling."""
-
url = f"{self.base_api_url}/{endpoint}"
res = self._http_request(
method,
@@ -154,30 +154,19 @@ def _api_request(
except ValueError:
# Some endpoints only return 'OK' message without JSON
return json
- elif (
- res.status_code == 409
- and "task" in endpoint
- and body.get("unique_id")
- ):
- retry_history = res.raw.retries.history
- # Example RequestHistory tuple
- # RequestHistory(method='POST',
- # url='/v1/task/imageannotation',
- # error=None,
- # status=409,
- # redirect_location=None)
- if retry_history != ():
- # See if the first retry was a 500 or 503 error
- if retry_history[0][3] >= 500:
- uuid = body["unique_id"]
- newUrl = f"{self.base_api_url}/tasks?unique_id={uuid}"
- # grab task from api
- newRes = self._http_request(
- "GET", newUrl, headers=headers, auth=auth
- )
- json = newRes.json()["docs"][0]
+ elif res.status_code == 409:
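+            # 409 Conflict: the resource already exists; ask the user whether to overwrite, then retry the request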
+ message = res.json().get("message", "Conflict occurred.")
+
+ # Prompt user for input
+ user_input = input(f"{message} [y/N]: ").strip().lower()
+ if user_input == "y":
+ body["overwrite"] = True
+ return self._api_request(
+ method, endpoint, headers, auth, params, body, files, data
+ )
else:
- self._raise_on_response(res)
+ print("Aborted.")
+ return None
else:
self._raise_on_response(res)
return json
diff --git a/deepeval/check/__init__.py b/deepeval/check/__init__.py
new file mode 100644
index 000000000..2d1a120b5
--- /dev/null
+++ b/deepeval/check/__init__.py
@@ -0,0 +1 @@
+from .check import check
diff --git a/deepeval/check/benchmarks.py b/deepeval/check/benchmarks.py
new file mode 100644
index 000000000..6a134a821
--- /dev/null
+++ b/deepeval/check/benchmarks.py
@@ -0,0 +1,6 @@
+from enum import Enum
+
+
+class BenchmarkType(Enum):
+ HELM = "Stanford HELM"
+ LM_HARNESS = "LM Harness"
diff --git a/deepeval/check/check.py b/deepeval/check/check.py
new file mode 100644
index 000000000..61fe2ab6c
--- /dev/null
+++ b/deepeval/check/check.py
@@ -0,0 +1,21 @@
+from typing import Union
+
+from .benchmarks import BenchmarkType
+
+
+def check(benchmark: Union[str, BenchmarkType]):
+ if benchmark == BenchmarkType.HELM:
+ handleHELMCheck()
+    elif benchmark == BenchmarkType.LM_HARNESS:
+ handleLMHarnessCheck()
+ else:
+ # catch all for custom benchmark checks
+ pass
+
+
+def handleHELMCheck():
+ pass
+
+
+def handleLMHarnessCheck():
+ pass
diff --git a/deepeval/dataset/api.py b/deepeval/dataset/api.py
index 4fcbaeb37..2b474948c 100644
--- a/deepeval/dataset/api.py
+++ b/deepeval/dataset/api.py
@@ -1,5 +1,5 @@
from pydantic import BaseModel, Field
-from typing import Optional, List
+from typing import Optional, List, Union
class Golden(BaseModel):
@@ -11,8 +11,13 @@ class Golden(BaseModel):
class APIDataset(BaseModel):
alias: str
+ overwrite: bool
goldens: Optional[List[Golden]] = Field(default=None)
class CreateDatasetHttpResponse(BaseModel):
link: str
+
+
+class DatasetHttpResponse(BaseModel):
+ goldens: List[Golden]
diff --git a/deepeval/dataset/dataset.py b/deepeval/dataset/dataset.py
index 2d99932fe..75e028e86 100644
--- a/deepeval/dataset/dataset.py
+++ b/deepeval/dataset/dataset.py
@@ -8,18 +8,27 @@
from deepeval.metrics import BaseMetric
from deepeval.test_case import LLMTestCase
-from deepeval.evaluator import evaluate
from deepeval.api import Api, Endpoints
-from deepeval.dataset.utils import convert_test_cases_to_goldens
-from deepeval.dataset.api import APIDataset, CreateDatasetHttpResponse
+from deepeval.dataset.utils import (
+ convert_test_cases_to_goldens,
+ convert_goldens_to_test_cases,
+)
+from deepeval.dataset.api import (
+ APIDataset,
+ CreateDatasetHttpResponse,
+ Golden,
+ DatasetHttpResponse,
+)
@dataclass
class EvaluationDataset:
test_cases: List[LLMTestCase]
+ goldens: List[Golden]
def __init__(self, test_cases: List[LLMTestCase] = []):
self.test_cases = test_cases
+ self.goldens = []
def add_test_case(self, test_case: LLMTestCase):
self.test_cases.append(test_case)
@@ -28,7 +37,7 @@ def __iter__(self):
return iter(self.test_cases)
def evaluate(self, metrics: List[BaseMetric]):
- from deepeval.evaluator import evaluate
+ from deepeval import evaluate
return evaluate(self.test_cases, metrics)
@@ -234,29 +243,45 @@ def push(self, alias: str):
)
if os.path.exists(".deepeval"):
goldens = convert_test_cases_to_goldens(self.test_cases)
- body = APIDataset(alias=alias, goldens=goldens).model_dump(
- by_alias=True, exclude_none=True
- )
+ body = APIDataset(
+ alias=alias, overwrite=False, goldens=goldens
+ ).model_dump(by_alias=True, exclude_none=True)
api = Api()
result = api.post_request(
- endpoint=Endpoints.CREATE_DATASET_ENDPOINT.value,
+ endpoint=Endpoints.DATASET_ENDPOINT.value,
body=body,
)
- response = CreateDatasetHttpResponse(
- link=result["link"],
- )
- link = response.link
- console = Console()
- console.print(
- "✅ Dataset pushed to Confidnet AI! View on "
- f"[link={link}]{link}[/link]"
- )
- # webbrowser.open(link)
+ if result:
+ response = CreateDatasetHttpResponse(
+ link=result["link"],
+ )
+ link = response.link
+ console = Console()
+ console.print(
+ "✅ Dataset successfully pushed to Confidnet AI! View at "
+ f"[link={link}]{link}[/link]"
+ )
+ webbrowser.open(link)
else:
raise Exception(
"To push dataset to Confident AI, run `deepeval login`"
)
- # TODO
def pull(self, alias: str):
- pass
+ if os.path.exists(".deepeval"):
+ api = Api()
+ result = api.get_request(
+ endpoint=Endpoints.DATASET_ENDPOINT.value,
+ params={"alias": alias},
+ )
+ response = DatasetHttpResponse(
+ goldens=result["goldens"],
+ )
+ self.goldens.extend(response.goldens)
+
+ # TODO: make this conversion at evaluation time instead
+ self.test_cases.extend(convert_goldens_to_test_cases(self.goldens))
+ else:
+ raise Exception(
+ "Run `deepeval login` to pull dataset from Confident AI"
+ )
diff --git a/deepeval/dataset/utils.py b/deepeval/dataset/utils.py
index 9f59e3103..8ebf2e294 100644
--- a/deepeval/dataset/utils.py
+++ b/deepeval/dataset/utils.py
@@ -17,3 +17,16 @@ def convert_test_cases_to_goldens(
}
goldens.append(Golden(**golden))
return goldens
+
+
+def convert_goldens_to_test_cases(goldens: List[Golden]) -> List[LLMTestCase]:
+ test_cases = []
+ for golden in goldens:
+ test_case = LLMTestCase(
+ input=golden.input,
+ actual_output=golden.actual_output,
+ expected_output=golden.expected_output,
+ context=golden.context,
+ )
+ test_cases.append(test_case)
+ return test_cases
diff --git a/deepeval/evaluator.py b/deepeval/evaluate.py
similarity index 96%
rename from deepeval/evaluator.py
rename to deepeval/evaluate.py
index 11c3a1056..5f798edc7 100644
--- a/deepeval/evaluator.py
+++ b/deepeval/evaluate.py
@@ -130,6 +130,9 @@ def print_test_result(test_result: TestResult):
print(
f" - ✅ {metric.__name__} (score: {metric.score}, minimum_score: {metric.minimum_score})"
)
+ if metric.score_metadata:
+ for metric_name, score in metric.score_metadata.items():
+ print(f" - {metric_name} (score: {score})")
print("\nFor test case:\n")
print(f" - input: {test_result.input}")
diff --git a/deepeval/event.py b/deepeval/event.py
new file mode 100644
index 000000000..709f03c09
--- /dev/null
+++ b/deepeval/event.py
@@ -0,0 +1,57 @@
+from typing import Optional, List, Dict
+from deepeval.api import Api, Endpoints
+from pydantic import BaseModel, Field
+
+
+class APIEvent(BaseModel):
+ name: str = Field(..., alias="name")
+ model: str
+ input: str
+ output: str
+ retrieval_context: Optional[List[str]] = Field(
+ None, alias="retrievalContext"
+ )
+ completion_time: Optional[float] = Field(None, alias="completionTime")
+ token_usage: Optional[float] = Field(None, alias="tokenUsage")
+ token_cost: Optional[float] = Field(None, alias="tokenCost")
+ distinct_id: Optional[str] = Field(None, alias="distinctId")
+ conversation_id: Optional[str] = Field(None, alias="conversationId")
+ additional_data: Optional[Dict] = Field(None, alias="additionalData")
+
+
+def track(
+ event_name: str,
+ model: str,
+ input: str,
+ output: str,
+ retrieval_context: Optional[List[str]] = None,
+ completion_time: Optional[float] = None,
+ token_usage: Optional[float] = None,
+ token_cost: Optional[float] = None,
+ distinct_id: Optional[str] = None,
+ conversation_id: Optional[str] = None,
+ additional_data: Optional[Dict] = None,
+ fail_silently: Optional[bool] = True,
+):
+ event = APIEvent(
+ name=event_name,
+ model=model,
+ input=input,
+ output=output,
+ retrievalContext=retrieval_context,
+ completionTime=completion_time,
+ tokenUsage=token_usage,
+ tokenCost=token_cost,
+ distinctId=distinct_id,
+ conversationId=conversation_id,
+ additionalData=additional_data,
+ )
+ api = Api()
+ try:
+ _ = api.post_request(
+ endpoint=Endpoints.EVENT_ENDPOINT.value,
+ body=event.dict(by_alias=True, exclude_none=True),
+ )
+ except Exception as e:
+ if not fail_silently:
+            raise e
diff --git a/deepeval/metrics/answer_relevancy.py b/deepeval/metrics/answer_relevancy.py
index 43bc2c9a6..5176d759a 100644
--- a/deepeval/metrics/answer_relevancy.py
+++ b/deepeval/metrics/answer_relevancy.py
@@ -1,47 +1,14 @@
from deepeval.singleton import Singleton
from deepeval.test_case import LLMTestCase
from deepeval.metrics import BaseMetric
-import numpy as np
-
-
-def softmax(x):
- e_x = np.exp(x - np.max(x))
- return e_x / e_x.sum(axis=0)
-
-
-class AnswerRelevancyModel(metaclass=Singleton):
- def __init__(self):
- from sentence_transformers import SentenceTransformer
-
- # Load the model
- self.model = SentenceTransformer(
- "sentence-transformers/multi-qa-MiniLM-L6-cos-v1"
- )
-
- def encode(self, text):
- return self.model.encode(text)
-
-
-class CrossEncoderAnswerRelevancyModel(metaclass=Singleton):
- def __init__(self, model_name: str = "cross-encoder/nli-deberta-v3-base"):
- from sentence_transformers.cross_encoder import CrossEncoder
-
- self.model = CrossEncoder(model_name)
-
- def encode(self, question: str, answer: str):
- scores = self.model.predict([[question, answer]])
- return softmax(scores[0])[2]
+from deepeval.scorer import Scorer
class AnswerRelevancyMetric(BaseMetric, metaclass=Singleton):
def __init__(
self, minimum_score: float = 0.5, model_type: str = "cross_encoder"
):
- self.minimum_score = minimum_score
- if model_type == "cross_encoder":
- self.model = CrossEncoderAnswerRelevancyModel()
- else:
- self.model = AnswerRelevancyModel()
+ self.minimum_score, self.model_type = minimum_score, model_type
def __call__(self, test_case: LLMTestCase):
score = self.measure(test_case.input, test_case.actual_output)
@@ -49,26 +16,15 @@ def __call__(self, test_case: LLMTestCase):
return score
def measure(self, test_case: LLMTestCase) -> float:
- from sentence_transformers import util
-
- if test_case.input is None or test_case.actual_output is None:
- raise ValueError("query and output cannot be None")
-
- if isinstance(self.model, CrossEncoderAnswerRelevancyModel):
- score = self.model.encode(test_case.input, test_case.actual_output)
- else:
- docs = [test_case.actual_output]
- # Encode query and documents
- query_emb = self.model.encode(test_case.input)
- doc_emb = self.model.encode(docs)
- # Compute dot score between query and all document embeddings
- scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()
- score = scores[0]
+ answer_relevancy_score = Scorer.answer_relevancy_score(
+ predictions=test_case.input,
+ target=test_case.actual_output,
+ model_type=self.model_type,
+ )
- self.success = score > self.minimum_score
- # Log answer relevancy
- self.score = score
- return score
+ self.success = answer_relevancy_score > self.minimum_score
+ self.score = answer_relevancy_score
+ return answer_relevancy_score
def is_successful(self) -> bool:
return self.success
diff --git a/deepeval/metrics/base_metric.py b/deepeval/metrics/base_metric.py
index 8299e2d35..ce2cdb866 100644
--- a/deepeval/metrics/base_metric.py
+++ b/deepeval/metrics/base_metric.py
@@ -1,12 +1,13 @@
from abc import abstractmethod
from deepeval.test_case import LLMTestCase
-from typing import Optional
+from typing import Optional, Dict
class BaseMetric:
# set an arbitrary minimum score that will get over-ridden later
score: float = 0
+ score_metadata: Dict = None
reason: Optional[str] = None
@property
diff --git a/deepeval/metrics/factual_consistency.py b/deepeval/metrics/factual_consistency.py
index a4a6ef6e5..854c4826b 100644
--- a/deepeval/metrics/factual_consistency.py
+++ b/deepeval/metrics/factual_consistency.py
@@ -1,26 +1,8 @@
-import os
from deepeval.singleton import Singleton
from deepeval.test_case import LLMTestCase
-from deepeval.utils import chunk_text, softmax
from deepeval.metrics.base_metric import BaseMetric
-from deepeval.progress_context import progress_context
-from sentence_transformers import CrossEncoder
-
-
-class FactualConsistencyModel(metaclass=Singleton):
- def __init__(self, model_name: str = "cross-encoder/nli-deberta-v3-large"):
- # We use a smple cross encoder model
- os.environ["TOKENIZERS_PARALLELISM"] = "false"
- self.model = CrossEncoder(model_name)
-
- def predict(self, text_a: str, text_b: str):
- scores = self.model.predict([(text_a, text_b), (text_b, text_a)])
- # https://huggingface.co/cross-encoder/nli-deberta-base
- # label_mapping = ["contradiction", "entailment", "neutral"]
- softmax_scores = softmax(scores)
- score = softmax_scores[0][1]
- second_score = softmax_scores[1][1]
- return max(score, second_score)
+from deepeval.utils import chunk_text
+from deepeval.scorer import Scorer
class FactualConsistencyMetric(BaseMetric, metaclass=Singleton):
@@ -29,12 +11,7 @@ def __init__(
minimum_score: float = 0.6,
model_name: str = "cross-encoder/nli-deberta-v3-large",
):
- # For Crossencoder model, move to singleton to avoid re-instantiating
-
- with progress_context(
- "Downloading FactualConsistencyModel (may take up to 2 minutes if running for the first time)..."
- ):
- self.model = FactualConsistencyModel(model_name)
+ self.model_name = model_name
self.minimum_score = minimum_score
def measure(self, test_case: LLMTestCase):
@@ -50,15 +27,14 @@ def measure(self, test_case: LLMTestCase):
else:
raise ValueError("Context must be a string or a list of strings")
- max_score = 0
- for c in context_list:
- score = self.model.predict(c, test_case.actual_output)
- if score > max_score:
- max_score = score
-
- self.success = max_score > self.minimum_score
- self.score = max_score
- return max_score
+ score = Scorer.factual_consistency_score(
+ contexts=context_list,
+ prediction=test_case.actual_output,
+ model=self.model_name,
+ )
+ self.score = score
+ self.success = score > self.minimum_score
+ return score
def is_successful(self) -> bool:
return self.success
diff --git a/deepeval/metrics/non_toxic_metric.py b/deepeval/metrics/non_toxic_metric.py
index 48a8eab61..96318bef2 100644
--- a/deepeval/metrics/non_toxic_metric.py
+++ b/deepeval/metrics/non_toxic_metric.py
@@ -3,24 +3,9 @@
0 - Toxic
"""
from typing import List
-
-from deepeval.singleton import Singleton
from deepeval.test_case import LLMTestCase, LLMTestCaseParams
from deepeval.metrics.base_metric import BaseMetric
-
-
-class DetoxifyModel(metaclass=Singleton):
- def __init__(self, model_name: str = "original"):
- self.model_name = model_name
-
- try:
- from detoxify import Detoxify
- except ImportError as e:
- print(e)
- self.model = Detoxify(model_name)
-
- def predict(self, text: str):
- return self.model.predict(text)
+from deepeval.scorer import Scorer
class NonToxicMetric(BaseMetric):
@@ -34,8 +19,7 @@ def __init__(
raise ValueError("evaluation_params cannot be empty or None")
self.evaluation_params = evaluation_params
- self.detoxify_model = DetoxifyModel(model_name)
- self.minimum_score = minimum_score
+ self.minimum_score, self.model_name = minimum_score, model_name
def __call__(self, test_case: LLMTestCase):
score = self.measure(test_case.actual_output)
@@ -57,7 +41,9 @@ def measure(self, test_case: LLMTestCase):
for param in self.evaluation_params:
text_to_evaluate = getattr(test_case, param.value)
- results = self.detoxify_model.predict(text_to_evaluate)
+ _, results = Scorer.neural_toxic_score(
+ prediction=text_to_evaluate, model=self.model_name
+ )
# sample output
# {'toxicity': 0.98057544,
# 'severe_toxicity': 0.106649496,
diff --git a/deepeval/metrics/ragas_metric.py b/deepeval/metrics/ragas_metric.py
index 50f7b84eb..c7b7f852a 100644
--- a/deepeval/metrics/ragas_metric.py
+++ b/deepeval/metrics/ragas_metric.py
@@ -2,7 +2,7 @@
"""
from deepeval.metrics import BaseMetric
from deepeval.test_case import LLMTestCase
-from typing import List
+import warnings
class ContextualPrecisionMetric(BaseMetric):
@@ -495,7 +495,7 @@ def measure(self, test_case: LLMTestCase):
# Create a dataset from the test case
# Convert the LLMTestCase to a format compatible with Dataset
- scores = []
+ score_metadata = {}
metrics = [
ContextualPrecisionMetric(),
ContextualRelevancyMetric(),
@@ -503,20 +503,30 @@ def measure(self, test_case: LLMTestCase):
FaithfulnessMetric(),
AnswerRelevancyMetric(),
]
+
+ warnings_list = []
+
for metric in metrics:
score = metric.measure(test_case)
- scores.append(score)
+ score_metadata[metric.__name__] = score
+ if score == 0:
+ warnings_list.append(
+ f"The RAGAS score will be 0 since {metric.__name__} has a score of 0"
+ )
- # ragas score is harmonic mean of all the scores
- if len(scores) > 0:
- ragas_score = len(scores) / sum(
- 1.0 / score for score in scores if score != 0
- )
- else:
+ for warning in warnings_list:
+ print(warning)
+
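+    # The RAGAS score is the harmonic mean of the component scores; any zero component makes it 0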
+ if any(score == 0 for score in score_metadata.values()):
ragas_score = 0
+ else:
+ ragas_score = len(score_metadata) / sum(
+ 1.0 / score for score in score_metadata.values()
+ )
self.success = ragas_score >= self.minimum_score
self.score = ragas_score
+ self.score_metadata = score_metadata
return self.score
def is_successful(self):
diff --git a/deepeval/metrics/unbias_metric.py b/deepeval/metrics/unbias_metric.py
index 2945a05e8..945d07c67 100644
--- a/deepeval/metrics/unbias_metric.py
+++ b/deepeval/metrics/unbias_metric.py
@@ -4,11 +4,10 @@
0 - Bias
"""
-import warnings
from typing import Optional, List
-
from deepeval.metrics import BaseMetric
from deepeval.test_case import LLMTestCase, LLMTestCaseParams
+from deepeval.scorer import Scorer
class UnBiasedMetric(BaseMetric):
@@ -41,19 +40,15 @@ def measure(self, test_case: LLMTestCase, return_all_scores: bool = False):
f"Test case is missing the required attribute: {param.value}"
)
- from Dbias.bias_classification import classifier
-
- warnings.warn(
- "Run `pip install deepeval[bias]`. If you have, please ignore this warning."
- )
-
total_score = 0 # to accumulate scores for all evaluation params
all_results = (
[]
) # to accumulate all individual results if return_all_scores is True
for param in self.evaluation_params:
- result = classifier(getattr(test_case, param.value))
+ result = Scorer.neural_bias_score(
+ getattr(test_case, param.value), model=self.model_name
+ )
if return_all_scores:
all_results.append(result)
diff --git a/deepeval/models/__init__.py b/deepeval/models/__init__.py
index e69de29bb..c29c53c14 100644
--- a/deepeval/models/__init__.py
+++ b/deepeval/models/__init__.py
@@ -0,0 +1,9 @@
+from deepeval.models.base import DeepEvalBaseModel
+from deepeval.models.answer_relevancy_model import (
+ AnswerRelevancyModel,
+ CrossEncoderAnswerRelevancyModel,
+)
+from deepeval.models.summac_model import SummaCModels
+from deepeval.models.factual_consistency_model import FactualConsistencyModel
+from deepeval.models.detoxify_model import DetoxifyModel
+from deepeval.models.unbias_model import UnBiasedModel
diff --git a/deepeval/models/_summac_model.py b/deepeval/models/_summac_model.py
new file mode 100644
index 000000000..6541f103f
--- /dev/null
+++ b/deepeval/models/_summac_model.py
@@ -0,0 +1,574 @@
+# mypy: check_untyped_defs = False
+###############################################
+# Source: https://github.com/tingofurro/summac
+###############################################
+
+from transformers import AutoTokenizer, AutoModelForSequenceClassification
+import nltk
+import numpy as np
+import torch
+import os
+import json
+from deepeval import utils as utils_misc
+
+
+model_map = {
+ "snli-base": {
+ "model_card": "boychaboy/SNLI_roberta-base",
+ "entailment_idx": 0,
+ "contradiction_idx": 2,
+ },
+ "snli-large": {
+ "model_card": "boychaboy/SNLI_roberta-large",
+ "entailment_idx": 0,
+ "contradiction_idx": 2,
+ },
+ "mnli-base": {
+ "model_card": "microsoft/deberta-base-mnli",
+ "entailment_idx": 2,
+ "contradiction_idx": 0,
+ },
+ "mnli": {
+ "model_card": "roberta-large-mnli",
+ "entailment_idx": 2,
+ "contradiction_idx": 0,
+ },
+ "anli": {
+ "model_card": "ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli",
+ "entailment_idx": 0,
+ "contradiction_idx": 2,
+ },
+ "vitc-base": {
+ "model_card": "tals/albert-base-vitaminc-mnli",
+ "entailment_idx": 0,
+ "contradiction_idx": 1,
+ },
+ "vitc": {
+ "model_card": "tals/albert-xlarge-vitaminc-mnli",
+ "entailment_idx": 0,
+ "contradiction_idx": 1,
+ },
+ "vitc-only": {
+ "model_card": "tals/albert-xlarge-vitaminc",
+ "entailment_idx": 0,
+ "contradiction_idx": 1,
+ },
+}
+
+
+def card_to_name(card):
+ card2name = {v["model_card"]: k for k, v in model_map.items()}
+ if card in card2name:
+ return card2name[card]
+ return card
+
+
+def name_to_card(name):
+ if name in model_map:
+ return model_map[name]["model_card"]
+ return name
+
+
+def get_neutral_idx(ent_idx, con_idx):
+ return list(set([0, 1, 2]) - set([ent_idx, con_idx]))[0]
+
+
+class _SummaCImager:
+ def __init__(
+ self,
+ model_name="mnli",
+ granularity="paragraph",
+ use_cache=True,
+ max_doc_sents=100,
+ device="cuda",
+ **kwargs
+ ):
+ self.grans = granularity.split("-")
+
+ assert (
+ all(
+ gran in ["paragraph", "sentence", "document", "2sents", "mixed"]
+ for gran in self.grans
+ )
+ and len(self.grans) <= 2
+ ), "Unrecognized `granularity` %s" % (granularity)
+ assert (
+ model_name in model_map.keys()
+ ), "Unrecognized model name: `%s`" % (model_name)
+
+ self.model_name = model_name
+ if model_name != "decomp":
+ self.model_card = name_to_card(model_name)
+ self.entailment_idx = model_map[model_name]["entailment_idx"]
+ self.contradiction_idx = model_map[model_name]["contradiction_idx"]
+ self.neutral_idx = get_neutral_idx(
+ self.entailment_idx, self.contradiction_idx
+ )
+
+ self.granularity = granularity
+ self.use_cache = use_cache
+ self.cache_folder = "/export/share/plaban/summac_cache/"
+
+ self.max_doc_sents = max_doc_sents
+ self.max_input_length = 500
+ self.device = device
+ self.cache = {}
+ self.model = None # Lazy loader
+
+ def load_nli(self):
+ if self.model_name == "decomp":
+ try:
+ from allennlp.predictors.predictor import Predictor
+ except ModuleNotFoundError:
+ print(
+ "allennlp library is not installed. "
+ "Please install the library by following the instruction from their documentation:"
+ "https://docs.allennlp.org/main/"
+ )
+ self.model = Predictor.from_path(
+ "https://storage.googleapis.com/allennlp-public-models/decomposable-attention-elmo-2020.04.09.tar.gz",
+ cuda_device=0,
+ )
+
+ else:
+ self.tokenizer = AutoTokenizer.from_pretrained(self.model_card)
+ self.model = AutoModelForSequenceClassification.from_pretrained(
+ self.model_card
+ ).eval()
+ self.model.to(self.device)
+
+ def split_sentences(self, text):
+ sentences = nltk.tokenize.sent_tokenize(text)
+ sentences = [sent for sent in sentences if len(sent) > 10]
+ return sentences
+
+ def split_2sents(self, text):
+ sentences = nltk.tokenize.sent_tokenize(text)
+ sentences = [sent for sent in sentences if len(sent) > 10]
+ two_sents = [
+ " ".join(sentences[i : (i + 2)]) for i in range(len(sentences))
+ ]
+ return two_sents
+
+ def split_paragraphs(self, text):
+ if text.count("\n\n") > 0:
+ paragraphs = [p.strip() for p in text.split("\n\n")]
+ else:
+ paragraphs = [p.strip() for p in text.split("\n")]
+ return [p for p in paragraphs if len(p) > 10]
+
+ def split_text(self, text, granularity="sentence"):
+ if granularity == "document":
+ return [text]
+ elif granularity == "paragraph":
+ return self.split_paragraphs(text)
+ elif granularity == "sentence":
+ return self.split_sentences(text)
+ elif granularity == "2sents":
+ return self.split_2sents(text)
+ elif granularity == "mixed":
+ return self.split_sentences(text) + self.split_paragraphs(text)
+
+ def build_image(self, original, generated):
+ cache_key = (original, generated)
+ if self.use_cache and cache_key in self.cache:
+ cached_image = self.cache[cache_key]
+ cached_image = cached_image[:, : self.max_doc_sents, :]
+ return cached_image
+
+ if len(self.grans) == 1:
+ gran_doc, gran_sum = self.grans[0], self.grans[0]
+ else:
+ gran_doc, gran_sum = self.grans[0], self.grans[1]
+
+ original_chunks = self.split_text(original, granularity=gran_doc)[
+ : self.max_doc_sents
+ ]
+ generated_chunks = self.split_text(generated, granularity=gran_sum)
+
+ N_ori = len(original_chunks)
+ N_gen = len(generated_chunks)
+
+ if N_ori == 0 or N_gen == 0:
+ return np.zeros((3, 1, 1))
+ # assert (N_ori > 0 and N_gen > 0), "One of the inputs has no chunks"
+
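+        # image[0] / image[1] / image[2] hold entailment / contradiction / neutral probabilities for each (doc chunk, summary chunk) pair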
+ image = np.zeros((3, N_ori, N_gen))
+
+ if self.model is None:
+ self.load_nli()
+
+ dataset = [
+ {
+ "premise": original_chunks[i],
+ "hypothesis": generated_chunks[j],
+ "doc_i": i,
+ "gen_i": j,
+ }
+ for i in range(N_ori)
+ for j in range(N_gen)
+ ]
+ for batch in utils_misc.batcher(dataset, batch_size=20):
+ if self.model_name == "decomp":
+ batch_evids, batch_conts, batch_neuts = [], [], []
+ batch_json = [
+ {"premise": d["premise"], "hypothesis": d["hypothesis"]}
+ for d in batch
+ ]
+ model_outs = self.model.predict_batch_json(batch_json)
+ for out in model_outs:
+ probs = out["label_probs"]
+ batch_evids.append(probs[0])
+ batch_conts.append(probs[1])
+ batch_neuts.append(probs[2])
+
+ else:
+ batch_prems = [b["premise"] for b in batch]
+ batch_hypos = [b["hypothesis"] for b in batch]
+ batch_tokens = self.tokenizer.batch_encode_plus(
+ list(zip(batch_prems, batch_hypos)),
+ padding=True,
+ truncation=True,
+ max_length=self.max_input_length,
+ return_tensors="pt",
+ truncation_strategy="only_first",
+ )
+ batch_tokens = {
+ k: v.to(self.device) for k, v in batch_tokens.items()
+ }
+ with torch.no_grad():
+ model_outputs = self.model(**batch_tokens)
+
+ batch_probs = torch.nn.functional.softmax(
+ model_outputs["logits"], dim=-1
+ )
+ batch_evids = batch_probs[:, self.entailment_idx].tolist()
+ batch_conts = batch_probs[:, self.contradiction_idx].tolist()
+ batch_neuts = batch_probs[:, self.neutral_idx].tolist()
+
+ for b, evid, cont, neut in zip(
+ batch, batch_evids, batch_conts, batch_neuts
+ ):
+ image[0, b["doc_i"], b["gen_i"]] = evid
+ image[1, b["doc_i"], b["gen_i"]] = cont
+ image[2, b["doc_i"], b["gen_i"]] = neut
+
+ if self.use_cache:
+ self.cache[cache_key] = image
+ return image
+
+ def get_cache_file(self):
+ return os.path.join(
+ self.cache_folder,
+ "cache_%s_%s.json" % (self.model_name, self.granularity),
+ )
+
+ def save_cache(self):
+ cache_cp = {"[///]".join(k): v.tolist() for k, v in self.cache.items()}
+ with open(self.get_cache_file(), "w") as f:
+ json.dump(cache_cp, f)
+
+ def load_cache(self):
+ cache_file = self.get_cache_file()
+ if os.path.isfile(cache_file):
+ with open(cache_file, "r") as f:
+ cache_cp = json.load(f)
+ self.cache = {
+ tuple(k.split("[///]")): np.array(v)
+ for k, v in cache_cp.items()
+ }
+
+
+class _SummaCConv(torch.nn.Module):
+ def __init__(
+ self,
+ models=["mnli", "anli", "vitc"],
+ bins="even50",
+ granularity="sentence",
+ nli_labels="e",
+ device="cuda",
+ start_file=None,
+ imager_load_cache=True,
+ agg="mean",
+ norm_histo=False,
+ **kwargs
+ ):
+        # `bins` should be `even%d` or `percentile`
+ assert nli_labels in [
+ "e",
+ "c",
+ "n",
+ "ec",
+ "en",
+ "cn",
+ "ecn",
+ ], "Unrecognized nli_labels argument %s" % (nli_labels)
+
+        super().__init__()
+ self.device = device
+ self.models = models
+
+ self.imagers = []
+ for model_name in models:
+ self.imagers.append(
+                _SummaCImager(
+ model_name=model_name, granularity=granularity, **kwargs
+ )
+ )
+ if imager_load_cache:
+ for imager in self.imagers:
+ imager.load_cache()
+ assert len(self.imagers) > 0, "Imager names were empty or unrecognized"
+
+ if "even" in bins:
+ n_bins = int(bins.replace("even", ""))
+ self.bins = list(np.arange(0, 1, 1 / n_bins)) + [1.0]
+ elif bins == "percentile":
+ self.bins = [
+ 0.0,
+ 0.01,
+ 0.02,
+ 0.03,
+ 0.04,
+ 0.07,
+ 0.13,
+ 0.37,
+ 0.90,
+ 0.91,
+ 0.92,
+ 0.93,
+ 0.94,
+ 0.95,
+ 0.955,
+ 0.96,
+ 0.965,
+ 0.97,
+ 0.975,
+ 0.98,
+ 0.985,
+ 0.99,
+ 0.995,
+ 1.0,
+ ]
+
+ self.nli_labels = nli_labels
+ self.n_bins = len(self.bins) - 1
+ self.norm_histo = norm_histo
+ self.n_rows = 10
+ self.n_labels = 2
+ self.n_depth = len(self.imagers) * len(self.nli_labels)
+ self.full_size = self.n_depth * self.n_bins
+ if self.norm_histo:
+ self.full_size += (
+                2 # Will explicitly give the count of originals and generateds
+ )
+
+ self.agg = agg
+
+ self.mlp = torch.nn.Linear(self.full_size, 1).to(device)
+ self.layer_final = torch.nn.Linear(3, self.n_labels).to(device)
+
+ if start_file is not None:
+ print(self.load_state_dict(torch.load(start_file)))
+
+ def build_image(self, original, generated):
+ images = [
+ imager.build_image(original, generated) for imager in self.imagers
+ ]
+ image = np.concatenate(images, axis=0)
+ return image
+
+ def compute_histogram(self, original=None, generated=None, image=None):
+ # Takes the two texts, and generates a (n_rows, 2*n_bins)
+
+ if image is None:
+ image = self.build_image(original, generated)
+
+ N_depth, N_ori, N_gen = image.shape
+
+ full_histogram = []
+ for i_gen in range(N_gen):
+ histos = []
+
+ for i_depth in range(N_depth):
+ if (
+ (i_depth % 3 == 0 and "e" in self.nli_labels)
+ or (i_depth % 3 == 1 and "c" in self.nli_labels)
+ or (i_depth % 3 == 2 and "n" in self.nli_labels)
+ ):
+ histo, X = np.histogram(
+ image[i_depth, :, i_gen],
+ range=(0, 1),
+ bins=self.bins,
+ density=self.norm_histo,
+ )
+ histos.append(histo)
+
+ if self.norm_histo:
+ histos = [[N_ori, N_gen]] + histos
+ histogram_row = np.concatenate(histos)
+ full_histogram.append(histogram_row)
+
+ n_rows_missing = self.n_rows - len(full_histogram)
+ full_histogram += [[0.0] * self.full_size] * n_rows_missing
+ full_histogram = full_histogram[: self.n_rows]
+ full_histogram = np.array(full_histogram)
+ return image, full_histogram
+
+ def forward(self, originals, generateds, images=None):
+ if images is not None:
+ # In case they've been pre-computed.
+ histograms = []
+ for image in images:
+ _, histogram = self.compute_histogram(image=image)
+ histograms.append(histogram)
+ else:
+ images, histograms = [], []
+ for original, generated in zip(originals, generateds):
+ image, histogram = self.compute_histogram(
+ original=original, generated=generated
+ )
+ images.append(image)
+ histograms.append(histogram)
+
+ N = len(histograms)
+ histograms = torch.FloatTensor(histograms).to(self.device)
+
+ non_zeros = (torch.sum(histograms, dim=-1) != 0.0).long()
+ seq_lengths = non_zeros.sum(dim=-1).tolist()
+
+ mlp_outs = self.mlp(histograms).reshape(N, self.n_rows)
+ features = []
+
+ for mlp_out, seq_length in zip(mlp_outs, seq_lengths):
+ if seq_length > 0:
+ Rs = mlp_out[:seq_length]
+ if self.agg == "mean":
+ features.append(
+ torch.cat(
+ [
+ torch.mean(Rs).unsqueeze(0),
+ torch.mean(Rs).unsqueeze(0),
+ torch.mean(Rs).unsqueeze(0),
+ ]
+ ).unsqueeze(0)
+ )
+ elif self.agg == "min":
+ features.append(
+ torch.cat(
+ [
+ torch.min(Rs).unsqueeze(0),
+ torch.min(Rs).unsqueeze(0),
+ torch.min(Rs).unsqueeze(0),
+ ]
+ ).unsqueeze(0)
+ )
+ elif self.agg == "max":
+ features.append(
+ torch.cat(
+ [
+ torch.max(Rs).unsqueeze(0),
+ torch.max(Rs).unsqueeze(0),
+ torch.max(Rs).unsqueeze(0),
+ ]
+ ).unsqueeze(0)
+ )
+ elif self.agg == "all":
+ features.append(
+ torch.cat(
+ [
+ torch.min(Rs).unsqueeze(0),
+ torch.mean(Rs).unsqueeze(0),
+ torch.max(Rs).unsqueeze(0),
+ ]
+ ).unsqueeze(0)
+ )
+ else:
+ features.append(
+ torch.FloatTensor([0.0, 0.0, 0.0]).unsqueeze(0)
+ ) # .cuda()
+ features = torch.cat(features)
+ logits = self.layer_final(features)
+ histograms_out = [histogram.cpu().numpy() for histogram in histograms]
+ return logits, histograms_out, images
+
+ def save_imager_cache(self):
+ for imager in self.imagers:
+ imager.save_cache()
+
+ def score(self, originals, generateds, **kwargs):
+ with torch.no_grad():
+ logits, histograms, images = self.forward(originals, generateds)
+ probs = torch.nn.functional.softmax(logits, dim=-1)
+ batch_scores = probs[:, 1].tolist()
+ return {
+ "scores": batch_scores
+ } # , "histograms": histograms, "images": images
+
+
+class _SummaCZS:
+ def __init__(
+ self,
+ model_name="mnli",
+ granularity="paragraph",
+ op1="max",
+ op2="mean",
+ use_ent=True,
+ use_con=True,
+ imager_load_cache=True,
+ device="cuda",
+ **kwargs
+ ):
+ assert op2 in ["min", "mean", "max"], "Unrecognized `op2`"
+ assert op1 in ["max", "mean", "min"], "Unrecognized `op1`"
+
+ self.imager = _SummaCImager(
+ model_name=model_name,
+ granularity=granularity,
+ device=device,
+ **kwargs
+ )
+ if imager_load_cache:
+ self.imager.load_cache()
+ self.op2 = op2
+ self.op1 = op1
+ self.use_ent = use_ent
+ self.use_con = use_con
+
+ def save_imager_cache(self):
+ self.imager.save_cache()
+
+ def score_one(self, original, generated):
+ image = self.imager.build_image(original, generated)
+
+ ent_scores = np.max(image[0], axis=0)
+ co_scores = np.max(image[1], axis=0)
+ if self.op1 == "mean":
+ ent_scores = np.mean(image[0], axis=0)
+ co_scores = np.mean(image[1], axis=0)
+ elif self.op1 == "min":
+ ent_scores = np.min(image[0], axis=0)
+ co_scores = np.min(image[1], axis=0)
+
+ if self.use_ent and self.use_con:
+ scores = ent_scores - co_scores
+ elif self.use_ent:
+ scores = ent_scores
+ elif self.use_con:
+ scores = 1.0 - co_scores
+
+ final_score = np.mean(scores)
+ if self.op2 == "min":
+ final_score = np.min(scores)
+ elif self.op2 == "max":
+ final_score = np.max(scores)
+
+ return {"score": final_score, "image": image}
+
+ def score(self, sources, generateds, **kwargs):
+ output = {"scores": [], "images": []}
+ for source, gen in zip(sources, generateds):
+ score = self.score_one(source, gen)
+ output["scores"].append(score["score"])
+ output["images"].append(score["image"])
+ return output
diff --git a/deepeval/models/answer_relevancy_model.py b/deepeval/models/answer_relevancy_model.py
new file mode 100644
index 000000000..88ee30391
--- /dev/null
+++ b/deepeval/models/answer_relevancy_model.py
@@ -0,0 +1,74 @@
+import numpy as np
+from typing import Optional
+from deepeval.models.base import DeepEvalBaseModel
+
+
+def softmax(x):
+ e_x = np.exp(x - np.max(x))
+ return e_x / e_x.sum(axis=0)
+
+
+class AnswerRelevancyModel(DeepEvalBaseModel):
+ def __init__(self, model_name: Optional[str] = None):
+ model_name = (
+ "sentence-transformers/multi-qa-MiniLM-L6-cos-v1"
+ if model_name is None
+ else model_name
+ )
+ super().__init__(model_name=model_name)
+
+ def load_model(self):
+ """Loads a model, that will be responsible for scoring.
+
+ Returns:
+ A model object
+ """
+ from sentence_transformers import SentenceTransformer
+
+ return SentenceTransformer(self.model_name)
+
+ def _call(self, text: str):
+ """Runs the model to score the predictions.
+
+ Args:
+ text (str): Text, which can be output from a LLM or a simple input text.
+
+ Returns:
+ Answer relevancy score.
+ """
+ if not hasattr(self, "model") or self.model is None:
+ self.model = self.load_model()
+ return self.model.encode(text)
+
+
+class CrossEncoderAnswerRelevancyModel(DeepEvalBaseModel):
+    def __init__(self, model_name: Optional[str] = None):
+ model_name = (
+ "cross-encoder/nli-deberta-v3-base"
+ if model_name is None
+ else model_name
+ )
+ super().__init__(model_name)
+
+ def load_model(self):
+ """Loads a model, that will be responsible for scoring.
+
+ Returns:
+ A model object
+ """
+ from sentence_transformers.cross_encoder import CrossEncoder
+
+ return CrossEncoder(model_name=self.model_name)
+
+ def _call(self, question: str, answer: str):
+ """Runs the model to score the predictions.
+
+ Args:
+ question (str): The input text.
+ answer (str): This can be the output from an LLM or the answer from a question-answer pair.
+
+ Returns:
+ Cross Answer relevancy score of the question and the answer.
+ """
+ scores = self.model.predict([[question, answer]])
+ return softmax(scores[0])[2]
diff --git a/deepeval/models/base.py b/deepeval/models/base.py
new file mode 100644
index 000000000..4d0c8e20b
--- /dev/null
+++ b/deepeval/models/base.py
@@ -0,0 +1,29 @@
+from abc import ABC, abstractmethod
+from typing import Any, Optional
+
+
+class DeepEvalBaseModel(ABC):
+ def __init__(self, model_name: Optional[str] = None, *args, **kwargs):
+ self.model_name = model_name
+ self.model = self.load_model(*args, **kwargs)
+
+ @abstractmethod
+ def load_model(self, *args, **kwargs):
+ """Loads a model, that will be responsible for scoring.
+
+ Returns:
+ A model object
+ """
+ pass
+
+ def __call__(self, *args: Any, **kwargs: Any) -> Any:
+ return self._call(*args, **kwargs)
+
+ @abstractmethod
+ def _call(self, *args, **kwargs):
+ """Runs the model to score / ourput the model predictions.
+
+ Returns:
+ A score or a list of results.
+ """
+ pass
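+
+
+# A minimal illustrative subclass (not part of the library):
+#
+#     class MyModel(DeepEvalBaseModel):
+#         def load_model(self):
+#             return some_pretrained_model  # placeholder for any scoring object
+#
+#         def _call(self, text: str):
+#             return self.model.predict(text)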
diff --git a/deepeval/models/detoxify_model.py b/deepeval/models/detoxify_model.py
new file mode 100644
index 000000000..00f72ea8d
--- /dev/null
+++ b/deepeval/models/detoxify_model.py
@@ -0,0 +1,26 @@
+import torch
+from typing import Optional
+from deepeval.models.base import DeepEvalBaseModel
+from detoxify import Detoxify
+
+
+class DetoxifyModel(DeepEvalBaseModel):
+    def __init__(self, model_name: Optional[str] = None, *args, **kwargs):
+ if model_name is not None:
+ assert model_name in [
+ "original",
+ "unbiased",
+ "multilingual",
+ ], "Invalid model. Available variants: original, unbiased, multilingual"
+ model_name = "original" if model_name is None else model_name
+ super().__init__(model_name, *args, **kwargs)
+
+ def load_model(self):
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+ return Detoxify(self.model_name, device=device)
+
+ def _call(self, text: str):
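+        # Returns (mean toxicity across all labels, per-label score dict)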
+ toxicity_score_dict = self.model.predict(text)
+ mean_toxicity_score = sum(list(toxicity_score_dict.values())) / len(
+ toxicity_score_dict
+ )
+ return mean_toxicity_score, toxicity_score_dict
diff --git a/deepeval/models/factual_consistency_model.py b/deepeval/models/factual_consistency_model.py
new file mode 100644
index 000000000..ca5e40e0c
--- /dev/null
+++ b/deepeval/models/factual_consistency_model.py
@@ -0,0 +1,27 @@
+import os
+from typing import Optional
+from deepeval.models.base import DeepEvalBaseModel
+from sentence_transformers import CrossEncoder
+from deepeval.utils import softmax
+
+
+class FactualConsistencyModel(DeepEvalBaseModel):
+    def __init__(self, model_name: Optional[str] = None, *args, **kwargs):
+ model_name = (
+ "cross-encoder/nli-deberta-v3-large"
+ if model_name is None
+ else model_name
+ )
+ os.environ["TOKENIZERS_PARALLELISM"] = "false"
+ super().__init__(model_name, *args, **kwargs)
+
+ def load_model(self):
+ return CrossEncoder(self.model_name)
+
+ def _call(self, text_a: str, text_b: str):
+ scores = self.model.predict([(text_a, text_b), (text_b, text_a)])
+ # https://huggingface.co/cross-encoder/nli-deberta-base
+ # label_mapping = ["contradiction", "entailment", "neutral"]
+ softmax_scores = softmax(scores)
+ score = softmax_scores[0][1]
+ second_score = softmax_scores[1][1]
+ return max(score, second_score)
diff --git a/deepeval/models/hallucination_model.py b/deepeval/models/hallucination_model.py
index 5e3f48464..65c4681bf 100644
--- a/deepeval/models/hallucination_model.py
+++ b/deepeval/models/hallucination_model.py
@@ -1,18 +1,22 @@
import os
+from typing import Optional
from deepeval.singleton import Singleton
from sentence_transformers import CrossEncoder
from deepeval.progress_context import progress_context
-from deepeval.models.model_map import model_map, name_to_card
class HallucinationModel(metaclass=Singleton):
- def __init__(self, model_name: str = "vectara-hallucination"):
+ def __init__(self, model_name: Optional[str] = None):
# We use a smple cross encoder model
+ model_name = (
+ "vectara/hallucination_evaluation_model"
+ if model_name is None
+ else model_name
+ )
os.environ["TOKENIZERS_PARALLELISM"] = "false"
# TODO: add this progress context in the correct place
with progress_context(
"Downloading HallucinationEvaluationModel (may take up to 2 minutes if running for the first time)..."
):
- model_name = name_to_card(model_name)
self.model = CrossEncoder(model_name)
diff --git a/deepeval/models/model_map.py b/deepeval/models/model_map.py
deleted file mode 100644
index 1b4058530..000000000
--- a/deepeval/models/model_map.py
+++ /dev/null
@@ -1,65 +0,0 @@
-model_map = {
- "snli-base": {
- "model_card": "boychaboy/SNLI_roberta-base",
- "entailment_idx": 0,
- "contradiction_idx": 2,
- },
- "snli-large": {
- "model_card": "boychaboy/SNLI_roberta-large",
- "entailment_idx": 0,
- "contradiction_idx": 2,
- },
- "mnli-base": {
- "model_card": "microsoft/deberta-base-mnli",
- "entailment_idx": 2,
- "contradiction_idx": 0,
- },
- "mnli": {
- "model_card": "roberta-large-mnli",
- "entailment_idx": 2,
- "contradiction_idx": 0,
- },
- "anli": {
- "model_card": "ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli",
- "entailment_idx": 0,
- "contradiction_idx": 2,
- },
- "vitc-base": {
- "model_card": "tals/albert-base-vitaminc-mnli",
- "entailment_idx": 0,
- "contradiction_idx": 1,
- },
- "vitc": {
- "model_card": "tals/albert-xlarge-vitaminc-mnli",
- "entailment_idx": 0,
- "contradiction_idx": 1,
- },
- "vitc-only": {
- "model_card": "tals/albert-xlarge-vitaminc",
- "entailment_idx": 0,
- "contradiction_idx": 1,
- },
- # "decomp": 0,
- "vectara-hallucination": {
- "model_card": "vectara/hallucination_evaluation_model",
- "entailment_idx": None,
- "contradiction_idx": None,
- },
-}
-
-
-def card_to_name(card):
- card2name = {v["model_card"]: k for k, v in model_map.items()}
- if card in card2name:
- return card2name[card]
- return card
-
-
-def name_to_card(name):
- if name in model_map:
- return model_map[name]["model_card"]
- return name
-
-
-def get_neutral_idx(ent_idx, con_idx):
- return list(set([0, 1, 2]) - set([ent_idx, con_idx]))[0]
diff --git a/deepeval/models/summac_model.py b/deepeval/models/summac_model.py
index b5e887873..7df978794 100644
--- a/deepeval/models/summac_model.py
+++ b/deepeval/models/summac_model.py
@@ -1,514 +1,64 @@
-# mypy: check_untyped_defs = False
-###############################################
-# Source: https://github.com/tingofurro/summac
-###############################################
-
-from transformers import AutoTokenizer, AutoModelForSequenceClassification
-import nltk
-import numpy as np
import torch
-import os
-import json
-from deepeval import utils as utils_misc
-from deepeval.models.model_map import name_to_card, get_neutral_idx, model_map
+from typing import List, Optional, Union, get_origin
+from deepeval.models.base import DeepEvalBaseModel
+from deepeval.models._summac_model import _SummaCZS
-class SummaCImager:
+class SummaCModels(DeepEvalBaseModel):
def __init__(
self,
- model_name="mnli",
- granularity="paragraph",
- use_cache=True,
- max_doc_sents=100,
- device="cuda",
+        model_name: Optional[str] = None,
+        granularity: Optional[str] = None,
+        device: Optional[str] = None,
+ *args,
**kwargs
):
- self.grans = granularity.split("-")
-
- assert (
- all(
- gran in ["paragraph", "sentence", "document", "2sents", "mixed"]
- for gran in self.grans
- )
- and len(self.grans) <= 2
- ), "Unrecognized `granularity` %s" % (granularity)
- assert (
- model_name in model_map.keys()
- ), "Unrecognized model name: `%s`" % (model_name)
-
- self.model_name = model_name
- if model_name != "decomp":
- self.model_card = name_to_card(model_name)
- self.entailment_idx = model_map[model_name]["entailment_idx"]
- self.contradiction_idx = model_map[model_name]["contradiction_idx"]
- self.neutral_idx = get_neutral_idx(
- self.entailment_idx, self.contradiction_idx
- )
-
- self.granularity = granularity
- self.use_cache = use_cache
- self.cache_folder = "/export/share/plaban/summac_cache/"
-
- self.max_doc_sents = max_doc_sents
- self.max_input_length = 500
- self.device = device
- self.cache = {}
- self.model = None # Lazy loader
-
- def load_nli(self):
- if self.model_name == "decomp":
- try:
- from allennlp.predictors.predictor import Predictor
- except ModuleNotFoundError:
- print(
- "allennlp library is not installed. "
- "Please install the library by following the instruction from their documentation:"
- "https://docs.allennlp.org/main/"
- )
- self.model = Predictor.from_path(
- "https://storage.googleapis.com/allennlp-public-models/decomposable-attention-elmo-2020.04.09.tar.gz",
- cuda_device=0,
- )
-
- else:
- self.tokenizer = AutoTokenizer.from_pretrained(self.model_card)
- self.model = AutoModelForSequenceClassification.from_pretrained(
- self.model_card
- ).eval()
- self.model.to(self.device)
-
- def split_sentences(self, text):
- sentences = nltk.tokenize.sent_tokenize(text)
- sentences = [sent for sent in sentences if len(sent) > 10]
- return sentences
-
- def split_2sents(self, text):
- sentences = nltk.tokenize.sent_tokenize(text)
- sentences = [sent for sent in sentences if len(sent) > 10]
- two_sents = [
- " ".join(sentences[i : (i + 2)]) for i in range(len(sentences))
- ]
- return two_sents
-
- def split_paragraphs(self, text):
- if text.count("\n\n") > 0:
- paragraphs = [p.strip() for p in text.split("\n\n")]
- else:
- paragraphs = [p.strip() for p in text.split("\n")]
- return [p for p in paragraphs if len(p) > 10]
-
- def split_text(self, text, granularity="sentence"):
- if granularity == "document":
- return [text]
- elif granularity == "paragraph":
- return self.split_paragraphs(text)
- elif granularity == "sentence":
- return self.split_sentences(text)
- elif granularity == "2sents":
- return self.split_2sents(text)
- elif granularity == "mixed":
- return self.split_sentences(text) + self.split_paragraphs(text)
-
- def build_image(self, original, generated):
- cache_key = (original, generated)
- if self.use_cache and cache_key in self.cache:
- cached_image = self.cache[cache_key]
- cached_image = cached_image[:, : self.max_doc_sents, :]
- return cached_image
-
- if len(self.grans) == 1:
- gran_doc, gran_sum = self.grans[0], self.grans[0]
- else:
- gran_doc, gran_sum = self.grans[0], self.grans[1]
-
- original_chunks = self.split_text(original, granularity=gran_doc)[
- : self.max_doc_sents
- ]
- generated_chunks = self.split_text(generated, granularity=gran_sum)
-
- N_ori = len(original_chunks)
- N_gen = len(generated_chunks)
-
- if N_ori == 0 or N_gen == 0:
- return np.zeros((3, 1, 1))
- # assert (N_ori > 0 and N_gen > 0), "One of the inputs has no chunks"
-
- image = np.zeros((3, N_ori, N_gen))
-
- if self.model is None:
- self.load_nli()
-
- dataset = [
- {
- "premise": original_chunks[i],
- "hypothesis": generated_chunks[j],
- "doc_i": i,
- "gen_i": j,
- }
- for i in range(N_ori)
- for j in range(N_gen)
- ]
- for batch in utils_misc.batcher(dataset, batch_size=20):
- if self.model_name == "decomp":
- batch_evids, batch_conts, batch_neuts = [], [], []
- batch_json = [
- {"premise": d["premise"], "hypothesis": d["hypothesis"]}
- for d in batch
- ]
- model_outs = self.model.predict_batch_json(batch_json)
- for out in model_outs:
- probs = out["label_probs"]
- batch_evids.append(probs[0])
- batch_conts.append(probs[1])
- batch_neuts.append(probs[2])
-
- else:
- batch_prems = [b["premise"] for b in batch]
- batch_hypos = [b["hypothesis"] for b in batch]
- batch_tokens = self.tokenizer.batch_encode_plus(
- list(zip(batch_prems, batch_hypos)),
- padding=True,
- truncation=True,
- max_length=self.max_input_length,
- return_tensors="pt",
- truncation_strategy="only_first",
- )
- batch_tokens = {
- k: v.to(self.device) for k, v in batch_tokens.items()
- }
- with torch.no_grad():
- model_outputs = self.model(**batch_tokens)
-
- batch_probs = torch.nn.functional.softmax(
- model_outputs["logits"], dim=-1
- )
- batch_evids = batch_probs[:, self.entailment_idx].tolist()
- batch_conts = batch_probs[:, self.contradiction_idx].tolist()
- batch_neuts = batch_probs[:, self.neutral_idx].tolist()
-
- for b, evid, cont, neut in zip(
- batch, batch_evids, batch_conts, batch_neuts
- ):
- image[0, b["doc_i"], b["gen_i"]] = evid
- image[1, b["doc_i"], b["gen_i"]] = cont
- image[2, b["doc_i"], b["gen_i"]] = neut
-
- if self.use_cache:
- self.cache[cache_key] = image
- return image
-
- def get_cache_file(self):
- return os.path.join(
- self.cache_folder,
- "cache_%s_%s.json" % (self.model_name, self.granularity),
+ model_name = "vitc" if model_name is None else model_name
+ self.granularity = "sentence" if granularity is None else granularity
+ self.device = (
+ device
+ if device is not None
+ else "cuda"
+ if torch.cuda.is_available()
+ else "cpu"
)
+ super().__init__(model_name, *args, **kwargs)
- def save_cache(self):
- cache_cp = {"[///]".join(k): v.tolist() for k, v in self.cache.items()}
- with open(self.get_cache_file(), "w") as f:
- json.dump(cache_cp, f)
-
- def load_cache(self):
- cache_file = self.get_cache_file()
- if os.path.isfile(cache_file):
- with open(cache_file, "r") as f:
- cache_cp = json.load(f)
- self.cache = {
- tuple(k.split("[///]")): np.array(v)
- for k, v in cache_cp.items()
- }
-
-
-class SummaCConv(torch.nn.Module):
- def __init__(
- self,
- models=["mnli", "anli", "vitc"],
- bins="even50",
- granularity="sentence",
- nli_labels="e",
- device="cuda",
- start_file=None,
- imager_load_cache=True,
- agg="mean",
- norm_histo=False,
- **kwargs
- ):
- # `bins` should be `even%d` or `percentiles`
- assert nli_labels in [
- "e",
- "c",
- "n",
- "ec",
- "en",
- "cn",
- "ecn",
- ], "Unrecognized nli_labels argument %s" % (nli_labels)
-
- super(SummaCConv, self).__init__()
- self.device = device
- self.models = models
-
- self.imagers = []
- for model_name in models:
- self.imagers.append(
- SummaCImager(
- model_name=model_name, granularity=granularity, **kwargs
- )
- )
- if imager_load_cache:
- for imager in self.imagers:
- imager.load_cache()
- assert len(self.imagers) > 0, "Imager names were empty or unrecognized"
-
- if "even" in bins:
- n_bins = int(bins.replace("even", ""))
- self.bins = list(np.arange(0, 1, 1 / n_bins)) + [1.0]
- elif bins == "percentile":
- self.bins = [
- 0.0,
- 0.01,
- 0.02,
- 0.03,
- 0.04,
- 0.07,
- 0.13,
- 0.37,
- 0.90,
- 0.91,
- 0.92,
- 0.93,
- 0.94,
- 0.95,
- 0.955,
- 0.96,
- 0.965,
- 0.97,
- 0.975,
- 0.98,
- 0.985,
- 0.99,
- 0.995,
- 1.0,
- ]
-
- self.nli_labels = nli_labels
- self.n_bins = len(self.bins) - 1
- self.norm_histo = norm_histo
- self.n_rows = 10
- self.n_labels = 2
- self.n_depth = len(self.imagers) * len(self.nli_labels)
- self.full_size = self.n_depth * self.n_bins
- if self.norm_histo:
- self.full_size += (
- 2 # Will explicitely give the count of originals and generateds
- )
-
- self.agg = agg
-
- self.mlp = torch.nn.Linear(self.full_size, 1).to(device)
- self.layer_final = torch.nn.Linear(3, self.n_labels).to(device)
-
- if start_file is not None:
- print(self.load_state_dict(torch.load(start_file)))
-
- def build_image(self, original, generated):
- images = [
- imager.build_image(original, generated) for imager in self.imagers
- ]
- image = np.concatenate(images, axis=0)
- return image
-
- def compute_histogram(self, original=None, generated=None, image=None):
- # Takes the two texts, and generates a (n_rows, 2*n_bins)
-
- if image is None:
- image = self.build_image(original, generated)
-
- N_depth, N_ori, N_gen = image.shape
-
- full_histogram = []
- for i_gen in range(N_gen):
- histos = []
-
- for i_depth in range(N_depth):
- if (
- (i_depth % 3 == 0 and "e" in self.nli_labels)
- or (i_depth % 3 == 1 and "c" in self.nli_labels)
- or (i_depth % 3 == 2 and "n" in self.nli_labels)
- ):
- histo, X = np.histogram(
- image[i_depth, :, i_gen],
- range=(0, 1),
- bins=self.bins,
- density=self.norm_histo,
- )
- histos.append(histo)
-
- if self.norm_histo:
- histos = [[N_ori, N_gen]] + histos
- histogram_row = np.concatenate(histos)
- full_histogram.append(histogram_row)
-
- n_rows_missing = self.n_rows - len(full_histogram)
- full_histogram += [[0.0] * self.full_size] * n_rows_missing
- full_histogram = full_histogram[: self.n_rows]
- full_histogram = np.array(full_histogram)
- return image, full_histogram
-
- def forward(self, originals, generateds, images=None):
- if images is not None:
- # In case they've been pre-computed.
- histograms = []
- for image in images:
- _, histogram = self.compute_histogram(image=image)
- histograms.append(histogram)
- else:
- images, histograms = [], []
- for original, generated in zip(originals, generateds):
- image, histogram = self.compute_histogram(
- original=original, generated=generated
- )
- images.append(image)
- histograms.append(histogram)
-
- N = len(histograms)
- histograms = torch.FloatTensor(histograms).to(self.device)
-
- non_zeros = (torch.sum(histograms, dim=-1) != 0.0).long()
- seq_lengths = non_zeros.sum(dim=-1).tolist()
-
- mlp_outs = self.mlp(histograms).reshape(N, self.n_rows)
- features = []
-
- for mlp_out, seq_length in zip(mlp_outs, seq_lengths):
- if seq_length > 0:
- Rs = mlp_out[:seq_length]
- if self.agg == "mean":
- features.append(
- torch.cat(
- [
- torch.mean(Rs).unsqueeze(0),
- torch.mean(Rs).unsqueeze(0),
- torch.mean(Rs).unsqueeze(0),
- ]
- ).unsqueeze(0)
- )
- elif self.agg == "min":
- features.append(
- torch.cat(
- [
- torch.min(Rs).unsqueeze(0),
- torch.min(Rs).unsqueeze(0),
- torch.min(Rs).unsqueeze(0),
- ]
- ).unsqueeze(0)
- )
- elif self.agg == "max":
- features.append(
- torch.cat(
- [
- torch.max(Rs).unsqueeze(0),
- torch.max(Rs).unsqueeze(0),
- torch.max(Rs).unsqueeze(0),
- ]
- ).unsqueeze(0)
- )
- elif self.agg == "all":
- features.append(
- torch.cat(
- [
- torch.min(Rs).unsqueeze(0),
- torch.mean(Rs).unsqueeze(0),
- torch.max(Rs).unsqueeze(0),
- ]
- ).unsqueeze(0)
- )
- else:
- features.append(
- torch.FloatTensor([0.0, 0.0, 0.0]).unsqueeze(0)
- ) # .cuda()
- features = torch.cat(features)
- logits = self.layer_final(features)
- histograms_out = [histogram.cpu().numpy() for histogram in histograms]
- return logits, histograms_out, images
-
- def save_imager_cache(self):
- for imager in self.imagers:
- imager.save_cache()
-
- def score(self, originals, generateds, **kwargs):
- with torch.no_grad():
- logits, histograms, images = self.forward(originals, generateds)
- probs = torch.nn.functional.softmax(logits, dim=-1)
- batch_scores = probs[:, 1].tolist()
- return {
- "scores": batch_scores
- } # , "histograms": histograms, "images": images
-
-
-class SummaCZS:
- def __init__(
+ def load_model(
self,
- model_name="mnli",
- granularity="paragraph",
- op1="max",
- op2="mean",
- use_ent=True,
- use_con=True,
- imager_load_cache=True,
- device="cuda",
+        op1: str = "max",
+        op2: str = "mean",
+        use_ent: bool = True,
+        use_con: bool = True,
+        image_load_cache: bool = True,
**kwargs
):
- assert op2 in ["min", "mean", "max"], "Unrecognized `op2`"
- assert op1 in ["max", "mean", "min"], "Unrecognized `op1`"
-
- self.imager = SummaCImager(
- model_name=model_name,
- granularity=granularity,
- device=device,
+ return _SummaCZS(
+ model_name=self.model_name,
+ granularity=self.granularity,
+ device=self.device,
+ op1=op1,
+ op2=op2,
+ use_con=use_con,
+ use_ent=use_ent,
+ imager_load_cache=image_load_cache,
**kwargs
)
- if imager_load_cache:
- self.imager.load_cache()
- self.op2 = op2
- self.op1 = op1
- self.use_ent = use_ent
- self.use_con = use_con
-
- def save_imager_cache(self):
- self.imager.save_cache()
-
- def score_one(self, original, generated):
- image = self.imager.build_image(original, generated)
-
- ent_scores = np.max(image[0], axis=0)
- co_scores = np.max(image[1], axis=0)
- if self.op1 == "mean":
- ent_scores = np.mean(image[0], axis=0)
- co_scores = np.mean(image[1], axis=0)
- elif self.op1 == "min":
- ent_scores = np.min(image[0], axis=0)
- co_scores = np.min(image[1], axis=0)
-
- if self.use_ent and self.use_con:
- scores = ent_scores - co_scores
- elif self.use_ent:
- scores = ent_scores
- elif self.use_con:
- scores = 1.0 - co_scores
- final_score = np.mean(scores)
- if self.op2 == "min":
- final_score = np.min(scores)
- elif self.op2 == "max":
- final_score = np.max(scores)
-
- return {"score": final_score, "image": image}
-
- def score(self, sources, generateds, **kwargs):
- output = {"scores": [], "images": []}
- for source, gen in zip(sources, generateds):
- score = self.score_one(source, gen)
- output["scores"].append(score["score"])
- output["images"].append(score["image"])
- return output
+    def _call(
+        self, predictions: Union[str, List[str]], targets: Union[str, List[str]]
+    ) -> Union[float, dict]:
+        # Batch scoring when both arguments are lists of strings.
+        # Note: isinstance checks are used here; typing.get_origin only
+        # inspects type annotations, not runtime values.
+        if isinstance(predictions, list) and isinstance(targets, list):
+            return self.model.score(targets, predictions)
+        # Single-pair scoring when both arguments are plain strings
+        elif isinstance(predictions, str) and isinstance(targets, str):
+            return self.model.score_one(targets, predictions)
+        else:
+            raise TypeError(
+                "Both predictions and targets must be lists, or both must be strings"
+            )
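+
+
+# Example usage (a sketch; assumes this class is `SummaCModels`, as imported
+# elsewhere via `from deepeval.models import SummaCModels`):
+#
+#   model = SummaCModels()
+#   result = model(prediction_text, source_text)
+#   # _call maps to SummaCZS.score_one(source, prediction),
+#   # returning {"score": ..., "image": ...}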
diff --git a/deepeval/models/unbias_model.py b/deepeval/models/unbias_model.py
new file mode 100644
index 000000000..5104b214f
--- /dev/null
+++ b/deepeval/models/unbias_model.py
@@ -0,0 +1,18 @@
+from typing import Optional
+from deepeval.models.base import DeepEvalBaseModel
+
+
+class UnBiasedModel(DeepEvalBaseModel):
+    def __init__(self, model_name: Optional[str] = None, *args, **kwargs):
+ model_name = "original" if model_name is None else model_name
+ super().__init__(model_name, *args, **kwargs)
+
+    def load_model(self):
+        try:
+            from Dbias.bias_classification import classifier
+        except ImportError as e:
+            # Re-raise so `classifier` is never referenced while unbound
+            raise ImportError(
+                "Dbias is not installed. Run `pip install deepeval[bias]`"
+            ) from e
+        return classifier
+
+ def _call(self, text):
+ return self.model(text)
diff --git a/deepeval/scorer/scorer.py b/deepeval/scorer/scorer.py
index 6e76056a8..f7a19f643 100644
--- a/deepeval/scorer/scorer.py
+++ b/deepeval/scorer/scorer.py
@@ -4,7 +4,6 @@
from nltk.translate.bleu_score import sentence_bleu
from typing import Union, List, Optional, Any
from deepeval.utils import normalize_text
-from deepeval.models.summac_model import SummaCZS
# TODO: More scores are to be added
@@ -175,7 +174,12 @@ def bert_score(
@classmethod
def faithfulness_score(
- cls, target: str, prediction: str, model: Optional[str] = None
+ cls,
+ target: str,
+ prediction: str,
+ model: Optional[str] = None,
+ granularity: Optional[str] = None,
+ device: Optional[str] = None,
) -> float:
"""Calculate the faithfulness score of a prediction compared to a target text using SummaCZS.
@@ -189,16 +193,18 @@ def faithfulness_score(
Returns:
float: The computed faithfulness score. Higher values indicate greater faithfulness to the target text.
+
+        Note: right now we use the `score_one` method under the hood, so a
+        single prediction is scored rather than multiple predictions at once.
"""
- model = "vitc" if model is None else model
- device = "cuda" if torch.cuda.is_available() else "cpu"
- scorer = SummaCZS(
- granularity="sentence",
- model_name=model,
- imager_load_cache=False,
- device=device,
+ try:
+ from deepeval.models import SummaCModels
+ except Exception as e:
+ print(f"SummaCZS model can not be loaded.\n{e}")
+
+ scorer = SummaCModels(
+ model_name=model, granularity=granularity, device=device
)
- return scorer.score_one(target, prediction)["score"]
+        # _call(predictions, targets) maps to score_one(targets, predictions),
+        # so pass the prediction first to score it against the target
+        return scorer(prediction, target)["score"]
@classmethod
def hallucination_score(
@@ -221,11 +227,10 @@ def hallucination_score(
HallucinationModel,
)
except ImportError as e:
- print(e)
- model = "vectara-hallucination" if model is None else model
-
+            print(
+                f"Vectara hallucination detection model cannot be loaded.\n{e}"
+            )
scorer = HallucinationModel(model_name=model)
-
return scorer.model.predict([source, prediction])
@classmethod
@@ -236,7 +241,7 @@ def PII_score(
@classmethod
def neural_toxic_score(
- cls, prediction: str, model: Optional[Any] = None
+ cls, prediction: str, model: Optional[str] = None
) -> Union[float, dict]:
"""
Calculate the toxicity score of a given text prediction using the Detoxify model.
@@ -267,22 +272,97 @@ def neural_toxic_score(
        If the model is 'multilingual', we get a dict the same as the unbiased one.
"""
try:
- from detoxify import Detoxify
+ from deepeval.models import DetoxifyModel
except ImportError as e:
- print(e)
+ print(f"Unable to import.\n {e}")
+ scorer = DetoxifyModel(model_name=model)
+ return scorer(prediction)
- device = "cuda" if torch.cuda.is_available() else "cpu"
- if model is not None:
- assert model in [
- "original",
- "unbiased",
- "multilingual",
- ], "Invalid model. Available variants: original, unbiased, multilingual"
- detoxify_model = Detoxify(model, device=device)
+ @classmethod
+ def answer_relevancy_score(
+ cls,
+ predictions: Union[str, List[str]],
+ target: str,
+ model_type: Optional[str] = None,
+ model_name: Optional[str] = None,
+ ) -> float:
+ """Calculates the Answer relevancy score.
+
+ Args:
+ predictions (Union[str, List[str]]): The predictions from the model.
+ target (str): The target on which we need to check relevancy.
+ model_name (str): The type of the answer relevancy model. This can be either an self_encoder or a cross_encoder. By default it is cross_encoder.
+ model_name (Optional[str], optional): The name of the model. Defaults to None.
+
+ Returns:
+ float: Answer relevancy score.
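+
+        Example (assuming the enclosing class is named `Scorer`):
+            >>> Scorer.answer_relevancy_score(
+            ...     predictions="We offer a 30-day full refund at no extra cost.",
+            ...     target="What if these shoes don't fit?",
+            ... )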
+ """
+ from sentence_transformers import util
+
+ try:
+ from deepeval.models import (
+ AnswerRelevancyModel,
+ CrossEncoderAnswerRelevancyModel,
+ )
+ except Exception as e:
+ print(f"Unable to load AnswerRelevancyModel model.\n{e}")
+
+ if model_type is not None:
+ assert model_type in [
+ "self_encoder",
+ "cross_encoder",
+ ], "model_type can be either 'self_encoder' or 'cross_encoder'"
+
+ model_type = "cross_encoder" if model_type is None else model_type
+
+ if model_type == "cross_encoder":
+ assert isinstance(
+ predictions, str
+ ), "When model_type is 'cross_encoder', you can compare with one prediction and one target."
+ answer_relevancy_model = CrossEncoderAnswerRelevancyModel(
+ model_name=model_name
+ )
+ score = answer_relevancy_model(predictions, target)
else:
- detoxify_model = Detoxify("original", device=device)
- toxicity_score_dict = detoxify_model.predict(prediction)
- mean_toxicity_score = sum(list(toxicity_score_dict.values())) / len(
- toxicity_score_dict
- )
- return mean_toxicity_score, toxicity_score_dict
+ answer_relevancy_model = AnswerRelevancyModel(model_name=model_name)
+ docs = (
+ [predictions] if isinstance(predictions, str) else predictions
+ )
+ query_embedding = answer_relevancy_model(target)
+ document_embedding = answer_relevancy_model(docs)
+ scores = (
+ util.dot_score(query_embedding, document_embedding)[0]
+ .cpu()
+ .tolist()
+ )
+ score = scores[0]
+ return score
+
+ @classmethod
+ def factual_consistency_score(
+ cls,
+ contexts: Union[List[str], str],
+ prediction: str,
+ model: Optional[str] = None,
+ ) -> float:
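+        """Calculate the factual consistency of a prediction against one or more contexts.
+
+        Each context is scored independently against the prediction, and the
+        maximum score across contexts is returned.
+        """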
+ try:
+ from deepeval.models import FactualConsistencyModel
+ except Exception as e:
+ print(f"Unable to load FactualConsistencyModel\n{e}")
+
+ scorer = FactualConsistencyModel(model)
+ contexts = [contexts] if isinstance(contexts, str) else contexts
+ max_score = 0
+ for context in contexts:
+ score = scorer.predict(context, prediction)
+ max_score = max(max_score, score)
+ return max_score
+
+ @classmethod
+ def neural_bias_score(cls, text: str, model: Optional[str] = None) -> float:
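+        """Calculate a bias score for a given text using the Dbias-based UnBiasedModel.
+
+        Args:
+            text (str): The text to evaluate for bias.
+            model (Optional[str]): The name of the underlying model variant. Defaults to None.
+        """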
+ try:
+ from deepeval.models import UnBiasedModel
+ except Exception as e:
+ print(f"Unable to load UnBiasedModel.\n{e}")
+ scorer = UnBiasedModel(model_name=model)
+ return scorer(text)
diff --git a/deepeval/test_run.py b/deepeval/test_run.py
index 1e466c6db..85d3454b7 100644
--- a/deepeval/test_run.py
+++ b/deepeval/test_run.py
@@ -4,7 +4,6 @@
from typing import Any, Optional, List, Dict
from deepeval.metrics import BaseMetric
from deepeval.test_case import LLMTestCase
-from collections import defaultdict
from deepeval.tracing import get_trace_stack
from deepeval.constants import PYTEST_RUN_TEST_NAME
from deepeval.decorators.hyperparameters import get_hyperparameters
@@ -281,7 +280,7 @@ def post_test_run(self, test_run: TestRun):
body = test_run.dict(by_alias=True, exclude_none=True)
api = Api()
result = api.post_request(
- endpoint=Endpoints.CREATE_TEST_RUN_ENDPOINT.value,
+ endpoint=Endpoints.TEST_RUN_ENDPOINT.value,
body=body,
)
response = TestRunHttpResponse(
diff --git a/deepeval/tracing/__init__.py b/deepeval/tracing/__init__.py
new file mode 100644
index 000000000..76ea3893c
--- /dev/null
+++ b/deepeval/tracing/__init__.py
@@ -0,0 +1 @@
+from .tracing import trace, TraceType, get_trace_stack
diff --git a/deepeval/tracing.py b/deepeval/tracing/tracing.py
similarity index 100%
rename from deepeval/tracing.py
rename to deepeval/tracing/tracing.py
diff --git a/docs/assets/3-step-metrics.png b/docs/assets/3-step-metrics.png
deleted file mode 100644
index 6a2c246e0..000000000
Binary files a/docs/assets/3-step-metrics.png and /dev/null differ
diff --git a/docs/assets/bulk-review.png b/docs/assets/bulk-review.png
deleted file mode 100644
index 42ea21add..000000000
Binary files a/docs/assets/bulk-review.png and /dev/null differ
diff --git a/docs/assets/deepeval-cli-reveal.png b/docs/assets/deepeval-cli-reveal.png
deleted file mode 100644
index dc871a3a5..000000000
Binary files a/docs/assets/deepeval-cli-reveal.png and /dev/null differ
diff --git a/docs/assets/llm-evaluation-framework-example.png b/docs/assets/llm-evaluation-framework-example.png
deleted file mode 100644
index b45e3463e..000000000
Binary files a/docs/assets/llm-evaluation-framework-example.png and /dev/null differ
diff --git a/docs/assets/llm-evaluation-framework.png b/docs/assets/llm-evaluation-framework.png
deleted file mode 100644
index cd638390f..000000000
Binary files a/docs/assets/llm-evaluation-framework.png and /dev/null differ
diff --git a/docs/assets/synthetic-query-generation.png b/docs/assets/synthetic-query-generation.png
deleted file mode 100644
index 091b36710..000000000
Binary files a/docs/assets/synthetic-query-generation.png and /dev/null differ
diff --git a/docs/docs/confident-ai-analyze-evaluations.mdx b/docs/docs/confident-ai-analyze-evaluations.mdx
new file mode 100644
index 000000000..f8f2f6af1
--- /dev/null
+++ b/docs/docs/confident-ai-analyze-evaluations.mdx
@@ -0,0 +1,57 @@
+---
+id: confident-ai-analyze-evaluations
+title: Analyzing Evals
+sidebar_label: Analyzing Evals
+---
+
+## Quick Summary
+
+Confident AI keeps track of your evaluation histories in both development and deployment and allows you to:
+
+- visualize evaluation results
+- compare and select optimal hyperparameters (eg. prompt templates, model used, etc.) for each test run
+
+## Visualize Evaluation Results
+
+Once logged in via `deepeval login`, all evaluations executed using `deepeval test run`, `evaluate(dataset, metrics)`, or `dataset.evaluate(metrics)` will automatically have their results available on Confident.
+
+![ok](https://d2lsxfc3p6r9rv.cloudfront.net/confident-test-cases.png)
+
+## Compare Hyperparameters
+
+Begin by associating hyperparameters with each test run:
+
+```python title=test_example.py
+import deepeval
+from deepeval import assert_test
+from deepeval.metrics import HallucinationMetric
+from deepeval.test_case import LLMTestCase
+
+def test_hallucination():
+ metric = HallucinationMetric()
+ test_case = LLMTestCase(...)
+ assert_test(test_case, [metric])
+
+
+# Although the values in this example are hardcoded,
+# you should ideally pass in variables as values to keep things dynamic
+@deepeval.set_hyperparameters
+def hyperparameters():
+ return {
+ "chunk_size": 500,
+ "temperature": 0,
+ "model": "GPT-4",
+ "prompt_template": """You are a helpful assistant, answer the following question in a non-judgemental tone.
+
+ Question:
+ {question}
+ """,
+ }
+```
+
+:::note
+This only works if you're running evaluations using `deepeval test run`. If you're not already using `deepeval test run` for evaluations, we highly recommend you start using it.
+:::
+
+That's all! All test runs will now log hyperparameters for you to compare and optimize on.
+
+![ok](https://d2lsxfc3p6r9rv.cloudfront.net/compare-hyperparameters.png)
diff --git a/docs/docs/evaluation-tracing.mdx b/docs/docs/confident-ai-debug-evaluations.mdx
similarity index 85%
rename from docs/docs/evaluation-tracing.mdx
rename to docs/docs/confident-ai-debug-evaluations.mdx
index c41839602..c4aa15364 100644
--- a/docs/docs/evaluation-tracing.mdx
+++ b/docs/docs/confident-ai-debug-evaluations.mdx
@@ -1,12 +1,14 @@
---
-id: evaluation-tracing
-title: Tracing
-sidebar_label: Tracing
+id: confident-ai-debug-evaluations
+title: Debugging Evals
+sidebar_label: Debugging Evals
---
## Quick Summary
-Tracing in the context of evaluating LLM applications provides a quick and easy way for you to identify why certain test cases are failing on specific metrics. From chunking to embedding, retrieval to generation, tracing allows you to debug your LLM application pipeline at a component level.
+Confident AI uses "tracing" to help debug unsatisfactory evaluation results. Tracing in the context of evaluating LLM applications provides a quick and easy way for you to identify why certain test cases are failing on specific metrics.
+
+From chunking to embedding, retrieval to generation, tracing allows you to debug your LLM application pipeline at a component level.
![ok](https://d2lsxfc3p6r9rv.cloudfront.net/tracing.png)
@@ -124,7 +126,11 @@ class Chatbot:
Applying the `@trace` decorator will automatically log LLM traces each time `chatbot.query()` is called during `deepeval test run`. This will allow you to debug failing test cases by inspecting individual trace stacks on Confident AI.
-## Log Your First Trace
+:::note
+Currently, tracing only works if you're running evaluations using `deepeval test run`, and generating `actual_output`s from your LLM application at evaluation time (ie. tracing does not work with pre-computed outputs).
+:::
+
+## Debugging with Traces
Continuing from the previous code snippet where you've defined your `Chatbot` class, paste in the following test case to evaluate whether your LLM application is outputting factually correct answers.
@@ -132,9 +138,9 @@ Continuning from the previous code snippet where you've defined your `Chatbot` c
...
import pytest
+from deepeval import assert_test
from deepeval.test_case import LLMTestCase
from deepeval.metrics import HallucinationMetric
-from deepeval.evaluator import assert_test
chatbot = Chatbot()
@@ -155,19 +161,13 @@ def test_hallucination():
assert_test(test_case, [metric])
```
-[Login to Confident AI](https://app.confident-ai.com/login) to start tracing your LLM application for each test case.
-
-```
-deepeval login
-```
-
-Follow the instructions displayed on the CLI to create an account, get your Confident API key, and paste it in the CLI. Once you're logged in, run `deepeval test run`:
+Lastly, run `deepeval test run`:
```
deepeval test run test_chatbot.py
```
-You should see the test case has failed, but that' ok because it's meant to fail. Paste the link returned from the CLI into the same browser you logged in with to view and debug why your test case failed.
+You can now go to Confident AI to debug failing test cases with tracing.
![ok](https://d2lsxfc3p6r9rv.cloudfront.net/confident-tracing.gif)
diff --git a/docs/docs/confident-ai-evaluate-datasets.mdx b/docs/docs/confident-ai-evaluate-datasets.mdx
new file mode 100644
index 000000000..8489b969b
--- /dev/null
+++ b/docs/docs/confident-ai-evaluate-datasets.mdx
@@ -0,0 +1,79 @@
+---
+id: confident-ai-evaluate-datasets
+title: Evaluating Datasets
+sidebar_label: Evaluating Datasets
+---
+
+## Quick Summary
+
+You can pull evaluation datasets from Confident AI and run evaluations using `deepeval` as described in the [datasets section](evaluation-datasets).
+
+## Pull Your Dataset From Confident AI
+
+Pull datasets from Confident by specifying its `alias`:
+
+```python
+from deepeval.dataset import EvaluationDataset
+
+# Initialize empty dataset object
+dataset = EvaluationDataset()
+
+# Pull from Confident
+dataset.pull(alias="My Confident Dataset")
+```
+
+## Evaluate Your Dataset
+
+You can start running evaluations as usual once you have your dataset pulled from Confident AI. Remember, a dataset is simply a list of test cases, so what you previously learned on [evaluating test cases](evaluation-test-cases#assert-test-cases) still applies.
+
+:::note
+The term "evaluations" and "test run" means the same and is often used interchangebly throughout this documentation.
+:::
+
+### With Pytest (highly recommended)
+
+```python title="test_example.py"
+import pytest
+from deepeval import assert_test
+from deepeval.metrics import HallucinationMetric
+from deepeval.dataset import EvaluationDataset
+from deepeval.test_case import LLMTestCase
+
+# Initialize empty dataset object
+dataset = EvaluationDataset()
+
+# Pull from Confident
+dataset.pull(alias="My Confident Dataset")
+
+@pytest.mark.parametrize(
+ "test_case",
+ dataset,
+)
+def test_customer_chatbot(test_case: LLMTestCase):
+ hallucination_metric = HallucinationMetric(minimum_score=0.3)
+ assert_test(test_case, [hallucination_metric])
+```
+
+Don't forget to run `deepeval test run` in the CLI:
+
+```console
+deepeval test run test_example.py
+```
+
+### Without Pytest
+
+```python
+from deepeval import evaluate
+from deepeval.metrics import HallucinationMetric
+from deepeval.dataset import EvaluationDataset
+
+hallucination_metric = HallucinationMetric(minimum_score=0.3)
+
+# Initialize empty dataset object and pull from Confident
+dataset = EvaluationDataset()
+dataset.pull(alias="My Confident Dataset")
+
+dataset.evaluate([hallucination_metric])
+
+# You can also call the evaluate() function directly
+evaluate(dataset, [hallucination_metric])
+```
diff --git a/docs/docs/confident-ai-introduction.mdx b/docs/docs/confident-ai-introduction.mdx
new file mode 100644
index 000000000..7cc3a0bcb
--- /dev/null
+++ b/docs/docs/confident-ai-introduction.mdx
@@ -0,0 +1,41 @@
+---
+id: confident-ai-introduction
+title: Introduction
+sidebar_label: Introduction
+---
+
+## Quick Summary
+
+Confident AI was designed for teams to bring LLM evaluations from development to production. It is an all-in-one platform that unlocks `deepeval`'s full potential by allowing you to:
+
+- evaluate LLM applications continuously in production
+- centralize and standardize evaluation datasets on the cloud
+- trace and debug LLM applications during evaluation
+- keep track of the evaluation history of your LLM application
+- generate evaluation-based summary reports for relevant stakeholders
+
+## Continuous Evaluation
+
+Continuous evaluation refers to the process of evaluating LLM applications not just in development, but also in production, throughout the lifetime of your LLM application. Here's a quick diagram outlining how Confident AI enables this process:
+
+
+
+
+
+Everything in `deepeval` is already automatically integrated with Confident AI, including `deepeval`'s [custom metrics](evaluation-metrics#custom-metrics). To start using Confident AI with `deepeval`, simply log in via the CLI:
+
+```
+deepeval login
+```
+
+Follow the instructions displayed on the CLI (to create an account, get your Confident API key, paste it in the CLI), and you're good to go.
diff --git a/docs/docs/confident-ai-manage-datasets.mdx b/docs/docs/confident-ai-manage-datasets.mdx
new file mode 100644
index 000000000..bea4f0d78
--- /dev/null
+++ b/docs/docs/confident-ai-manage-datasets.mdx
@@ -0,0 +1,97 @@
+---
+id: confident-ai-manage-datasets
+title: Managing Datasets
+sidebar_label: Managing Datasets
+---
+
+## Quick Summary
+
+Confident AI provides your team a centralized place to create and edit evaluation datasets. You can manage evaluation datasets either using `deepeval` or directly on Confident AI.
+
+## Create Your Dataset Using DeepEval
+
+Creating an `EvaluationDataset` on Confident using `deepeval` is a two-step process:
+
+1. Create a dataset locally (same as how you would create a dataset as shown in the [datasets section](evaluation-datasets))
+2. Push the created dataset to Confident
+
+### Create A Dataset Locally
+
+```python
+from deepeval.test_case import LLMTestCase
+from deepeval.dataset import EvaluationDataset
+
+original_dataset = [
+ {
+ "input": "What are your operating hours?",
+ "actual_output": "...",
+ "context": [
+ "Our company operates from 10 AM to 6 PM, Monday to Friday.",
+ "We are closed on weekends and public holidays.",
+ "Our customer service is available 24/7.",
+ ],
+ },
+ {
+ "input": "Do you offer free shipping?",
+ "actual_output": "...",
+ "expected_output": "Yes, we offer free shipping on orders over $50.",
+ },
+ {
+ "input": "What is your return policy?",
+ "actual_output": "...",
+ },
+]
+
+test_cases = []
+for datapoint in original_dataset:
+ input = datapoint.get("input", None)
+ actual_output = datapoint.get("actual_output", None)
+ expected_output = datapoint.get("expected_output", None)
+ context = datapoint.get("context", None)
+
+ test_case = LLMTestCase(
+ input=input,
+ actual_output=actual_output,
+ expected_output=expected_output,
+ context=context
+ )
+ test_cases.append(test_case)
+
+dataset = EvaluationDataset(test_cases=test_cases)
+```
+
+### Push Dataset to Confident AI
+
+After creating your `EvaluationDataset`, all you have to do is push it to Confident by providing an `alias` as a unique identifier:
+
+```python
+# Provide an alias when pushing a dataset
+dataset.push(alias="My Confident Dataset")
+```
+
+:::danger
+Pushing a dataset to Confident overwrites existing datasets with the same `alias`
+:::
+
+## Create Your Dataset on Confident AI
+
+You can alternatively create an evaluation dataset directly on Confident.
+
+1. [Login to Confident.](https://app.confident-ai.com)
+2. Find "Datasets" on the left navigation bar to go to the "Datasets" page.
+3. Click on the "Create Dataset" button.
+4. Enter an "alias", and click "Create". Click on the newly created dataset to create your first golden.
+5. In the "Edit Dataset" page, click on the "Actions" button.
+6. On the dropdown menu, click on the "Create Golden" button.
+7. Create your "Golden", click "Save", and repeat steps 5-7 until you're done building your dataset.
+
+![ok](https://d2lsxfc3p6r9rv.cloudfront.net/edit-dataset.png)
+
+## What is a Golden?
+
+A "Golden" is what makes up an evaluation dataset and is very similar to a test case in `deepeval`, but they:
+
+- do not require an `actual_output`, so whilst test cases are always ready for evaluation, a golden isn't.
+- only exists within an `EvaluationDataset()`, while test cases can be defined anywhere.
+
+We introduced the concept of goldens because they allow you to create evaluation datasets on Confident without needing to pre-compute `actual_output`s. This is especially helpful if you are looking to generate responses from your LLM application at evaluation time.
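+
+To make this concrete, here's a minimal sketch of generating `actual_output`s from pulled goldens at evaluation time. It assumes the pulled goldens are exposed on the dataset via a `goldens` attribute, and uses a hypothetical `generate()` function standing in for your LLM application:
+
+```python
+from deepeval.dataset import EvaluationDataset
+from deepeval.test_case import LLMTestCase
+
+dataset = EvaluationDataset()
+dataset.pull(alias="My Confident Dataset")
+
+test_cases = [
+    LLMTestCase(
+        input=golden.input,
+        # Hypothetical: generate() is your LLM application's entry point
+        actual_output=generate(golden.input),
+        expected_output=golden.expected_output,
+        context=golden.context,
+    )
+    for golden in dataset.goldens  # assumed attribute holding pulled goldens
+]
+```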
diff --git a/docs/docs/confident-ai-track-events.mdx b/docs/docs/confident-ai-track-events.mdx
new file mode 100644
index 000000000..b985cb42d
--- /dev/null
+++ b/docs/docs/confident-ai-track-events.mdx
@@ -0,0 +1,55 @@
+---
+id: confident-ai-track-events
+title: Tracking Production Events
+sidebar_label: Tracking Production Events
+---
+
+## Quick Summary
+
+`deepeval` allows you to track events in production to identify weaknesses in your LLM application in a real-world setting. By tracking events, you'll be able to improve your evaluation dataset over time on Confident.
+
+## Setup Tracking
+
+Simply add `deepeval.track(...)` in your application to start tracking events. The `track()` function takes in the following arguments:
+
+- `event_name`: type `str` specifying the event tracked
+- `model`: type `str` specifying the name of the LLM model used
+- `input`: type `str`
+- `output`: type `str`
+- [Optional] `distinct_id`: type `str` to identify different users using your LLM application
+- [Optional] `conversation_id`: type `str` to group together multiple messages under a single conversation thread
+- [Optional] `completion_time`: type `float` that indicates how many **seconds** it took your LLM application to complete
+- [Optional] `retrieval_context`: type `list[str]` that indicates the contexts that were retrieved in your RAG pipeline
+- [Optional] `token_usage`: type `float`
+- [Optional] `token_cost`: type `float`
+- [Optional] `additional_data`: type `dict`
+- [Optional] `fail_silently`: type `bool`, defaults to True
+
+```python
+import deepeval
+
+...
+
+# At the end of your LLM call
+deepeval.track(
+ event_name="Chatbot",
+ model="gpt-4",
+ input="input",
+ output="output",
+ distinct_id="a user Id",
+ conversation_id="a conversation thread Id",
+ retrieval_context=["..."]
+ completion_time=8.23,
+ token_usage=134,
+ token_cost=0.23,
+ additional_data={"example": "example"},
+ fail_silently=True
+)
+```
+
+## View Events on Confident AI
+
+Lastly, go to Confident's observatory to view events and identify the ones you want to augment your evaluation dataset with.
+
+![ok](https://d2lsxfc3p6r9rv.cloudfront.net/observatory.png)
diff --git a/docs/docs/evaluation-datasets.mdx b/docs/docs/evaluation-datasets.mdx
index 07eae2dfa..61b037a44 100644
--- a/docs/docs/evaluation-datasets.mdx
+++ b/docs/docs/evaluation-datasets.mdx
@@ -59,6 +59,67 @@ for datapoint in original_dataset:
dataset.add_test_case(test_case)
```
+## Load an Existing Dataset
+
+`deepeval` offers support for loading datasets stored in JSON files, CSV files, and Hugging Face datasets into an `EvaluationDataset` as test cases.
+
+### From JSON
+
+You can add test cases into your `EvaluationDataset` by supplying a `file_path` to your `.json` file. Your `.json` file should contain an array of objects (or list of dictionaries).
+
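+For reference, a matching `example.json` might look like this (field names mirror the key arguments passed below, and `context` is a list of strings):
+
+```json
+[
+  {
+    "query": "What if these shoes don't fit?",
+    "actual_output": "We offer a 30-day full refund at no extra cost.",
+    "expected_output": "You can return them within 30 days.",
+    "context": ["All customers are eligible for a 30-day full refund at no extra cost."]
+  }
+]
+```
+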
+```python
+from deepeval.dataset import EvaluationDataset
+
+dataset = EvaluationDataset()
+dataset.add_test_cases_from_json_file(
+    # file_path is the absolute path to your .json file
+ file_path="example.json",
+ input_key_name="query",
+ actual_output_key_name="actual_output",
+ expected_output_key_name="expected_output",
+ context_key_name="context",
+)
+```
+
+### From CSV
+
+You can add test cases into your `EvaluationDataset` by supplying a `file_path` to your `.csv` file. Your `.csv` file should contain rows that can be mapped into `LLMTestCase`s through their column names. Remember, `context` should be a list of strings, so for CSV files you have to supply a `context_col_delimiter` argument to tell `deepeval` how to split your context cells into a list of strings.
+
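+For reference, a matching `example.csv` might look like this (note the `context` cell, which is split on the `;` delimiter supplied below):
+
+```csv
+query,actual_output,expected_output,context
+"What if these shoes don't fit?","We offer a 30-day full refund at no extra cost.","You can return them within 30 days.","Full refund within 30 days;No extra cost for returns"
+```
+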
+```python
+from deepeval.dataset import EvaluationDataset
+
+dataset = EvaluationDataset()
+dataset.add_test_cases_from_csv_file(
+    # file_path is the absolute path to your .csv file
+ file_path="example.csv",
+ input_col_name="query",
+ actual_output_col_name="actual_output",
+ expected_output_col_name="expected_output",
+ context_col_name="context",
+    context_col_delimiter=";"
+)
+```
+
+### From Hugging Face
+
+You can also add test cases directly from a Hugging Face dataset by mapping its fields to `LLMTestCase` parameters and optionally selecting a `split`:
+
+```python
+from deepeval.dataset import EvaluationDataset
+
+dataset = EvaluationDataset()
+dataset.add_test_cases_from_hf_dataset(
+ dataset_name="Example HF dataset name",
+ input_field_name="query",
+ actual_output_field_name="actual_output",
+ expected_output_field_name="expected_output",
+ context_field_name="context",
+ split="train",
+)
+```
+
+:::note
+Since `expected_output` and `context` are optional parameters for an `LLMTestCase`, expected output and context fields are similarly **optional** parameters when adding test cases from an existing dataset.
+:::
+
## Evaluate Your Dataset With Pytest
Before we begin, we highly recommend [logging into Confident AI](https://app.confident-ai.com) to keep track of all evaluation results on the cloud:
@@ -70,7 +131,7 @@ deepeval login
`deepeval` utilizes the `@pytest.mark.parametrize` decorator to loop through entire datasets.
```python title="test_bulk.py"
-from deepeval.evaluator import assert_test
+from deepeval import assert_test
from deepeval.test_case import LLMTestCase
from deepeval.metrics import HallucinationMetric, AnswerRelevancyMetric
from deepeval.dataset import EvaluationDataset
@@ -98,7 +159,7 @@ deepeval test run test_bulk.py -n 3
Alternatively, you can use deepeval's `evaluate` function to evaluate datasets. This approach avoids the CLI, but does not allow for parallel test execution.
```python
-from deepeval.evaluator import evaluate
+from deepeval import evaluate
from deepeval.metrics import HallucinationMetric, AnswerRelevancyMetric
from deepeval.dataset import EvaluationDataset
diff --git a/docs/docs/evaluation-metrics.mdx b/docs/docs/evaluation-metrics.mdx
index 096010794..a77f58343 100644
--- a/docs/docs/evaluation-metrics.mdx
+++ b/docs/docs/evaluation-metrics.mdx
@@ -37,9 +37,9 @@ Hallucination determines whether your LLM application outputs factually correct
```python
import pytest
+from deepeval import evaluate
from deepeval.metrics import HallucinationMetric
from deepeval.test_case import LLMTestCase
-from deepeval.evaluator import run_test
# Replace this with the actual documents that you are passing as input to your LLM.
context=["A man with blond-hair, and a brown shirt drinking out of a public water fountain."]
@@ -49,7 +49,12 @@ actual_output="A blond drinking water in public.",
test_case = LLMTestCase(input="placeholder", actual_output=actual_output, context=context)
metric = HallucinationMetric(minimum_score=0.5)
-run_test(test_case, [metric])
+
+metric.measure(test_case)
+print(metric.score)
+
+# or
+# evaluate([test_case], [metric])
```
:::info
@@ -94,9 +99,9 @@ Answer Relevancy measures how relevant the `actual_output` of your LLM applicati
```python
import pytest
+from deepeval import evaluate
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase
-from deepeval.evaluator import run_test
input = "What if these shoes don't fit?"
@@ -107,26 +112,33 @@ actual_output = "We offer a 30-day full refund at no extra cost."
answer_relevancy_metric = AnswerRelevancyMetric(minimum_score=0.7)
test_case = LLMTestCase(input=input, actual_output=actual_output)
-run_test(test_case, [answer_relevancy_metric])
+answer_relevancy_metric.measure(test_case)
+print(answer_relevancy_metric.score)
+
+# or
+# evaluate([test_case], [answer_relevancy_metric])
```
## RAGAS
-`deepeval` offers the RAGAS metric, which is useful for evaluating RAG pipelines (ie. LLM applications built with RAG). The RAGAS score is calculated by taking an unweighted harmonic mean of four distinct metrics.
+`deepeval` offers the RAGAS metric, which is useful for evaluating RAG pipelines (ie. LLM applications built with RAG). The RAGAS score is calculated by taking an unweighted harmonic mean of five distinct metrics.
-1. **Faithfulness Metric**: measures hallucination to ensure output align with context. Calculated using the `actual_output` and `retrieval_context`.
+1. **Faithfulness Metric**: measures hallucination to ensure output align with context. Calculated using `actual_output` and `retrieval_context`.
-2. **Answer Relevancy Metric**: measures how relevant an answer is relative to the question. Penalizes redundancy or incompleteness. Derived from the `input` and `actual_output`.
+2. **Contextual Precision Metric**: determines whether more relevant retrieved contexts are ranked higher than less relevant ones. Calculated using `input` and `retrieval_context`.
-3. **Contextual Relevancy Metric**: assesses the relevance of retrieved contexts to input. Penalizes redundant information. Based on the `input` and `retrieval_context`.
+3. **Answer Relevancy Metric**: measures how relevant an answer is relative to the question. Penalizes redundancy or incompleteness. Derived from the `input` and `actual_output`.
-4. **Context Recall Metric**: gauges the recall of the retrieved context using the annotated answer as a reference. Based on the `expected_output` and `retrieval_context`.
+4. **Contextual Relevancy Metric**: assesses the relevance of retrieved contexts to input. Penalizes redundant information. Based on the `input` and `retrieval_context`.
-The Faithfulness and Answer Relevancy metric assess the quality of the generator in your RAG pipeline, while the Contextual Relevancy and Recall metric evaluate the performance of your retriever.
+5. **Context Recall Metric**: gauges the recall of the retrieved context using the annotated answer as a reference. Based on the `expected_output` and `retrieval_context`.
+
+The Faithfulness and Answer Relevancy metric assess the quality of the generator in your RAG pipeline, while the Contextual Relevancy, Precision, and Recall metric evaluate the performance of your retriever.
Create an `LLMTestCase` and supply all parameters to calculate the RAGAS score:
```python
+from deepeval import evaluate
from deepeval.metrics import RagasMetric
from deepeval.test_case import LLMTestCase
@@ -147,9 +159,20 @@ test_case = LLMTestCase(
retrieval_context=retrieval_context,
)
-run_test(test_case, [ragas_metric])
+ragas_metric.measure(test_case)
+print(ragas_metric.score)
+
+# You can also print out the 5 scores that make up the RAGAS score.
+print(ragas_metric.score_metadata)
+
+# or
+# evaluate([test_case], [ragas_metric])
```
+:::info
+Since the RAGAS score is the harmonic mean of 5 different scores, a zero value for any one of the scores will yield a final RAGAS score of 0.
+:::
+
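+To see why, write out the harmonic mean of the five component scores $s_1, \dots, s_5$ (LaTeX shown; rendering depends on your docs setup):
+
+$$
+\text{RAGAS} = \frac{5}{\frac{1}{s_1} + \frac{1}{s_2} + \frac{1}{s_3} + \frac{1}{s_4} + \frac{1}{s_5}}
+$$
+
+As any $s_i$ approaches 0, its reciprocal dominates the denominator, driving the overall score to 0.
+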
As mentioned earlier, the RAGAS score is the harmonic mean of five different metrics. You can however import these metrics individually and utilize them in exactly the same way as all other metrics offered by `deepeval`.
```python
@@ -175,8 +198,8 @@ pip install detoxify
Being a referenceless metric means `NonToxicMetric` requires an extra parameter named `evaluation_params`. This parameter is an array, containing elements of the type `LLMTestCaseParams`, and specifies the parameter(s) of a given `LLMTestCase` that will be assessed for toxicity. The `NonToxicMetric` will then compute a score based on the average toxicity levels of each individual component being evaluated.
```python
+from deepeval import evaluate
from deepeval.metrics import NonToxicMetric
-from deepeval.evaluator import run_test
from deepeval.test_case import LLMTestCase, LLMTestCaseParams
@@ -196,7 +219,11 @@ test_case = LLMTestCase(
actual_output=actual_output
)
-run_test(test_case, metrics=[non_toxic_metric])
+non_toxic_metric.measure(test_case)
+print(non_toxic_metric.score)
+
+# or
+# evaluate([test_case], [non_toxic_metric])
```
Notice that neither `expected_output` nor `context` is required, as `NonToxicMetric` is a referenceless metric.
@@ -216,9 +243,9 @@ pip install Dbias
`UnBiasedMetric` is similar to `NonToxicMetric` because it is also a referenceless metric.
```python
+from deepeval import evaluate
from deepeval.metrics import UnBiasedMetric
from deepeval.test_case import LLMTestCase, LLMTestCaseParams
-from deepeval.evaluator import run_test
input = "What if these shoes don't fit?"
@@ -236,7 +263,11 @@ test_case = LLMTestCase(
actual_output=actual_output
)
-run_test(test_case, [unbias_metric])
+unbias_metric.measure(test_case)
+print(unbias_metric.score)
+
+# or
+# evaluate([test_case], [unbias_metric])
```
## Custom Metrics
@@ -259,6 +290,10 @@ class LengthMetric(BaseMetric):
self.score = 1
else:
self.score = 0
+
+ # You can also set a reason for the score returned.
+ # This is particularly useful for a score computed using LLMs
+ self.reason = "..."
return self.score
def is_successful(self):
diff --git a/docs/docs/evaluation-test-cases.mdx b/docs/docs/evaluation-test-cases.mdx
index e2b1d4c6b..a54f8f928 100644
--- a/docs/docs/evaluation-test-cases.mdx
+++ b/docs/docs/evaluation-test-cases.mdx
@@ -181,8 +181,8 @@ Remember, `context` is the ideal retrieval results for a given input and typical
```python
# A hypothetical LLM application example
import chatbot
+from deepeval import run_test
from deepeval.metrics import HallucinationMetric
-from deepeval.evaluator import run_test
from deepeval.test_case import LLMTestCase
prompt_template = """
@@ -224,8 +224,8 @@ A test case passes only if all metrics meet their respective evaluation criterio
```python title="test_assert_example.py"
# A hypothetical LLM application example
import chatbot
+from deepeval import assert_test
from deepeval.metrics import HallucinationMetric
-from deepeval.evaluator import assert_test
from deepeval.test_case import LLMTestCase
prompt_template = """
@@ -276,8 +276,8 @@ Lastly, `deepeval` offers an `evaluate` function to evaluate multiple test cases
```python
# A hypothetical LLM application example
import chatbot
+from deepeval import evaluate
from deepeval.metrics import HallucinationMetric
-from deepeval.evaluator import evaluate
from deepeval.test_case import LLMTestCase
prompt_template = """
@@ -305,10 +305,10 @@ second_test_case = LLMTestCase(
context=context
)
-dataset = [first_test_case, second_test_case]
+test_cases = [first_test_case, second_test_case]
metric = HallucinationMetric(minimum_score=0.7)
-evaluate(dataset, [metric])
+evaluate(test_cases, [metric])
```
Similar to `assert_test`, `evaluate` allows you to log and view test results on Confident AI. For more examples of `evaluate`, visit the [datasets section](evaluation-datasets).
diff --git a/docs/docs/getting-started.mdx b/docs/docs/getting-started.mdx
index d380a3d6b..811802a6a 100644
--- a/docs/docs/getting-started.mdx
+++ b/docs/docs/getting-started.mdx
@@ -50,9 +50,9 @@ Run `touch test_example.py` to create a test file in your root directory. Open `
```python title="test_example.py"
import pytest
+from deepeval import assert_test
from deepeval.metrics import HallucinationMetric
from deepeval.test_case import LLMTestCase
-from deepeval.evaluator import assert_test
def test_hallucination():
input = "What if these shoes don't fit?"
@@ -96,8 +96,8 @@ export DEEPEVAL_RESULTS_FOLDER="./data"
An LLM evaluated metric is one where evaluation is carried out by an LLM. Here's how you can create a custom, LLM evaluated metric.
```python title="test_example.py"
+from deepeval import assert_test
from deepeval.metrics import LLMEvalMetric
-from deepeval.evaluator import assert_test
from deepeval.test_case import LLMTestCase, LLMTestCaseParams
...
@@ -221,9 +221,9 @@ Utilize the `@pytest.mark.parametrize` decorator to loop through and evaluate yo
```python title="test_bulk.py"
import pytest
+from deepeval import assert_test
from deepeval.metrics import HallucinationMetric
from deepeval.test_case import LLMTestCase
-from deepeval.evaluator import assert_test
dataset = [
{
@@ -304,7 +304,7 @@ You should now see a link being returned upon test completion. Paste it in your
You can also view individual test cases for enhanced debugging:
-![ok](https://d2lsxfc3p6r9rv.cloudfront.net/test-cases.png)
+![ok](https://d2lsxfc3p6r9rv.cloudfront.net/confident-test-cases.png)
### Compare Hyperparameters
@@ -331,7 +331,7 @@ def hyperparameters():
Execute `deepeval test run test_example.py` again to start comparing hyperparameters for each test run.
-![ok](https://d2lsxfc3p6r9rv.cloudfront.net/dashboard3.png)
+![ok](https://d2lsxfc3p6r9rv.cloudfront.net/compare-hyperparameters.png)
## Full Example
diff --git a/docs/sidebars.js b/docs/sidebars.js
index 5888e21b4..f3acf37de 100644
--- a/docs/sidebars.js
+++ b/docs/sidebars.js
@@ -16,7 +16,19 @@ module.exports = {
'evaluation-test-cases',
'evaluation-metrics',
'evaluation-datasets',
- 'evaluation-tracing',
+ ],
+ collapsed: false,
+ },
+ {
+ type: 'category',
+ label: 'Confident AI',
+ items: [
+ 'confident-ai-introduction',
+ 'confident-ai-manage-datasets',
+ 'confident-ai-evaluate-datasets',
+ 'confident-ai-analyze-evaluations',
+ 'confident-ai-debug-evaluations',
+ 'confident-ai-track-events'
],
collapsed: false,
},
diff --git a/docs/src/css/custom.css b/docs/src/css/custom.css
index 123678b9e..531b95107 100644
--- a/docs/src/css/custom.css
+++ b/docs/src/css/custom.css
@@ -111,3 +111,17 @@ html[data-theme='dark'] .header-discord-link:before {
html[data-theme='light'] #invertable-img {
filter: invert(100%);
}
+
+#confident-workflow {
+ width: 70%;
+}
+
+html[data-theme='dark'] #confident-workflow {
+ filter: invert(100%);
+}
+
+@media (max-width: 600px) {
+ #confident-workflow {
+ width: 100%!important
+ }
+}
\ No newline at end of file
diff --git a/examples/getting_started/test_example.py b/examples/getting_started/test_example.py
index addc99ed8..d910a2526 100644
--- a/examples/getting_started/test_example.py
+++ b/examples/getting_started/test_example.py
@@ -1,8 +1,8 @@
import pytest
+import deepeval
+from deepeval import assert_test
from deepeval.test_case import LLMTestCase, LLMTestCaseParams
-from deepeval.evaluator import assert_test
from deepeval.metrics import BaseMetric, LLMEvalMetric, HallucinationMetric
-import deepeval
# To run this file: deepeval test run .py
@@ -104,7 +104,7 @@ def test_everything():
@deepeval.set_hyperparameters
def hyperparameters():
return {
- "model": "GPT-4",
+ "model": "GPT-3",
"prompt_template": """You are a helpful assistant, answer the following question in a non-judgemental tone.
Question:
diff --git a/examples/tracing/test_chatbot.py b/examples/tracing/test_chatbot.py
index 98861909b..62177cf5d 100644
--- a/examples/tracing/test_chatbot.py
+++ b/examples/tracing/test_chatbot.py
@@ -1,5 +1,7 @@
from deepeval.tracing import trace, TraceType
-import openai
+from openai import OpenAI
+
+client = OpenAI()
class Chatbot:
@@ -8,7 +10,7 @@ def __init__(self):
@trace(type=TraceType.LLM, name="OpenAI", model="gpt-4")
def llm(self, input):
- response = openai.ChatCompletion.create(
+ response = client.chat.completions.create(
model="gpt-4",
messages=[
{
@@ -26,10 +28,14 @@ def llm(self, input):
model="text-embedding-ada-002",
)
def get_embedding(self, input):
- response = openai.Embedding.create(
- input=input, model="text-embedding-ada-002"
+        embedding = (
+ client.embeddings.create(
+ input=input, model="text-embedding-ada-002"
+ )
+ .data[0]
+ .embedding
)
- return response["data"][0]["embedding"]
+        return embedding
@trace(type=TraceType.RETRIEVER, name="Retriever")
def retriever(self, input=input):
@@ -62,9 +68,9 @@ def query(self, user_input=input):
import pytest
+from deepeval import assert_test
from deepeval.test_case import LLMTestCase
from deepeval.metrics import HallucinationMetric
-from deepeval.evaluator import assert_test
chatbot = Chatbot()
diff --git a/poetry.lock b/poetry.lock
index e55294055..624c4cb0a 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -13,87 +13,87 @@ files = [
[[package]]
name = "aiohttp"
-version = "3.9.0"
+version = "3.9.1"
description = "Async http client/server framework (asyncio)"
optional = false
python-versions = ">=3.8"
files = [
- {file = "aiohttp-3.9.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:6896b8416be9ada4d22cd359d7cb98955576ce863eadad5596b7cdfbf3e17c6c"},
- {file = "aiohttp-3.9.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:1736d87dad8ef46a8ec9cddd349fa9f7bd3a064c47dd6469c0d6763d3d49a4fc"},
- {file = "aiohttp-3.9.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8c9e5f4d7208cda1a2bb600e29069eecf857e6980d0ccc922ccf9d1372c16f4b"},
- {file = "aiohttp-3.9.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8488519aa05e636c5997719fe543c8daf19f538f4fa044f3ce94bee608817cff"},
- {file = "aiohttp-3.9.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5ab16c254e2312efeb799bc3c06897f65a133b38b69682bf75d1f1ee1a9c43a9"},
- {file = "aiohttp-3.9.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7a94bde005a8f926d0fa38b88092a03dea4b4875a61fbcd9ac6f4351df1b57cd"},
- {file = "aiohttp-3.9.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4b777c9286b6c6a94f50ddb3a6e730deec327e9e2256cb08b5530db0f7d40fd8"},
- {file = "aiohttp-3.9.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:571760ad7736b34d05597a1fd38cbc7d47f7b65deb722cb8e86fd827404d1f6b"},
- {file = "aiohttp-3.9.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:deac0a32aec29608eb25d730f4bc5a261a65b6c48ded1ed861d2a1852577c932"},
- {file = "aiohttp-3.9.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:4ee1b4152bc3190cc40ddd6a14715e3004944263ea208229ab4c297712aa3075"},
- {file = "aiohttp-3.9.0-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:3607375053df58ed6f23903aa10cf3112b1240e8c799d243bbad0f7be0666986"},
- {file = "aiohttp-3.9.0-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:65b0a70a25456d329a5e1426702dde67be0fb7a4ead718005ba2ca582d023a94"},
- {file = "aiohttp-3.9.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:5a2eb5311a37fe105aa35f62f75a078537e1a9e4e1d78c86ec9893a3c97d7a30"},
- {file = "aiohttp-3.9.0-cp310-cp310-win32.whl", hash = "sha256:2cbc14a13fb6b42d344e4f27746a4b03a2cb0c1c3c5b932b0d6ad8881aa390e3"},
- {file = "aiohttp-3.9.0-cp310-cp310-win_amd64.whl", hash = "sha256:ac9669990e2016d644ba8ae4758688534aabde8dbbc81f9af129c3f5f01ca9cd"},
- {file = "aiohttp-3.9.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:f8e05f5163528962ce1d1806fce763ab893b1c5b7ace0a3538cd81a90622f844"},
- {file = "aiohttp-3.9.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:4afa8f71dba3a5a2e1e1282a51cba7341ae76585345c43d8f0e624882b622218"},
- {file = "aiohttp-3.9.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f929f4c9b9a00f3e6cc0587abb95ab9c05681f8b14e0fe1daecfa83ea90f8318"},
- {file = "aiohttp-3.9.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:28185e36a78d247c55e9fbea2332d16aefa14c5276a582ce7a896231c6b1c208"},
- {file = "aiohttp-3.9.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a486ddf57ab98b6d19ad36458b9f09e6022de0381674fe00228ca7b741aacb2f"},
- {file = "aiohttp-3.9.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:70e851f596c00f40a2f00a46126c95c2e04e146015af05a9da3e4867cfc55911"},
- {file = "aiohttp-3.9.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c5b7bf8fe4d39886adc34311a233a2e01bc10eb4e842220235ed1de57541a896"},
- {file = "aiohttp-3.9.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c67a51ea415192c2e53e4e048c78bab82d21955b4281d297f517707dc836bf3d"},
- {file = "aiohttp-3.9.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:694df243f394629bcae2d8ed94c589a181e8ba8604159e6e45e7b22e58291113"},
- {file = "aiohttp-3.9.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:3dd8119752dd30dd7bca7d4bc2a92a59be6a003e4e5c2cf7e248b89751b8f4b7"},
- {file = "aiohttp-3.9.0-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:eb6dfd52063186ac97b4caa25764cdbcdb4b10d97f5c5f66b0fa95052e744eb7"},
- {file = "aiohttp-3.9.0-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:d97c3e286d0ac9af6223bc132dc4bad6540b37c8d6c0a15fe1e70fb34f9ec411"},
- {file = "aiohttp-3.9.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:816f4db40555026e4cdda604a1088577c1fb957d02f3f1292e0221353403f192"},
- {file = "aiohttp-3.9.0-cp311-cp311-win32.whl", hash = "sha256:3abf0551874fecf95f93b58f25ef4fc9a250669a2257753f38f8f592db85ddea"},
- {file = "aiohttp-3.9.0-cp311-cp311-win_amd64.whl", hash = "sha256:e18d92c3e9e22553a73e33784fcb0ed484c9874e9a3e96c16a8d6a1e74a0217b"},
- {file = "aiohttp-3.9.0-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:99ae01fb13a618b9942376df77a1f50c20a281390dad3c56a6ec2942e266220d"},
- {file = "aiohttp-3.9.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:05857848da443c8c12110d99285d499b4e84d59918a21132e45c3f0804876994"},
- {file = "aiohttp-3.9.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:317719d7f824eba55857fe0729363af58e27c066c731bc62cd97bc9c3d9c7ea4"},
- {file = "aiohttp-3.9.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a1e3b3c107ccb0e537f309f719994a55621acd2c8fdf6d5ce5152aed788fb940"},
- {file = "aiohttp-3.9.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:45820ddbb276113ead8d4907a7802adb77548087ff5465d5c554f9aa3928ae7d"},
- {file = "aiohttp-3.9.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:05a183f1978802588711aed0dea31e697d760ce9055292db9dc1604daa9a8ded"},
- {file = "aiohttp-3.9.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:51a4cd44788ea0b5e6bb8fa704597af3a30be75503a7ed1098bc5b8ffdf6c982"},
- {file = "aiohttp-3.9.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:673343fbc0c1ac44d0d2640addc56e97a052504beacd7ade0dc5e76d3a4c16e8"},
- {file = "aiohttp-3.9.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:7e8a3b79b6d186a9c99761fd4a5e8dd575a48d96021f220ac5b5fa856e5dd029"},
- {file = "aiohttp-3.9.0-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:6777a390e41e78e7c45dab43a4a0196c55c3b8c30eebe017b152939372a83253"},
- {file = "aiohttp-3.9.0-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:7ae5f99a32c53731c93ac3075abd3e1e5cfbe72fc3eaac4c27c9dd64ba3b19fe"},
- {file = "aiohttp-3.9.0-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:f1e4f254e9c35d8965d377e065c4a8a55d396fe87c8e7e8429bcfdeeb229bfb3"},
- {file = "aiohttp-3.9.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:11ca808f9a6b63485059f5f6e164ef7ec826483c1212a44f268b3653c91237d8"},
- {file = "aiohttp-3.9.0-cp312-cp312-win32.whl", hash = "sha256:de3cc86f4ea8b4c34a6e43a7306c40c1275e52bfa9748d869c6b7d54aa6dad80"},
- {file = "aiohttp-3.9.0-cp312-cp312-win_amd64.whl", hash = "sha256:ca4fddf84ac7d8a7d0866664936f93318ff01ee33e32381a115b19fb5a4d1202"},
- {file = "aiohttp-3.9.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:f09960b5bb1017d16c0f9e9f7fc42160a5a49fa1e87a175fd4a2b1a1833ea0af"},
- {file = "aiohttp-3.9.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8303531e2c17b1a494ffaeba48f2da655fe932c4e9a2626c8718403c83e5dd2b"},
- {file = "aiohttp-3.9.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:4790e44f46a4aa07b64504089def5744d3b6780468c4ec3a1a36eb7f2cae9814"},
- {file = "aiohttp-3.9.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a1d7edf74a36de0e5ca50787e83a77cf352f5504eb0ffa3f07000a911ba353fb"},
- {file = "aiohttp-3.9.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:94697c7293199c2a2551e3e3e18438b4cba293e79c6bc2319f5fd652fccb7456"},
- {file = "aiohttp-3.9.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a1b66dbb8a7d5f50e9e2ea3804b01e766308331d0cac76eb30c563ac89c95985"},
- {file = "aiohttp-3.9.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9623cfd9e85b76b83ef88519d98326d4731f8d71869867e47a0b979ffec61c73"},
- {file = "aiohttp-3.9.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f32c86dc967ab8c719fd229ce71917caad13cc1e8356ee997bf02c5b368799bf"},
- {file = "aiohttp-3.9.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:f50b4663c3e0262c3a361faf440761fbef60ccdde5fe8545689a4b3a3c149fb4"},
- {file = "aiohttp-3.9.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:dcf71c55ec853826cd70eadb2b6ac62ec577416442ca1e0a97ad875a1b3a0305"},
- {file = "aiohttp-3.9.0-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:42fe4fd9f0dfcc7be4248c162d8056f1d51a04c60e53366b0098d1267c4c9da8"},
- {file = "aiohttp-3.9.0-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:76a86a9989ebf82ee61e06e2bab408aec4ea367dc6da35145c3352b60a112d11"},
- {file = "aiohttp-3.9.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:f9e09a1c83521d770d170b3801eea19b89f41ccaa61d53026ed111cb6f088887"},
- {file = "aiohttp-3.9.0-cp38-cp38-win32.whl", hash = "sha256:a00ce44c21612d185c5275c5cba4bab8d7c1590f248638b667ed8a782fa8cd6f"},
- {file = "aiohttp-3.9.0-cp38-cp38-win_amd64.whl", hash = "sha256:d5b9345ab92ebe6003ae11d8092ce822a0242146e6fa270889b9ba965457ca40"},
- {file = "aiohttp-3.9.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:98d21092bf2637c5fa724a428a69e8f5955f2182bff61f8036827cf6ce1157bf"},
- {file = "aiohttp-3.9.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:35a68cd63ca6aaef5707888f17a70c36efe62b099a4e853d33dc2e9872125be8"},
- {file = "aiohttp-3.9.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:3d7f6235c7475658acfc1769d968e07ab585c79f6ca438ddfecaa9a08006aee2"},
- {file = "aiohttp-3.9.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:db04d1de548f7a62d1dd7e7cdf7c22893ee168e22701895067a28a8ed51b3735"},
- {file = "aiohttp-3.9.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:536b01513d67d10baf6f71c72decdf492fb7433c5f2f133e9a9087379d4b6f31"},
- {file = "aiohttp-3.9.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:87c8b0a6487e8109427ccf638580865b54e2e3db4a6e0e11c02639231b41fc0f"},
- {file = "aiohttp-3.9.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7276fe0017664414fdc3618fca411630405f1aaf0cc3be69def650eb50441787"},
- {file = "aiohttp-3.9.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:23170247ef89ffa842a02bbfdc425028574d9e010611659abeb24d890bc53bb8"},
- {file = "aiohttp-3.9.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:b1a2ea8252cacc7fd51df5a56d7a2bb1986ed39be9397b51a08015727dfb69bd"},
- {file = "aiohttp-3.9.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:2d71abc15ff7047412ef26bf812dfc8d0d1020d664617f4913df2df469f26b76"},
- {file = "aiohttp-3.9.0-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:2d820162c8c2bdbe97d328cd4f417c955ca370027dce593345e437b2e9ffdc4d"},
- {file = "aiohttp-3.9.0-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:2779f5e7c70f7b421915fd47db332c81de365678180a9f3ab404088f87ba5ff9"},
- {file = "aiohttp-3.9.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:366bc870d7ac61726f32a489fbe3d1d8876e87506870be66b01aeb84389e967e"},
- {file = "aiohttp-3.9.0-cp39-cp39-win32.whl", hash = "sha256:1df43596b826022b14998f0460926ce261544fedefe0d2f653e1b20f49e96454"},
- {file = "aiohttp-3.9.0-cp39-cp39-win_amd64.whl", hash = "sha256:9c196b30f1b1aa3363a69dd69079ae9bec96c2965c4707eaa6914ba099fb7d4f"},
- {file = "aiohttp-3.9.0.tar.gz", hash = "sha256:09f23292d29135025e19e8ff4f0a68df078fe4ee013bca0105b2e803989de92d"},
+ {file = "aiohttp-3.9.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e1f80197f8b0b846a8d5cf7b7ec6084493950d0882cc5537fb7b96a69e3c8590"},
+ {file = "aiohttp-3.9.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c72444d17777865734aa1a4d167794c34b63e5883abb90356a0364a28904e6c0"},
+ {file = "aiohttp-3.9.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9b05d5cbe9dafcdc733262c3a99ccf63d2f7ce02543620d2bd8db4d4f7a22f83"},
+ {file = "aiohttp-3.9.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5c4fa235d534b3547184831c624c0b7c1e262cd1de847d95085ec94c16fddcd5"},
+ {file = "aiohttp-3.9.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:289ba9ae8e88d0ba16062ecf02dd730b34186ea3b1e7489046fc338bdc3361c4"},
+ {file = "aiohttp-3.9.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:bff7e2811814fa2271be95ab6e84c9436d027a0e59665de60edf44e529a42c1f"},
+ {file = "aiohttp-3.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:81b77f868814346662c96ab36b875d7814ebf82340d3284a31681085c051320f"},
+ {file = "aiohttp-3.9.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3b9c7426923bb7bd66d409da46c41e3fb40f5caf679da624439b9eba92043fa6"},
+ {file = "aiohttp-3.9.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:8d44e7bf06b0c0a70a20f9100af9fcfd7f6d9d3913e37754c12d424179b4e48f"},
+ {file = "aiohttp-3.9.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:22698f01ff5653fe66d16ffb7658f582a0ac084d7da1323e39fd9eab326a1f26"},
+ {file = "aiohttp-3.9.1-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:ca7ca5abfbfe8d39e653870fbe8d7710be7a857f8a8386fc9de1aae2e02ce7e4"},
+ {file = "aiohttp-3.9.1-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:8d7f98fde213f74561be1d6d3fa353656197f75d4edfbb3d94c9eb9b0fc47f5d"},
+ {file = "aiohttp-3.9.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:5216b6082c624b55cfe79af5d538e499cd5f5b976820eac31951fb4325974501"},
+ {file = "aiohttp-3.9.1-cp310-cp310-win32.whl", hash = "sha256:0e7ba7ff228c0d9a2cd66194e90f2bca6e0abca810b786901a569c0de082f489"},
+ {file = "aiohttp-3.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:c7e939f1ae428a86e4abbb9a7c4732bf4706048818dfd979e5e2839ce0159f23"},
+ {file = "aiohttp-3.9.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:df9cf74b9bc03d586fc53ba470828d7b77ce51b0582d1d0b5b2fb673c0baa32d"},
+ {file = "aiohttp-3.9.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ecca113f19d5e74048c001934045a2b9368d77b0b17691d905af18bd1c21275e"},
+ {file = "aiohttp-3.9.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:8cef8710fb849d97c533f259103f09bac167a008d7131d7b2b0e3a33269185c0"},
+ {file = "aiohttp-3.9.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bea94403a21eb94c93386d559bce297381609153e418a3ffc7d6bf772f59cc35"},
+ {file = "aiohttp-3.9.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:91c742ca59045dce7ba76cab6e223e41d2c70d79e82c284a96411f8645e2afff"},
+ {file = "aiohttp-3.9.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6c93b7c2e52061f0925c3382d5cb8980e40f91c989563d3d32ca280069fd6a87"},
+ {file = "aiohttp-3.9.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ee2527134f95e106cc1653e9ac78846f3a2ec1004cf20ef4e02038035a74544d"},
+ {file = "aiohttp-3.9.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:11ff168d752cb41e8492817e10fb4f85828f6a0142b9726a30c27c35a1835f01"},
+ {file = "aiohttp-3.9.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:b8c3a67eb87394386847d188996920f33b01b32155f0a94f36ca0e0c635bf3e3"},
+ {file = "aiohttp-3.9.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:c7b5d5d64e2a14e35a9240b33b89389e0035e6de8dbb7ffa50d10d8b65c57449"},
+ {file = "aiohttp-3.9.1-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:69985d50a2b6f709412d944ffb2e97d0be154ea90600b7a921f95a87d6f108a2"},
+ {file = "aiohttp-3.9.1-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:c9110c06eaaac7e1f5562caf481f18ccf8f6fdf4c3323feab28a93d34cc646bd"},
+ {file = "aiohttp-3.9.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d737e69d193dac7296365a6dcb73bbbf53bb760ab25a3727716bbd42022e8d7a"},
+ {file = "aiohttp-3.9.1-cp311-cp311-win32.whl", hash = "sha256:4ee8caa925aebc1e64e98432d78ea8de67b2272252b0a931d2ac3bd876ad5544"},
+ {file = "aiohttp-3.9.1-cp311-cp311-win_amd64.whl", hash = "sha256:a34086c5cc285be878622e0a6ab897a986a6e8bf5b67ecb377015f06ed316587"},
+ {file = "aiohttp-3.9.1-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:f800164276eec54e0af5c99feb9494c295118fc10a11b997bbb1348ba1a52065"},
+ {file = "aiohttp-3.9.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:500f1c59906cd142d452074f3811614be04819a38ae2b3239a48b82649c08821"},
+ {file = "aiohttp-3.9.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:0b0a6a36ed7e164c6df1e18ee47afbd1990ce47cb428739d6c99aaabfaf1b3af"},
+ {file = "aiohttp-3.9.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:69da0f3ed3496808e8cbc5123a866c41c12c15baaaead96d256477edf168eb57"},
+ {file = "aiohttp-3.9.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:176df045597e674fa950bf5ae536be85699e04cea68fa3a616cf75e413737eb5"},
+ {file = "aiohttp-3.9.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b796b44111f0cab6bbf66214186e44734b5baab949cb5fb56154142a92989aeb"},
+ {file = "aiohttp-3.9.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f27fdaadce22f2ef950fc10dcdf8048407c3b42b73779e48a4e76b3c35bca26c"},
+ {file = "aiohttp-3.9.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bcb6532b9814ea7c5a6a3299747c49de30e84472fa72821b07f5a9818bce0f66"},
+ {file = "aiohttp-3.9.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:54631fb69a6e44b2ba522f7c22a6fb2667a02fd97d636048478db2fd8c4e98fe"},
+ {file = "aiohttp-3.9.1-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:4b4c452d0190c5a820d3f5c0f3cd8a28ace48c54053e24da9d6041bf81113183"},
+ {file = "aiohttp-3.9.1-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:cae4c0c2ca800c793cae07ef3d40794625471040a87e1ba392039639ad61ab5b"},
+ {file = "aiohttp-3.9.1-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:565760d6812b8d78d416c3c7cfdf5362fbe0d0d25b82fed75d0d29e18d7fc30f"},
+ {file = "aiohttp-3.9.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:54311eb54f3a0c45efb9ed0d0a8f43d1bc6060d773f6973efd90037a51cd0a3f"},
+ {file = "aiohttp-3.9.1-cp312-cp312-win32.whl", hash = "sha256:85c3e3c9cb1d480e0b9a64c658cd66b3cfb8e721636ab8b0e746e2d79a7a9eed"},
+ {file = "aiohttp-3.9.1-cp312-cp312-win_amd64.whl", hash = "sha256:11cb254e397a82efb1805d12561e80124928e04e9c4483587ce7390b3866d213"},
+ {file = "aiohttp-3.9.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:8a22a34bc594d9d24621091d1b91511001a7eea91d6652ea495ce06e27381f70"},
+ {file = "aiohttp-3.9.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:598db66eaf2e04aa0c8900a63b0101fdc5e6b8a7ddd805c56d86efb54eb66672"},
+ {file = "aiohttp-3.9.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:2c9376e2b09895c8ca8b95362283365eb5c03bdc8428ade80a864160605715f1"},
+ {file = "aiohttp-3.9.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41473de252e1797c2d2293804e389a6d6986ef37cbb4a25208de537ae32141dd"},
+ {file = "aiohttp-3.9.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9c5857612c9813796960c00767645cb5da815af16dafb32d70c72a8390bbf690"},
+ {file = "aiohttp-3.9.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ffcd828e37dc219a72c9012ec44ad2e7e3066bec6ff3aaa19e7d435dbf4032ca"},
+ {file = "aiohttp-3.9.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:219a16763dc0294842188ac8a12262b5671817042b35d45e44fd0a697d8c8361"},
+ {file = "aiohttp-3.9.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f694dc8a6a3112059258a725a4ebe9acac5fe62f11c77ac4dcf896edfa78ca28"},
+ {file = "aiohttp-3.9.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:bcc0ea8d5b74a41b621ad4a13d96c36079c81628ccc0b30cfb1603e3dfa3a014"},
+ {file = "aiohttp-3.9.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:90ec72d231169b4b8d6085be13023ece8fa9b1bb495e4398d847e25218e0f431"},
+ {file = "aiohttp-3.9.1-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:cf2a0ac0615842b849f40c4d7f304986a242f1e68286dbf3bd7a835e4f83acfd"},
+ {file = "aiohttp-3.9.1-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:0e49b08eafa4f5707ecfb321ab9592717a319e37938e301d462f79b4e860c32a"},
+ {file = "aiohttp-3.9.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2c59e0076ea31c08553e868cec02d22191c086f00b44610f8ab7363a11a5d9d8"},
+ {file = "aiohttp-3.9.1-cp38-cp38-win32.whl", hash = "sha256:4831df72b053b1eed31eb00a2e1aff6896fb4485301d4ccb208cac264b648db4"},
+ {file = "aiohttp-3.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:3135713c5562731ee18f58d3ad1bf41e1d8883eb68b363f2ffde5b2ea4b84cc7"},
+ {file = "aiohttp-3.9.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:cfeadf42840c1e870dc2042a232a8748e75a36b52d78968cda6736de55582766"},
+ {file = "aiohttp-3.9.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:70907533db712f7aa791effb38efa96f044ce3d4e850e2d7691abd759f4f0ae0"},
+ {file = "aiohttp-3.9.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:cdefe289681507187e375a5064c7599f52c40343a8701761c802c1853a504558"},
+ {file = "aiohttp-3.9.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d7481f581251bb5558ba9f635db70908819caa221fc79ee52a7f58392778c636"},
+ {file = "aiohttp-3.9.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:49f0c1b3c2842556e5de35f122fc0f0b721334ceb6e78c3719693364d4af8499"},
+ {file = "aiohttp-3.9.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0d406b01a9f5a7e232d1b0d161b40c05275ffbcbd772dc18c1d5a570961a1ca4"},
+ {file = "aiohttp-3.9.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d8e4450e7fe24d86e86b23cc209e0023177b6d59502e33807b732d2deb6975f"},
+ {file = "aiohttp-3.9.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3c0266cd6f005e99f3f51e583012de2778e65af6b73860038b968a0a8888487a"},
+ {file = "aiohttp-3.9.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ab221850108a4a063c5b8a70f00dd7a1975e5a1713f87f4ab26a46e5feac5a0e"},
+ {file = "aiohttp-3.9.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:c88a15f272a0ad3d7773cf3a37cc7b7d077cbfc8e331675cf1346e849d97a4e5"},
+ {file = "aiohttp-3.9.1-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:237533179d9747080bcaad4d02083ce295c0d2eab3e9e8ce103411a4312991a0"},
+ {file = "aiohttp-3.9.1-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:02ab6006ec3c3463b528374c4cdce86434e7b89ad355e7bf29e2f16b46c7dd6f"},
+ {file = "aiohttp-3.9.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:04fa38875e53eb7e354ece1607b1d2fdee2d175ea4e4d745f6ec9f751fe20c7c"},
+ {file = "aiohttp-3.9.1-cp39-cp39-win32.whl", hash = "sha256:82eefaf1a996060602f3cc1112d93ba8b201dbf5d8fd9611227de2003dddb3b7"},
+ {file = "aiohttp-3.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:9b05d33ff8e6b269e30a7957bd3244ffbce2a7a35a81b81c382629b80af1a8bf"},
+ {file = "aiohttp-3.9.1.tar.gz", hash = "sha256:8fc49a87ac269d4529da45871e2ffb6874e87779c3d0e2ccd813c0899221239d"},
]

[package.dependencies]
@@ -684,13 +684,13 @@ files = [

[[package]]
name = "dataclasses-json"
-version = "0.6.2"
+version = "0.6.3"
description = "Easily serialize dataclasses to and from JSON."
optional = false
python-versions = ">=3.7,<4.0"
files = [
- {file = "dataclasses_json-0.6.2-py3-none-any.whl", hash = "sha256:71816ced3d0f55a2c5bc1a813ace1b8d4234e79a08744269a7cf84d6f7c06e99"},
- {file = "dataclasses_json-0.6.2.tar.gz", hash = "sha256:1b934c1bd63e775880946b8361a902d7de86e894bab8098eab27c010f95724d1"},
+ {file = "dataclasses_json-0.6.3-py3-none-any.whl", hash = "sha256:4aeb343357997396f6bca1acae64e486c3a723d8f5c76301888abeccf0c45176"},
+ {file = "dataclasses_json-0.6.3.tar.gz", hash = "sha256:35cb40aae824736fdf959801356641836365219cfe14caeb115c39136f775d2a"},
]

[package.dependencies]
@@ -699,26 +699,25 @@ typing-inspect = ">=0.4.0,<1"

[[package]]
name = "datasets"
-version = "2.14.7"
+version = "2.14.4"
description = "HuggingFace community-driven open-source library of datasets"
optional = false
python-versions = ">=3.8.0"
files = [
- {file = "datasets-2.14.7-py3-none-any.whl", hash = "sha256:1a64041a7da4f4130f736fc371c1f528b8ddd208cebe156400f65719bdbba79d"},
- {file = "datasets-2.14.7.tar.gz", hash = "sha256:394cf9b4ec0694b25945977b16ad5d18d5c15fb0e94141713eb8ead7452caf9e"},
+ {file = "datasets-2.14.4-py3-none-any.whl", hash = "sha256:29336bd316a7d827ccd4da2236596279b20ca2ac78f64c04c9483da7cbc2459b"},
+ {file = "datasets-2.14.4.tar.gz", hash = "sha256:ef29c2b5841de488cd343cfc26ab979bff77efa4d2285af51f1ad7db5c46a83b"},
]

[package.dependencies]
aiohttp = "*"
dill = ">=0.3.0,<0.3.8"
-fsspec = {version = ">=2023.1.0,<=2023.10.0", extras = ["http"]}
+fsspec = {version = ">=2021.11.1", extras = ["http"]}
huggingface-hub = ">=0.14.0,<1.0.0"
multiprocess = "*"
numpy = ">=1.17"
packaging = "*"
pandas = "*"
pyarrow = ">=8.0.0"
-pyarrow-hotfix = "*"
pyyaml = ">=5.1"
requests = ">=2.19.0"
tqdm = ">=4.62.1"
@@ -861,38 +860,53 @@ files = [

[[package]]
name = "fonttools"
-version = "4.45.1"
+version = "4.46.0"
description = "Tools to manipulate font files"
optional = false
python-versions = ">=3.8"
files = [
- {file = "fonttools-4.45.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:45fa321c458ea29224067700954ec44493ae869b47e7c5485a350a149a19fb53"},
- {file = "fonttools-4.45.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0dc7617d96b1e668eea9250e1c1fe62d0c78c3f69573ce7e3332cc40e6d84356"},
- {file = "fonttools-4.45.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:03ed3bda541e86725f6b4e1b94213f13ed1ae51a5a1f167028534cedea38c010"},
- {file = "fonttools-4.45.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c4f4a5870e3b56788fb196da8cf30d0dfd51a76dc3b907861d018165f76ae4c2"},
- {file = "fonttools-4.45.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:a3c11d9687479f01eddef729aa737abcdea0a44fdaffb62a930a18892f186c9b"},
- {file = "fonttools-4.45.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:316cec50581e844c3ab69d7c82455b54c7cf18236b2f09e722faf665fbfcac58"},
- {file = "fonttools-4.45.1-cp310-cp310-win32.whl", hash = "sha256:e2277cba9f0b525e30de2a9ad3cb4219aa4bc697230c1645666b0deee9f914f0"},
- {file = "fonttools-4.45.1-cp310-cp310-win_amd64.whl", hash = "sha256:1b9e9ad2bcded9a1431afaa57c8d3c39143ac1f050862d66bddd863c515464a2"},
- {file = "fonttools-4.45.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:ff6a698bdd435d24c379f6e8a54908cd9bb7dda23719084d56bf8c87709bf3bd"},
- {file = "fonttools-4.45.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:2c980d60cd6ec1376206fe55013d166e5627ad0b149b5c81e74eaa913ab6134f"},
- {file = "fonttools-4.45.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a12dee6523c02ca78aeedd0a5e12bfa9b7b29896350edd5241542897b072ae23"},
- {file = "fonttools-4.45.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:37cd1ced6efb3dd6fe82e9f9bf92fd74ac58a5aefc284045f59ecd517a5fb9ab"},
- {file = "fonttools-4.45.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:e3d24248221bd7151dfff0d88b1b5da02dccd7134bd576ce8888199827bbaa19"},
- {file = "fonttools-4.45.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:ba6c23591427844dfb0a13658f1718489de75de6a46b64234584c0d17573162d"},
- {file = "fonttools-4.45.1-cp311-cp311-win32.whl", hash = "sha256:cebcddbe9351b67166292b4f71ffdbfcce01ba4b07d4267824eb46b277aeb19a"},
- {file = "fonttools-4.45.1-cp311-cp311-win_amd64.whl", hash = "sha256:f22eb69996a0bd49f76bdefb30be54ce8dbb89a0d1246874d610f05c2aa2e69e"},
- {file = "fonttools-4.45.1-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:794de93e83297db7b4943f2431e206d8b1ea69cb3ae14638a49cc50332bf0db8"},
- {file = "fonttools-4.45.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:4ba17822a6681d06849078daaf6e03eccc9f467efe7c4c60280e28a78e8e5df9"},
- {file = "fonttools-4.45.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e50f794d09df0675da8d9dbd7c66bfcab2f74a708343aabcad41936d26556891"},
- {file = "fonttools-4.45.1-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8b07b857d4f9de3199a8c3d1b1bf2078c0f37447891ca1a8d9234106b9a27aff"},
- {file = "fonttools-4.45.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:777ba42b94a27bb7fb2b4082522fccfd345667c32a56011e1c3e105979af5b79"},
- {file = "fonttools-4.45.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:21e96b99878348c74aa58059b8578d7586f9519cbcdadacf56486737038aa043"},
- {file = "fonttools-4.45.1-cp312-cp312-win32.whl", hash = "sha256:5cbf02cda8465b69769d07385f5d11e7bba19954e7787792f46fe679ec755ebb"},
- {file = "fonttools-4.45.1-cp312-cp312-win_amd64.whl", hash = "sha256:800e354e0c3afaeb8d9552769773d02f228e98c37b8cb03041157c3d0687cffc"},
- {file = "fonttools-4.45.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:6eb2c54f7a07c92108daabcf02caf31df97825738db02a28270633946bcda4d0"},
- {file = "fonttools-4.45.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:43a3d267334109ff849c37cf3629476b5feb392ef1d2e464a167b83de8cd599c"},
- {file = "fonttools-4.45.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8e1aefc2bf3c43e0f33f995f828a7bbeff4adc9393a7760b11456dbcf14388f6"},
+ {file = "fonttools-4.46.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d4e69e2c7f93b695d2e6f18f709d501d945f65c1d237dafaabdd23cd935a5276"},
+ {file = "fonttools-4.46.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:25852f0c63df0af022f698464a4a80f7d1d5bd974bcd22f995f6b4ad198e32dd"},
+ {file = "fonttools-4.46.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:adab73618d0a328b203a0e242b3eba60a2b5662d9cb2bd16ed9c52af8a7d86af"},
+ {file = "fonttools-4.46.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2cf923a4a556ab4cc4c52f69a4a2db624cf5a2cf360394368b40c5152fe3321e"},
+ {file = "fonttools-4.46.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:87c214197712cc14fd2a4621efce2a9c501a77041232b789568149a8a3161517"},
+ {file = "fonttools-4.46.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:156ae342a1ed1fe38e180de471e98fbf5b2b6ae280fa3323138569c4ca215844"},
+ {file = "fonttools-4.46.0-cp310-cp310-win32.whl", hash = "sha256:c506e3d3a9e898caee4dc094f34b49c5566870d5a2d1ca2125f0a9f35ecc2205"},
+ {file = "fonttools-4.46.0-cp310-cp310-win_amd64.whl", hash = "sha256:f8bc3973ed58893c4107993e0a7ae34901cb572b5e798249cbef35d30801ffd4"},
+ {file = "fonttools-4.46.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:982f69855ac258260f51048d9e0c53c5f19881138cc7ca06deb38dc4b97404b6"},
+ {file = "fonttools-4.46.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:2c23c59d321d62588620f2255cf951270bf637d88070f38ed8b5e5558775b86c"},
+ {file = "fonttools-4.46.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a0e94244ec24a940ecfbe5b31c975c8a575d5ed2d80f9a280ce3b21fa5dc9c34"},
+ {file = "fonttools-4.46.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1a9f9cdd7ef63d1b8ac90db335762451452426b3207abd79f60da510cea62da5"},
+ {file = "fonttools-4.46.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:ca9eceebe70035b057ce549e2054cad73e95cac3fe91a9d827253d1c14618204"},
+ {file = "fonttools-4.46.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:8be6adfa4e15977075278dd0a0bae74dec59be7b969b5ceed93fb86af52aa5be"},
+ {file = "fonttools-4.46.0-cp311-cp311-win32.whl", hash = "sha256:7b5636f5706d49f13b6d610fe54ee662336cdf56b5a6f6683c0b803e23d826d2"},
+ {file = "fonttools-4.46.0-cp311-cp311-win_amd64.whl", hash = "sha256:49ea0983e55fd7586a809787cd4644a7ae471e53ab8ddc016f9093b400e32646"},
+ {file = "fonttools-4.46.0-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:7b460720ce81773da1a3e7cc964c48e1e11942b280619582a897fa0117b56a62"},
+ {file = "fonttools-4.46.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:8bee9f4fc8c99824a424ae45c789ee8c67cb84f8e747afa7f83b7d3cef439c3b"},
+ {file = "fonttools-4.46.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d3d7b96aba96e05e8c911ce2dfc5acc6a178b8f44f6aa69371ab91aa587563da"},
+ {file = "fonttools-4.46.0-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9e6aeb5c340416d11a3209d75c48d13e72deea9e1517837dd1522c1fd1f17c11"},
+ {file = "fonttools-4.46.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:c779f8701deedf41908f287aeb775b8a6f59875ad1002b98ac6034ae4ddc1b7b"},
+ {file = "fonttools-4.46.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:ce199227ce7921eaafdd4f96536f16b232d6b580ce74ce337de544bf06cb2752"},
+ {file = "fonttools-4.46.0-cp312-cp312-win32.whl", hash = "sha256:1c9937c4dd1061afd22643389445fabda858af5e805860ec3082a4bc07c7a720"},
+ {file = "fonttools-4.46.0-cp312-cp312-win_amd64.whl", hash = "sha256:a9fa52ef8fd14d7eb3d813e1451e7ace3e1eebfa9b7237d3f81fee8f3de6a114"},
+ {file = "fonttools-4.46.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:c94564b1f3b5dd87e73577610d85115b1936edcc596deaf84a31bbe70e17456b"},
+ {file = "fonttools-4.46.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a4a50a1dfad7f7ba5ca3f99cc73bf5cdac67ceade8e4b355a877521f20ad1b63"},
+ {file = "fonttools-4.46.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:89c2c520f9492844ecd6316d20c6c7a157b5c0cb73a1411b3db28ee304f30122"},
+ {file = "fonttools-4.46.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e5b7905fd68eacb7cc56a13139da5c312c45baae6950dd00b02563c54508a041"},
+ {file = "fonttools-4.46.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:8485cc468288e213f31afdaf1fdda3c79010f542559fbba936a54f4644df2570"},
+ {file = "fonttools-4.46.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:87c3299da7da55394fb324349db0ede38114a46aafd0e7dfcabfecd28cdd94c3"},
+ {file = "fonttools-4.46.0-cp38-cp38-win32.whl", hash = "sha256:f5f1423a504ccc329efb5aa79738de83d38c072be5308788dde6bd419969d7f5"},
+ {file = "fonttools-4.46.0-cp38-cp38-win_amd64.whl", hash = "sha256:6d4a4ebcc76e30898ff3296ea786491c70e183f738319ae2629e0d44f17ece42"},
+ {file = "fonttools-4.46.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:c9a0e422ab79e5cb2b47913be6a4b5fd20c4c7ac34a24f3691a4e099e965e0b8"},
+ {file = "fonttools-4.46.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:13ac0cba2fc63fa4b232f2a7971f35f35c6eaf10bd1271fa96d4ce6253a8acfd"},
+ {file = "fonttools-4.46.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:795150d5edc595e1a2cfb3d65e8f4f3d027704fc2579f8990d381bef6b188eb6"},
+ {file = "fonttools-4.46.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d00fc63131dcac6b25f50a5a129758438317e54e3ce5587163f7058de4b0e933"},
+ {file = "fonttools-4.46.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:3033b55f401a622de2630b3982234d97219d89b058607b87927eccb0f922313c"},
+ {file = "fonttools-4.46.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:e26e7fb908ae4f622813e7cb32cd2db6c24e3122bb3b98f25e832a2fe0e7e228"},
+ {file = "fonttools-4.46.0-cp39-cp39-win32.whl", hash = "sha256:2d0eba685938c603f2f648dfc0aadbf8c6a4fe1c7ca608c2970a6ef39e00f254"},
+ {file = "fonttools-4.46.0-cp39-cp39-win_amd64.whl", hash = "sha256:5200b01f463d97cc2b7ff8a1e3584151f4413e98cb8419da5f17d1dbb84cc214"},
+ {file = "fonttools-4.46.0-py3-none-any.whl", hash = "sha256:5b627ed142398ea9202bd752c04311592558964d1a765fb2f78dc441a05633f4"},
+ {file = "fonttools-4.46.0.tar.gz", hash = "sha256:2ae45716c27a41807d58a9f3f59983bdc8c0a46cb259e4450ab7e196253a9853"},
]

[package.extras]
@@ -981,13 +995,13 @@ files = [

[[package]]
name = "fsspec"
-version = "2023.10.0"
+version = "2023.12.0"
description = "File-system specification"
optional = false
python-versions = ">=3.8"
files = [
- {file = "fsspec-2023.10.0-py3-none-any.whl", hash = "sha256:346a8f024efeb749d2a5fca7ba8854474b1ff9af7c3faaf636a4548781136529"},
- {file = "fsspec-2023.10.0.tar.gz", hash = "sha256:330c66757591df346ad3091a53bd907e15348c2ba17d63fd54f5c39c4457d2a5"},
+ {file = "fsspec-2023.12.0-py3-none-any.whl", hash = "sha256:f807252ee2018f2223760315beb87a2166c2b9532786eeca9e6548dfcf2cfac9"},
+ {file = "fsspec-2023.12.0.tar.gz", hash = "sha256:8e0bb2db2a94082968483b7ba2eaebf3949835e2dfdf09243dda387539464b31"},
]

[package.dependencies]
@@ -1031,13 +1045,13 @@ files = [

[[package]]
name = "google-auth"
-version = "2.23.4"
+version = "2.24.0"
description = "Google Authentication Library"
optional = false
python-versions = ">=3.7"
files = [
- {file = "google-auth-2.23.4.tar.gz", hash = "sha256:79905d6b1652187def79d491d6e23d0cbb3a21d3c7ba0dbaa9c8a01906b13ff3"},
- {file = "google_auth-2.23.4-py2.py3-none-any.whl", hash = "sha256:d4bbc92fe4b8bfd2f3e8d88e5ba7085935da208ee38a134fc280e7ce682a05f2"},
+ {file = "google-auth-2.24.0.tar.gz", hash = "sha256:2ec7b2a506989d7dbfdbe81cb8d0ead8876caaed14f86d29d34483cbe99c57af"},
+ {file = "google_auth-2.24.0-py2.py3-none-any.whl", hash = "sha256:9b82d5c8d3479a5391ea0a46d81cca698d328459da31d4a459d4e901a5d927e0"},
]

[package.dependencies]
@@ -1054,13 +1068,13 @@ requests = ["requests (>=2.20.0,<3.0.0.dev0)"]

[[package]]
name = "google-auth-oauthlib"
-version = "1.1.0"
+version = "1.0.0"
description = "Google Authentication Library"
optional = false
python-versions = ">=3.6"
files = [
- {file = "google-auth-oauthlib-1.1.0.tar.gz", hash = "sha256:83ea8c3b0881e453790baff4448e8a6112ac8778d1de9da0b68010b843937afb"},
- {file = "google_auth_oauthlib-1.1.0-py2.py3-none-any.whl", hash = "sha256:089c6e587d36f4803ac7e0720c045c6a8b1fd1790088b8424975b90d0ee61c12"},
+ {file = "google-auth-oauthlib-1.0.0.tar.gz", hash = "sha256:e375064964820b47221a7e1b7ee1fd77051b6323c3f9e3e19785f78ab67ecfc5"},
+ {file = "google_auth_oauthlib-1.0.0-py2.py3-none-any.whl", hash = "sha256:95880ca704928c300f48194d1770cf5b1462835b6e49db61445a520f793fd5fb"},
]

[package.dependencies]
@@ -1292,19 +1306,19 @@ trio = ["trio (>=0.22.0,<0.23.0)"]

[[package]]
name = "httpx"
-version = "0.25.1"
+version = "0.25.2"
description = "The next generation HTTP client."
optional = false
python-versions = ">=3.8"
files = [
- {file = "httpx-0.25.1-py3-none-any.whl", hash = "sha256:fec7d6cc5c27c578a391f7e87b9aa7d3d8fbcd034f6399f9f79b45bcc12a866a"},
- {file = "httpx-0.25.1.tar.gz", hash = "sha256:ffd96d5cf901e63863d9f1b4b6807861dbea4d301613415d9e6e57ead15fc5d0"},
+ {file = "httpx-0.25.2-py3-none-any.whl", hash = "sha256:a05d3d052d9b2dfce0e3896636467f8a5342fb2b902c819428e1ac65413ca118"},
+ {file = "httpx-0.25.2.tar.gz", hash = "sha256:8b8fcaa0c8ea7b05edd69a094e63a2094c4efcb48129fb757361bc423c0ad9e8"},
]

[package.dependencies]
anyio = "*"
certifi = "*"
-httpcore = "*"
+httpcore = "==1.*"
idna = "*"
sniffio = "*"

@@ -1349,13 +1363,13 @@ typing = ["pydantic (<2.0)", "types-PyYAML", "types-requests", "types-simplejson

[[package]]
name = "idna"
-version = "3.4"
+version = "3.6"
description = "Internationalized Domain Names in Applications (IDNA)"
optional = false
python-versions = ">=3.5"
files = [
- {file = "idna-3.4-py3-none-any.whl", hash = "sha256:90b77e79eaa3eba6de819a0c442c0b4ceefc341a7a2ab77d7562bf49f425c5c2"},
- {file = "idna-3.4.tar.gz", hash = "sha256:814f528e8dead7d329833b91c5faa87d60bf71824cd12a7530b5526063d02cb4"},
+ {file = "idna-3.6-py3-none-any.whl", hash = "sha256:c05567e9c24a6b9faaa835c4821bad0590fbb9d5779e7caa6e1cc4978e7eb24f"},
+ {file = "idna-3.6.tar.gz", hash = "sha256:9ecdbbd083b06798ae1e86adcbfe8ab1479cf864e4ee30fe4e46a003d12491ca"},
]

[[package]]
@@ -1424,13 +1438,13 @@ files = [

[[package]]
name = "keras"
-version = "2.15.0"
+version = "2.14.0"
description = "Deep learning for humans."
optional = false
-python-versions = ">=3.8"
+python-versions = ">=3.9"
files = [
- {file = "keras-2.15.0-py3-none-any.whl", hash = "sha256:2dcc6d2e30cf9c951064b63c1f4c404b966c59caf09e01f3549138ec8ee0dd1f"},
- {file = "keras-2.15.0.tar.gz", hash = "sha256:81871d298c064dc4ac6b58440fdae67bfcf47c8d7ad28580fab401834c06a575"},
+ {file = "keras-2.14.0-py3-none-any.whl", hash = "sha256:d7429d1d2131cc7eb1f2ea2ec330227c7d9d38dab3dfdf2e78defee4ecc43fcd"},
+ {file = "keras-2.14.0.tar.gz", hash = "sha256:22788bdbc86d9988794fe9703bb5205141da797c4faeeb59497c58c3d94d34ed"},
]

[[package]]
@@ -1548,13 +1562,13 @@ files = [

[[package]]
name = "langchain"
-version = "0.0.340"
+version = "0.0.345"
description = "Building applications with LLMs through composability"
optional = false
python-versions = ">=3.8.1,<4.0"
files = [
- {file = "langchain-0.0.340-py3-none-any.whl", hash = "sha256:f80f40b52ef82424e38e894db8b8048b6505da100679e72613316f8d8b0243fb"},
- {file = "langchain-0.0.340.tar.gz", hash = "sha256:1a6bd2511bbb81e42d2a3d7291ee03de180accab851181ee9fdbb7fbaef6c57c"},
+ {file = "langchain-0.0.345-py3-none-any.whl", hash = "sha256:461a126ec182834c714589ceec47354401d80b903262efab8d669fe941a0a4df"},
+ {file = "langchain-0.0.345.tar.gz", hash = "sha256:2d366513a46e4620d8fe6fc956b9185c2a252e60e2fc0476113482455aaaa9f0"},
]

[package.dependencies]
@@ -1563,6 +1577,7 @@ anyio = "<4.0"
async-timeout = {version = ">=4.0.0,<5.0.0", markers = "python_version < \"3.11\""}
dataclasses-json = ">=0.5.7,<0.7"
jsonpatch = ">=1.33,<2.0"
+langchain-core = ">=0.0.9,<0.1"
langsmith = ">=0.0.63,<0.1.0"
numpy = ">=1,<2"
pydantic = ">=1,<3"
@@ -1572,20 +1587,37 @@ SQLAlchemy = ">=1.4,<3"
tenacity = ">=8.1.0,<9.0.0"

[package.extras]
-all = ["O365 (>=2.0.26,<3.0.0)", "aleph-alpha-client (>=2.15.0,<3.0.0)", "amadeus (>=8.1.0)", "arxiv (>=1.4,<2.0)", "atlassian-python-api (>=3.36.0,<4.0.0)", "awadb (>=0.3.9,<0.4.0)", "azure-ai-formrecognizer (>=3.2.1,<4.0.0)", "azure-ai-textanalytics (>=5.3.0,<6.0.0)", "azure-ai-vision (>=0.11.1b1,<0.12.0)", "azure-cognitiveservices-speech (>=1.28.0,<2.0.0)", "azure-cosmos (>=4.4.0b1,<5.0.0)", "azure-identity (>=1.12.0,<2.0.0)", "beautifulsoup4 (>=4,<5)", "clarifai (>=9.1.0)", "clickhouse-connect (>=0.5.14,<0.6.0)", "cohere (>=4,<5)", "deeplake (>=3.8.3,<4.0.0)", "docarray[hnswlib] (>=0.32.0,<0.33.0)", "duckduckgo-search (>=3.8.3,<4.0.0)", "elasticsearch (>=8,<9)", "esprima (>=4.0.1,<5.0.0)", "faiss-cpu (>=1,<2)", "google-api-python-client (==2.70.0)", "google-auth (>=2.18.1,<3.0.0)", "google-search-results (>=2,<3)", "gptcache (>=0.1.7)", "html2text (>=2020.1.16,<2021.0.0)", "huggingface_hub (>=0,<1)", "jinja2 (>=3,<4)", "jq (>=1.4.1,<2.0.0)", "lancedb (>=0.1,<0.2)", "langkit (>=0.0.6,<0.1.0)", "lark (>=1.1.5,<2.0.0)", "librosa (>=0.10.0.post2,<0.11.0)", "lxml (>=4.9.2,<5.0.0)", "manifest-ml (>=0.0.1,<0.0.2)", "marqo (>=1.2.4,<2.0.0)", "momento (>=1.13.0,<2.0.0)", "nebula3-python (>=3.4.0,<4.0.0)", "neo4j (>=5.8.1,<6.0.0)", "networkx (>=2.6.3,<4)", "nlpcloud (>=1,<2)", "nltk (>=3,<4)", "nomic (>=1.0.43,<2.0.0)", "openai (<2)", "openlm (>=0.0.5,<0.0.6)", "opensearch-py (>=2.0.0,<3.0.0)", "pdfminer-six (>=20221105,<20221106)", "pexpect (>=4.8.0,<5.0.0)", "pgvector (>=0.1.6,<0.2.0)", "pinecone-client (>=2,<3)", "pinecone-text (>=0.4.2,<0.5.0)", "psycopg2-binary (>=2.9.5,<3.0.0)", "pymongo (>=4.3.3,<5.0.0)", "pyowm (>=3.3.0,<4.0.0)", "pypdf (>=3.4.0,<4.0.0)", "pytesseract (>=0.3.10,<0.4.0)", "python-arango (>=7.5.9,<8.0.0)", "pyvespa (>=0.33.0,<0.34.0)", "qdrant-client (>=1.3.1,<2.0.0)", "rdflib (>=6.3.2,<7.0.0)", "redis (>=4,<5)", "requests-toolbelt (>=1.0.0,<2.0.0)", "sentence-transformers (>=2,<3)", "singlestoredb (>=0.7.1,<0.8.0)", "tensorflow-text (>=2.11.0,<3.0.0)", "tigrisdb (>=1.0.0b6,<2.0.0)", "tiktoken (>=0.3.2,<0.6.0)", "torch (>=1,<3)", "transformers (>=4,<5)", "weaviate-client (>=3,<4)", "wikipedia (>=1,<2)", "wolframalpha (==5.0.0)"]
+all = ["O365 (>=2.0.26,<3.0.0)", "aleph-alpha-client (>=2.15.0,<3.0.0)", "amadeus (>=8.1.0)", "arxiv (>=1.4,<2.0)", "atlassian-python-api (>=3.36.0,<4.0.0)", "awadb (>=0.3.9,<0.4.0)", "azure-ai-formrecognizer (>=3.2.1,<4.0.0)", "azure-ai-textanalytics (>=5.3.0,<6.0.0)", "azure-ai-vision (>=0.11.1b1,<0.12.0)", "azure-cognitiveservices-speech (>=1.28.0,<2.0.0)", "azure-cosmos (>=4.4.0b1,<5.0.0)", "azure-identity (>=1.12.0,<2.0.0)", "beautifulsoup4 (>=4,<5)", "clarifai (>=9.1.0)", "clickhouse-connect (>=0.5.14,<0.6.0)", "cohere (>=4,<5)", "deeplake (>=3.8.3,<4.0.0)", "dgml-utils (>=0.3.0,<0.4.0)", "docarray[hnswlib] (>=0.32.0,<0.33.0)", "duckduckgo-search (>=3.8.3,<4.0.0)", "elasticsearch (>=8,<9)", "esprima (>=4.0.1,<5.0.0)", "faiss-cpu (>=1,<2)", "google-api-python-client (==2.70.0)", "google-auth (>=2.18.1,<3.0.0)", "google-search-results (>=2,<3)", "gptcache (>=0.1.7)", "html2text (>=2020.1.16,<2021.0.0)", "huggingface_hub (>=0,<1)", "jinja2 (>=3,<4)", "jq (>=1.4.1,<2.0.0)", "lancedb (>=0.1,<0.2)", "langkit (>=0.0.6,<0.1.0)", "lark (>=1.1.5,<2.0.0)", "librosa (>=0.10.0.post2,<0.11.0)", "lxml (>=4.9.2,<5.0.0)", "manifest-ml (>=0.0.1,<0.0.2)", "marqo (>=1.2.4,<2.0.0)", "momento (>=1.13.0,<2.0.0)", "nebula3-python (>=3.4.0,<4.0.0)", "neo4j (>=5.8.1,<6.0.0)", "networkx (>=2.6.3,<4)", "nlpcloud (>=1,<2)", "nltk (>=3,<4)", "nomic (>=1.0.43,<2.0.0)", "openai (<2)", "openlm (>=0.0.5,<0.0.6)", "opensearch-py (>=2.0.0,<3.0.0)", "pdfminer-six (>=20221105,<20221106)", "pexpect (>=4.8.0,<5.0.0)", "pgvector (>=0.1.6,<0.2.0)", "pinecone-client (>=2,<3)", "pinecone-text (>=0.4.2,<0.5.0)", "psycopg2-binary (>=2.9.5,<3.0.0)", "pymongo (>=4.3.3,<5.0.0)", "pyowm (>=3.3.0,<4.0.0)", "pypdf (>=3.4.0,<4.0.0)", "pytesseract (>=0.3.10,<0.4.0)", "python-arango (>=7.5.9,<8.0.0)", "pyvespa (>=0.33.0,<0.34.0)", "qdrant-client (>=1.3.1,<2.0.0)", "rdflib (>=6.3.2,<7.0.0)", "redis (>=4,<5)", "requests-toolbelt (>=1.0.0,<2.0.0)", "sentence-transformers (>=2,<3)", "singlestoredb (>=0.7.1,<0.8.0)", "tensorflow-text (>=2.11.0,<3.0.0)", "tigrisdb (>=1.0.0b6,<2.0.0)", "tiktoken (>=0.3.2,<0.6.0)", "torch (>=1,<3)", "transformers (>=4,<5)", "weaviate-client (>=3,<4)", "wikipedia (>=1,<2)", "wolframalpha (==5.0.0)"]
azure = ["azure-ai-formrecognizer (>=3.2.1,<4.0.0)", "azure-ai-textanalytics (>=5.3.0,<6.0.0)", "azure-ai-vision (>=0.11.1b1,<0.12.0)", "azure-cognitiveservices-speech (>=1.28.0,<2.0.0)", "azure-core (>=1.26.4,<2.0.0)", "azure-cosmos (>=4.4.0b1,<5.0.0)", "azure-identity (>=1.12.0,<2.0.0)", "azure-search-documents (==11.4.0b8)", "openai (<2)"]
clarifai = ["clarifai (>=9.1.0)"]
cli = ["typer (>=0.9.0,<0.10.0)"]
cohere = ["cohere (>=4,<5)"]
docarray = ["docarray[hnswlib] (>=0.32.0,<0.33.0)"]
embeddings = ["sentence-transformers (>=2,<3)"]
-extended-testing = ["aiosqlite (>=0.19.0,<0.20.0)", "aleph-alpha-client (>=2.15.0,<3.0.0)", "anthropic (>=0.3.11,<0.4.0)", "arxiv (>=1.4,<2.0)", "assemblyai (>=0.17.0,<0.18.0)", "atlassian-python-api (>=3.36.0,<4.0.0)", "beautifulsoup4 (>=4,<5)", "bibtexparser (>=1.4.0,<2.0.0)", "cassio (>=0.1.0,<0.2.0)", "chardet (>=5.1.0,<6.0.0)", "dashvector (>=1.0.1,<2.0.0)", "esprima (>=4.0.1,<5.0.0)", "faiss-cpu (>=1,<2)", "feedparser (>=6.0.10,<7.0.0)", "fireworks-ai (>=0.6.0,<0.7.0)", "geopandas (>=0.13.1,<0.14.0)", "gitpython (>=3.1.32,<4.0.0)", "google-cloud-documentai (>=2.20.1,<3.0.0)", "gql (>=3.4.1,<4.0.0)", "html2text (>=2020.1.16,<2021.0.0)", "javelin-sdk (>=0.1.8,<0.2.0)", "jinja2 (>=3,<4)", "jq (>=1.4.1,<2.0.0)", "jsonschema (>1)", "lxml (>=4.9.2,<5.0.0)", "markdownify (>=0.11.6,<0.12.0)", "motor (>=3.3.1,<4.0.0)", "mwparserfromhell (>=0.6.4,<0.7.0)", "mwxml (>=0.3.3,<0.4.0)", "newspaper3k (>=0.2.8,<0.3.0)", "numexpr (>=2.8.6,<3.0.0)", "openai (<2)", "openapi-pydantic (>=0.3.2,<0.4.0)", "pandas (>=2.0.1,<3.0.0)", "pdfminer-six (>=20221105,<20221106)", "pgvector (>=0.1.6,<0.2.0)", "psychicapi (>=0.8.0,<0.9.0)", "py-trello (>=0.19.0,<0.20.0)", "pymupdf (>=1.22.3,<2.0.0)", "pypdf (>=3.4.0,<4.0.0)", "pypdfium2 (>=4.10.0,<5.0.0)", "pyspark (>=3.4.0,<4.0.0)", "rank-bm25 (>=0.2.2,<0.3.0)", "rapidfuzz (>=3.1.1,<4.0.0)", "rapidocr-onnxruntime (>=1.3.2,<2.0.0)", "requests-toolbelt (>=1.0.0,<2.0.0)", "rspace_client (>=2.5.0,<3.0.0)", "scikit-learn (>=1.2.2,<2.0.0)", "sqlite-vss (>=0.1.2,<0.2.0)", "streamlit (>=1.18.0,<2.0.0)", "sympy (>=1.12,<2.0)", "telethon (>=1.28.5,<2.0.0)", "timescale-vector (>=0.0.1,<0.0.2)", "tqdm (>=4.48.0)", "upstash-redis (>=0.15.0,<0.16.0)", "xata (>=1.0.0a7,<2.0.0)", "xmltodict (>=0.13.0,<0.14.0)"]
+extended-testing = ["aiosqlite (>=0.19.0,<0.20.0)", "aleph-alpha-client (>=2.15.0,<3.0.0)", "anthropic (>=0.3.11,<0.4.0)", "arxiv (>=1.4,<2.0)", "assemblyai (>=0.17.0,<0.18.0)", "atlassian-python-api (>=3.36.0,<4.0.0)", "beautifulsoup4 (>=4,<5)", "bibtexparser (>=1.4.0,<2.0.0)", "cassio (>=0.1.0,<0.2.0)", "chardet (>=5.1.0,<6.0.0)", "cohere (>=4,<5)", "dashvector (>=1.0.1,<2.0.0)", "databricks-vectorsearch (>=0.21,<0.22)", "datasets (>=2.15.0,<3.0.0)", "dgml-utils (>=0.3.0,<0.4.0)", "esprima (>=4.0.1,<5.0.0)", "faiss-cpu (>=1,<2)", "feedparser (>=6.0.10,<7.0.0)", "fireworks-ai (>=0.6.0,<0.7.0)", "geopandas (>=0.13.1,<0.14.0)", "gitpython (>=3.1.32,<4.0.0)", "google-cloud-documentai (>=2.20.1,<3.0.0)", "gql (>=3.4.1,<4.0.0)", "html2text (>=2020.1.16,<2021.0.0)", "javelin-sdk (>=0.1.8,<0.2.0)", "jinja2 (>=3,<4)", "jq (>=1.4.1,<2.0.0)", "jsonschema (>1)", "lxml (>=4.9.2,<5.0.0)", "markdownify (>=0.11.6,<0.12.0)", "motor (>=3.3.1,<4.0.0)", "msal (>=1.25.0,<2.0.0)", "mwparserfromhell (>=0.6.4,<0.7.0)", "mwxml (>=0.3.3,<0.4.0)", "newspaper3k (>=0.2.8,<0.3.0)", "numexpr (>=2.8.6,<3.0.0)", "openai (<2)", "openapi-pydantic (>=0.3.2,<0.4.0)", "pandas (>=2.0.1,<3.0.0)", "pdfminer-six (>=20221105,<20221106)", "pgvector (>=0.1.6,<0.2.0)", "praw (>=7.7.1,<8.0.0)", "psychicapi (>=0.8.0,<0.9.0)", "py-trello (>=0.19.0,<0.20.0)", "pymupdf (>=1.22.3,<2.0.0)", "pypdf (>=3.4.0,<4.0.0)", "pypdfium2 (>=4.10.0,<5.0.0)", "pyspark (>=3.4.0,<4.0.0)", "rank-bm25 (>=0.2.2,<0.3.0)", "rapidfuzz (>=3.1.1,<4.0.0)", "rapidocr-onnxruntime (>=1.3.2,<2.0.0)", "requests-toolbelt (>=1.0.0,<2.0.0)", "rspace_client (>=2.5.0,<3.0.0)", "scikit-learn (>=1.2.2,<2.0.0)", "sqlite-vss (>=0.1.2,<0.2.0)", "streamlit (>=1.18.0,<2.0.0)", "sympy (>=1.12,<2.0)", "telethon (>=1.28.5,<2.0.0)", "timescale-vector (>=0.0.1,<0.0.2)", "tqdm (>=4.48.0)", "upstash-redis (>=0.15.0,<0.16.0)", "xata (>=1.0.0a7,<2.0.0)", "xmltodict (>=0.13.0,<0.14.0)"]
javascript = ["esprima (>=4.0.1,<5.0.0)"]
llms = ["clarifai (>=9.1.0)", "cohere (>=4,<5)", "huggingface_hub (>=0,<1)", "manifest-ml (>=0.0.1,<0.0.2)", "nlpcloud (>=1,<2)", "openai (<2)", "openlm (>=0.0.5,<0.0.6)", "torch (>=1,<3)", "transformers (>=4,<5)"]
openai = ["openai (<2)", "tiktoken (>=0.3.2,<0.6.0)"]
qdrant = ["qdrant-client (>=1.3.1,<2.0.0)"]
text-helpers = ["chardet (>=5.1.0,<6.0.0)"]

+[[package]]
+name = "langchain-core"
+version = "0.0.9"
+description = "Building applications with LLMs through composability"
+optional = false
+python-versions = ">=3.8.1,<4.0"
+files = [
+ {file = "langchain_core-0.0.9-py3-none-any.whl", hash = "sha256:ce5dcf05804cdc4edf0b06d6691cb3dd4ed0014ed9cc08d02bd2b1691344c137"},
+ {file = "langchain_core-0.0.9.tar.gz", hash = "sha256:d3ba6e30ed57ba2a3cbe227daad81fc4edf29cd3d3e24b418782ba69b07cb07d"},
+]
+
+[package.dependencies]
+jsonpatch = ">=1.33,<2.0"
+langsmith = ">=0.0.63,<0.1.0"
+pydantic = ">=1,<3"
+tenacity = ">=8.1.0,<9.0.0"
+
[[package]]
name = "langcodes"
version = "3.3.0"
@@ -1602,13 +1634,13 @@ data = ["language-data (>=1.1,<2.0)"]

[[package]]
name = "langsmith"
-version = "0.0.66"
+version = "0.0.69"
description = "Client library to connect to the LangSmith LLM Tracing and Evaluation Platform."
optional = false
python-versions = ">=3.8.1,<4.0"
files = [
- {file = "langsmith-0.0.66-py3-none-any.whl", hash = "sha256:e5e6d2deff19de827ac04db106b900091c75b6a3c1a1c047a8aa78caf72a63ea"},
- {file = "langsmith-0.0.66.tar.gz", hash = "sha256:33d011c9db9236c06789b17dba97acc023275bafd0c2bf097283730d6608dea7"},
+ {file = "langsmith-0.0.69-py3-none-any.whl", hash = "sha256:49a2546bb83eedb0552673cf81a068bb08078d6d48471f4f1018e1d5c6aa46b1"},
+ {file = "langsmith-0.0.69.tar.gz", hash = "sha256:8fb5297f274db0576ec650d9bab0319acfbb6622d62bc5bb9fe31c6235dc0358"},
]

[package.dependencies]
@@ -2296,13 +2328,13 @@ signedtoken = ["cryptography (>=3.0.0)", "pyjwt (>=2.0.0,<3)"]

[[package]]
name = "openai"
-version = "1.3.5"
+version = "1.3.7"
description = "The official Python library for the openai API"
optional = false
python-versions = ">=3.7.1"
files = [
- {file = "openai-1.3.5-py3-none-any.whl", hash = "sha256:9437458978fb502e61336c3082e02b09c49feebe0e8516a2b8fb4563e6e4af4e"},
- {file = "openai-1.3.5.tar.gz", hash = "sha256:163e7ece4af76e961f58b75ea20a42b0d0c2a240c2f81b41a3d1c5962463cdf8"},
+ {file = "openai-1.3.7-py3-none-any.whl", hash = "sha256:e5c51367a910297e4d1cd33d2298fb87d7edf681edbe012873925ac16f95bee0"},
+ {file = "openai-1.3.7.tar.gz", hash = "sha256:18074a0f51f9b49d1ae268c7abc36f7f33212a0c0d08ce11b7053ab2d17798de"},
]

[package.dependencies]
@@ -2310,6 +2342,7 @@ anyio = ">=3.5.0,<4"
distro = ">=1.7.0,<2"
httpx = ">=0.23.0,<1"
pydantic = ">=1.9.0,<3"
+sniffio = "*"
tqdm = ">4"
typing-extensions = ">=4.5,<5"

@@ -2602,24 +2635,22 @@ murmurhash = ">=0.28.0,<1.1.0"

[[package]]
name = "protobuf"
-version = "4.23.4"
+version = "4.25.1"
description = ""
optional = false
-python-versions = ">=3.7"
+python-versions = ">=3.8"
files = [
- {file = "protobuf-4.23.4-cp310-abi3-win32.whl", hash = "sha256:5fea3c64d41ea5ecf5697b83e41d09b9589e6f20b677ab3c48e5f242d9b7897b"},
- {file = "protobuf-4.23.4-cp310-abi3-win_amd64.whl", hash = "sha256:7b19b6266d92ca6a2a87effa88ecc4af73ebc5cfde194dc737cf8ef23a9a3b12"},
- {file = "protobuf-4.23.4-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:8547bf44fe8cec3c69e3042f5c4fb3e36eb2a7a013bb0a44c018fc1e427aafbd"},
- {file = "protobuf-4.23.4-cp37-abi3-manylinux2014_aarch64.whl", hash = "sha256:fee88269a090ada09ca63551bf2f573eb2424035bcf2cb1b121895b01a46594a"},
- {file = "protobuf-4.23.4-cp37-abi3-manylinux2014_x86_64.whl", hash = "sha256:effeac51ab79332d44fba74660d40ae79985901ac21bca408f8dc335a81aa597"},
- {file = "protobuf-4.23.4-cp37-cp37m-win32.whl", hash = "sha256:c3e0939433c40796ca4cfc0fac08af50b00eb66a40bbbc5dee711998fb0bbc1e"},
- {file = "protobuf-4.23.4-cp37-cp37m-win_amd64.whl", hash = "sha256:9053df6df8e5a76c84339ee4a9f5a2661ceee4a0dab019e8663c50ba324208b0"},
- {file = "protobuf-4.23.4-cp38-cp38-win32.whl", hash = "sha256:e1c915778d8ced71e26fcf43c0866d7499891bca14c4368448a82edc61fdbc70"},
- {file = "protobuf-4.23.4-cp38-cp38-win_amd64.whl", hash = "sha256:351cc90f7d10839c480aeb9b870a211e322bf05f6ab3f55fcb2f51331f80a7d2"},
- {file = "protobuf-4.23.4-cp39-cp39-win32.whl", hash = "sha256:6dd9b9940e3f17077e820b75851126615ee38643c2c5332aa7a359988820c720"},
- {file = "protobuf-4.23.4-cp39-cp39-win_amd64.whl", hash = "sha256:0a5759f5696895de8cc913f084e27fd4125e8fb0914bb729a17816a33819f474"},
- {file = "protobuf-4.23.4-py3-none-any.whl", hash = "sha256:e9d0be5bf34b275b9f87ba7407796556abeeba635455d036c7351f7c183ef8ff"},
- {file = "protobuf-4.23.4.tar.gz", hash = "sha256:ccd9430c0719dce806b93f89c91de7977304729e55377f872a92465d548329a9"},
+ {file = "protobuf-4.25.1-cp310-abi3-win32.whl", hash = "sha256:193f50a6ab78a970c9b4f148e7c750cfde64f59815e86f686c22e26b4fe01ce7"},
+ {file = "protobuf-4.25.1-cp310-abi3-win_amd64.whl", hash = "sha256:3497c1af9f2526962f09329fd61a36566305e6c72da2590ae0d7d1322818843b"},
+ {file = "protobuf-4.25.1-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:0bf384e75b92c42830c0a679b0cd4d6e2b36ae0cf3dbb1e1dfdda48a244f4bcd"},
+ {file = "protobuf-4.25.1-cp37-abi3-manylinux2014_aarch64.whl", hash = "sha256:0f881b589ff449bf0b931a711926e9ddaad3b35089cc039ce1af50b21a4ae8cb"},
+ {file = "protobuf-4.25.1-cp37-abi3-manylinux2014_x86_64.whl", hash = "sha256:ca37bf6a6d0046272c152eea90d2e4ef34593aaa32e8873fc14c16440f22d4b7"},
+ {file = "protobuf-4.25.1-cp38-cp38-win32.whl", hash = "sha256:abc0525ae2689a8000837729eef7883b9391cd6aa7950249dcf5a4ede230d5dd"},
+ {file = "protobuf-4.25.1-cp38-cp38-win_amd64.whl", hash = "sha256:1484f9e692091450e7edf418c939e15bfc8fc68856e36ce399aed6889dae8bb0"},
+ {file = "protobuf-4.25.1-cp39-cp39-win32.whl", hash = "sha256:8bdbeaddaac52d15c6dce38c71b03038ef7772b977847eb6d374fc86636fa510"},
+ {file = "protobuf-4.25.1-cp39-cp39-win_amd64.whl", hash = "sha256:becc576b7e6b553d22cbdf418686ee4daa443d7217999125c045ad56322dda10"},
+ {file = "protobuf-4.25.1-py3-none-any.whl", hash = "sha256:a19731d5e83ae4737bb2a089605e636077ac001d18781b3cf489b9546c7c80d6"},
+ {file = "protobuf-4.25.1.tar.gz", hash = "sha256:57d65074b4f5baa4ab5da1605c02be90ac20c8b40fb137d6a8df9f416b0d0ce2"},
]

[[package]]
@@ -2670,17 +2701,6 @@ files = [
[package.dependencies]
numpy = ">=1.16.6"

-[[package]]
-name = "pyarrow-hotfix"
-version = "0.6"
-description = ""
-optional = false
-python-versions = ">=3.5"
-files = [
- {file = "pyarrow_hotfix-0.6-py3-none-any.whl", hash = "sha256:dcc9ae2d220dff0083be6a9aa8e0cdee5182ad358d4931fce825c545e5c89178"},
- {file = "pyarrow_hotfix-0.6.tar.gz", hash = "sha256:79d3e030f7ff890d408a100ac16d6f00b14d44a502d7897cd9fc3e3a534e9945"},
-]
-
[[package]]
name = "pyasn1"
version = "0.5.1"
@@ -3272,109 +3292,109 @@ pyasn1 = ">=0.1.3"

[[package]]
name = "safetensors"
-version = "0.4.0"
+version = "0.4.1"
description = ""
optional = false
python-versions = ">=3.7"
files = [
- {file = "safetensors-0.4.0-cp310-cp310-macosx_10_7_x86_64.whl", hash = "sha256:2289ae6dbe6d027ecee016b28ced13a2e21a0b3a3a757a23033a2d1c0b1bad55"},
- {file = "safetensors-0.4.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:bf6458959f310f551cbbeef2255527ade5f783f952738e73e4d0136198cc3bfe"},
- {file = "safetensors-0.4.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b6b60a58a8f7cc7aed3b5b73dce1f5259a53c83d9ba43a76a874e6ad868c1b4d"},
- {file = "safetensors-0.4.0-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:491b3477e4d0d4599bb75d79da4b75af2e6ed9b1f6ec2b715991f0bc927bf09a"},
- {file = "safetensors-0.4.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:59d2e10b7e0cd18bb73ed7c17c624a5957b003b81345e18159591771c26ee428"},
- {file = "safetensors-0.4.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3f667a4c12fb593f5f66ce966cb1b14a7148898b2b1a7f79e0761040ae1e3c51"},
- {file = "safetensors-0.4.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5f9909512bcb6f712bdd04c296cdfb0d8ff73d258ffc5af884bb62ea02d221e0"},
- {file = "safetensors-0.4.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:d33d29e846821f0e4f92614022949b09ccf063cb36fe2f9fe099cde1efbfbb87"},
- {file = "safetensors-0.4.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:4d512525a8e05a045ce6698066ba0c5378c174a83e0b3720a8c7799dc1bb06f3"},
- {file = "safetensors-0.4.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:0219cea445177f6ad1f9acd3a8d025440c8ff436d70a4a7c7ba9c36066aa9474"},
- {file = "safetensors-0.4.0-cp310-none-win32.whl", hash = "sha256:67ab171eeaad6972d3971c53d29d53353c67f6743284c6d637b59fa3e54c8a94"},
- {file = "safetensors-0.4.0-cp310-none-win_amd64.whl", hash = "sha256:7ffc736039f08a9ca1f09816a7481b8e4469c06e8f8a5ffa8cb67ddd79e6d77f"},
- {file = "safetensors-0.4.0-cp311-cp311-macosx_10_7_x86_64.whl", hash = "sha256:4fe9e3737b30de458225a23926219ca30b902ee779b6a3df96eaab2b6d625ec2"},
- {file = "safetensors-0.4.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e7916e814a90008de767b1c164a1d83803693c661ffe9af5a697b22e2752edb0"},
- {file = "safetensors-0.4.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cbc4a4da01143472323c145f3c289e5f6fabde0ac0a3414dabf912a21692fff4"},
- {file = "safetensors-0.4.0-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a54c21654a47669b38e359e8f852af754b786c9da884bb61ad5e9af12bd71ccb"},
- {file = "safetensors-0.4.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:25cd407955bad5340ba17f9f8ac789a0d751601a311e2f7b2733f9384478c95e"},
- {file = "safetensors-0.4.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:82e8fc4e3503cd738fd40718a430fe0e5ce6e7ff91a73d6ce628bbb89c41e8ce"},
- {file = "safetensors-0.4.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:48b92059b1a4ad163024d4f526e0e73ebe2bb3ae70537e15e347820b4de5dc27"},
- {file = "safetensors-0.4.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5daa05058f7dce85b5f9f60c4eab483ed7859d63978f08a76e52e78859ff20ca"},
- {file = "safetensors-0.4.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:a86565a5c112dd855909e20144947b4f53abb78c4de207f36ca71ee63ba5b90d"},
- {file = "safetensors-0.4.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:38032078ed9fea52d06584e441bccc73fb475c4581600c6d6166de2fe2deb3d1"},
- {file = "safetensors-0.4.0-cp311-none-win32.whl", hash = "sha256:2f99d90c91b7c76b40a862acd9085bc77f7974a27dee7cfcebe46149af5a99a1"},
- {file = "safetensors-0.4.0-cp311-none-win_amd64.whl", hash = "sha256:74e2a448ffe19be188b457b130168190ee73b5a75e45ba96796320c1f5ae35d2"},
- {file = "safetensors-0.4.0-cp312-cp312-macosx_10_7_x86_64.whl", hash = "sha256:1e2f9c69b41d03b4826ffb96b29e07444bb6b34a78a7bafd0b88d59e8ec75b8a"},
- {file = "safetensors-0.4.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:3910fb5bf747413b59f1a34e6d2a993b589fa7d919709518823c70efaaa350bd"},
- {file = "safetensors-0.4.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cf8fdca709b2470a35a59b1e6dffea75cbe1214b22612b5dd4c93947697aea8b"},
- {file = "safetensors-0.4.0-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2f27b8ef814c5fb43456caeb7f3cbb889b76115180aad1f42402839c14a47c5b"},
- {file = "safetensors-0.4.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7b2d6101eccc43c7be0cb052f13ceda64288b3d8b344b988ed08d7133cbce2f3"},
- {file = "safetensors-0.4.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:fdc34027b545a69be3d4220c140b276129523e4e46db06ad1a0b60d6a4cf9214"},
- {file = "safetensors-0.4.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:db7bb48ca9e90bb9526c71b388d38d8de160c0354f4c5126df23e8701a870dcb"},
- {file = "safetensors-0.4.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:a78ffc0795d3595cd9e4d453502e35f764276c49e434b25556a15a337db4dafc"},
- {file = "safetensors-0.4.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:8e735b0f79090f6855b55e205e820b7b595502ffca0009a5c13eef3661ce465b"},
- {file = "safetensors-0.4.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:f8d2416734e850d5392afffbcb2b8985ea29fb171f1cb197e2ae51b8e35d6438"},
- {file = "safetensors-0.4.0-cp37-cp37m-macosx_10_7_x86_64.whl", hash = "sha256:e853e189ba7d47eaf561094586692ba2bbdd258c096f1755805cac098de0e6ab"},
- {file = "safetensors-0.4.0-cp37-cp37m-macosx_11_0_arm64.whl", hash = "sha256:4b2aa57b5a4d576f3d1dd6e56980026340f156f8a13c13016bfac4e25295b53f"},
- {file = "safetensors-0.4.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3b6c1316ffde6cb4bf22c7445bc9fd224b4d1b9dd7320695f5611c89e802e4b6"},
- {file = "safetensors-0.4.0-cp37-cp37m-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:003077ec85261d00061058fa12e3c1d2055366b02ce8f2938929359ffbaff2b8"},
- {file = "safetensors-0.4.0-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:bd63d83a92f1437a8b0431779320376030ae43ace980bea5686d515de0784100"},
- {file = "safetensors-0.4.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2077801800b4b13301d8d6290c7fb5bd60737320001717153ebc4371776643b5"},
- {file = "safetensors-0.4.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7abe0e157a49a75aeeccfbc4f3dac38d8f98512d3cdb35c200f8e628dc5773cf"},
- {file = "safetensors-0.4.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:3bfed574f6b1e7e7fe1f17213278875ef6c6e8b1582ab6eda93947db1178cae6"},
- {file = "safetensors-0.4.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:964ef166a286ce3b023d0d0bd0e21d440a1c8028981c8abdb136bc7872ba9b3d"},
- {file = "safetensors-0.4.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:44f84373e42183bd56a13a1f2d8acb1db7fedaeffbd83e79cec861477eee1af4"},
- {file = "safetensors-0.4.0-cp37-none-win32.whl", hash = "sha256:c68132727dd86fb641102e494d445f705efe402f4d5e24b278183a15499ab400"},
- {file = "safetensors-0.4.0-cp37-none-win_amd64.whl", hash = "sha256:1db87155454c168aef118d5657a403aee48a4cb08d8851a981157f07351ea317"},
- {file = "safetensors-0.4.0-cp38-cp38-macosx_10_7_x86_64.whl", hash = "sha256:9e583fa68e5a07cc859c4e13c1ebff12029904aa2e27185cf04a1f57fe9a81c4"},
- {file = "safetensors-0.4.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:73e7696dcf3f72f99545eb1abe6106ad65ff1f62381d6ce4b34be3272552897a"},
- {file = "safetensors-0.4.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4936096a57c62e84e200f92620a536be067fc5effe46ecc7f230ebb496ecd579"},
- {file = "safetensors-0.4.0-cp38-cp38-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:87b328ee1591adac332543e1f5fc2c2d7f149b745ebb0d58d7850818ff9cee27"},
- {file = "safetensors-0.4.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b69554c143336256260eceff1d3c0969172a641b54d4668489a711b05f92a2c0"},
- {file = "safetensors-0.4.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3ebf6bcece5d5d1bd6416472f94604d2c834ca752ac60ed42dba7157e595a990"},
- {file = "safetensors-0.4.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6686ce01b8602d55a7d9903c90d4a6e6f90aeb6ddced7cf4605892d0ba94bcb8"},
- {file = "safetensors-0.4.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:9b8fd6cc2f3bda444a048b541c843c7b7fefc89c4120d7898ea7d5b026e93891"},
- {file = "safetensors-0.4.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:8a6abfe67692f81b8bdb99c837f28351c17e624ebf136970c850ee989c720446"},
- {file = "safetensors-0.4.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:27a24ca8822c469ee452db4c13418ba983315a0d863c018a9af15f2305eac38c"},
- {file = "safetensors-0.4.0-cp38-none-win32.whl", hash = "sha256:c4a0a47c8640167792d8261ee21b26430bbc39130a7edaad7f4c0bc05669d00e"},
- {file = "safetensors-0.4.0-cp38-none-win_amd64.whl", hash = "sha256:a738970a367f39249e2abb900d9441a8a86d7ff50083e5eaa6e7760a9f216014"},
- {file = "safetensors-0.4.0-cp39-cp39-macosx_10_7_x86_64.whl", hash = "sha256:806379f37e1abd5d302288c4b2f4186dd7ea7143d4c7811f90a8077f0ae8967b"},
- {file = "safetensors-0.4.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:2b9b94133ed2ae9dda0e95dcace7b7556eba023ffa4c4ae6df8f99377f571d6a"},
- {file = "safetensors-0.4.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6b563a14c43614815a6b524d2e4edeaace50b717f7e7487bb227dd5b68350f5a"},
- {file = "safetensors-0.4.0-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:00a9b157be660fb7ba88fa2eedd05ec93793a5b61e43e783e10cb0b995372802"},
- {file = "safetensors-0.4.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c8f194f45ab6aa767993c24f0aeb950af169dbc5d611b94c9021a1d13b8a1a34"},
- {file = "safetensors-0.4.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:469360b9451db10bfed3881378d5a71b347ecb1ab4f42367d77b8164a13af70b"},
- {file = "safetensors-0.4.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f5f75fa97ccf32a3c7af476c6a0e851023197d3c078f6de3612008fff94735f9"},
- {file = "safetensors-0.4.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:acf0180283c2efae72f1d8c0a4a7974662091df01be3aa43b5237b1e52ed0a01"},
- {file = "safetensors-0.4.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:cd02b495ba0814619f40bda46771bb06dbbf1d42524b66fa03b2a736c77e4515"},
- {file = "safetensors-0.4.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:c42bdea183dbaa99e2f0e6120dc524df79cf4289a6f90f30a534444ef20f49fa"},
- {file = "safetensors-0.4.0-cp39-none-win32.whl", hash = "sha256:cef7bb5d9feae7146c3c3c7b3aef7d2c8b39ba7f5ff4252d368eb69462a47076"},
- {file = "safetensors-0.4.0-cp39-none-win_amd64.whl", hash = "sha256:79dd46fb1f19282fd12f544471efb97823ede927cedbf9cf35550d92b349fdd2"},
- {file = "safetensors-0.4.0-pp310-pypy310_pp73-macosx_10_7_x86_64.whl", hash = "sha256:002301c1afa32909f83745b0c124d002e7ae07e15671f3b43cbebd0ffc5e6037"},
- {file = "safetensors-0.4.0-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:67762d36ae088c73d4a3c96bfc4ea8d31233554f35b6cace3a18533238d462ea"},
- {file = "safetensors-0.4.0-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0f45230f20a206e5e4c7f7bbf9342178410c6f8b0af889843aa99045a76f7691"},
- {file = "safetensors-0.4.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f2ca939bbd8fb2f4dfa28e39a146dad03bc9325e9fc831b68f7b98f69a5a2f1"},
- {file = "safetensors-0.4.0-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:61a00f281391fae5ce91df70918bb61c12d2d514a493fd8056e12114be729911"},
- {file = "safetensors-0.4.0-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:435fd136a42492b280cb55126f9ce9535b35dd49df2c5d572a5945455a439448"},
- {file = "safetensors-0.4.0-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:f0daa788273d683258fb1e4a5e16bef4486b2fca536451a2591bc0f4a6488895"},
- {file = "safetensors-0.4.0-pp37-pypy37_pp73-macosx_10_7_x86_64.whl", hash = "sha256:0620ab0d41e390ccb1c4ea8f63dc00cb5f0b96a5cdd3cd0d64c21765720c074a"},
- {file = "safetensors-0.4.0-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bc1fa8d067733cb67f22926689ee808f08afacf7700d2ffb44efae90a0693eb1"},
- {file = "safetensors-0.4.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dcaa40bc363edda145db75cd030f3b1822e5478d550c3500a42502ecef32c959"},
- {file = "safetensors-0.4.0-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:b561fbc044db7beff2ece0ec219a291809d45a38d30c6b38e7cc46482582f4ba"},
- {file = "safetensors-0.4.0-pp37-pypy37_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:79a983b09782dacf9a1adb19bb98f4a8f6c3144108939f572c047b5797e43cf5"},
- {file = "safetensors-0.4.0-pp37-pypy37_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:10b65cd3ad79f5d0daf281523b4146bc271a34bb7430d4e03212e0de8622dab8"},
- {file = "safetensors-0.4.0-pp38-pypy38_pp73-macosx_10_7_x86_64.whl", hash = "sha256:114decacc475a6a9e2f9102a00c171d113ddb5d35cb0bda0db2c0c82b2eaa9ce"},
- {file = "safetensors-0.4.0-pp38-pypy38_pp73-macosx_11_0_arm64.whl", hash = "sha256:72ddb741dd5fe42521db76a70e012f76995516a12e7e0ef26be03ea9be77802a"},
- {file = "safetensors-0.4.0-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c5556c2ec75f5a6134866eddd7341cb36062e6edaea343478a279591b63ddba"},
- {file = "safetensors-0.4.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ed50f239b0ce7ae85b078395593b4a351ede7e6f73af25f4873e3392336f64c9"},
- {file = "safetensors-0.4.0-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:495dcaea8fbab70b927d2274e2547824462737acbf98ccd851a71124f779a5c6"},
- {file = "safetensors-0.4.0-pp38-pypy38_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:3f4d90c79a65ba2fe2ff0876f6140748f0a3ce6a21e27a35190f4f96321803f8"},
- {file = "safetensors-0.4.0-pp38-pypy38_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:7a524382b5c55b5fbb168e0e9d3f502450c8cf3fb81b93e880018437c206a482"},
- {file = "safetensors-0.4.0-pp39-pypy39_pp73-macosx_10_7_x86_64.whl", hash = "sha256:9849ea60c7e840bfdd6030ad454d4a6ba837b3398c902f15a30460dd6961c28c"},
- {file = "safetensors-0.4.0-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:6c42623ae7045615d9eaa6877b9df1db4e9cc71ecc14bcc721ea1e475dddd595"},
- {file = "safetensors-0.4.0-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80cb8342f00f3c41b3b93b1a599b84723280d3ac90829bc62262efc03ab28793"},
- {file = "safetensors-0.4.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d8c4f5ed4ede384dea8c99bae76b0718a828dbf7b2c8ced1f44e3b9b1a124475"},
- {file = "safetensors-0.4.0-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:40d7cf03493bfe75ef62e2c716314474b28d9ba5bf4909763e4b8dd14330c01a"},
- {file = "safetensors-0.4.0-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:232029f0a9fa6fa1f737324eda98a700409811186888536a2333cbbf64e41741"},
- {file = "safetensors-0.4.0-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:9ed55f4a20c78ff3e8477efb63c8303c2152cdfb3bfea4d025a80f54d38fd628"},
- {file = "safetensors-0.4.0.tar.gz", hash = "sha256:b985953c3cf11e942eac4317ef3db3da713e274109cf7cfb6076d877054f013e"},
+ {file = "safetensors-0.4.1-cp310-cp310-macosx_10_7_x86_64.whl", hash = "sha256:cba01c6b76e01ec453933b3b3c0157c59b52881c83eaa0f7666244e71aa75fd1"},
+ {file = "safetensors-0.4.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:7a8f6f679d97ea0135c7935c202feefbd042c149aa70ee759855e890c01c7814"},
+ {file = "safetensors-0.4.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bbc2ce1f5ae5143a7fb72b71fa71db6a42b4f6cf912aa3acdc6b914084778e68"},
+ {file = "safetensors-0.4.1-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2d87d993eaefe6611a9c241a8bd364a5f1ffed5771c74840363a6c4ed8d868f6"},
+ {file = "safetensors-0.4.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:097e9af2efa8778cd2f0cba451784253e62fa7cc9fc73c0744d27212f7294e25"},
+ {file = "safetensors-0.4.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d10a9f7bae608ccfdc009351f01dc3d8535ff57f9488a58a4c38e45bf954fe93"},
+ {file = "safetensors-0.4.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:270b99885ec14abfd56c1d7f28ada81740a9220b4bae960c3de1c6fe84af9e4d"},
+ {file = "safetensors-0.4.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:285b52a481e7ba93e29ad4ec5841ef2c4479ef0a6c633c4e2629e0508453577b"},
+ {file = "safetensors-0.4.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:c3c9f0ca510e0de95abd6424789dcbc879942a3a4e29b0dfa99d9427bf1da75c"},
+ {file = "safetensors-0.4.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:88b4653059c903015284a9722f9a46838c654257173b279c8f6f46dbe80b612d"},
+ {file = "safetensors-0.4.1-cp310-none-win32.whl", hash = "sha256:2fe6926110e3d425c4b684a4379b7796fdc26ad7d16922ea1696c8e6ea7e920f"},
+ {file = "safetensors-0.4.1-cp310-none-win_amd64.whl", hash = "sha256:a79e16222106b2f5edbca1b8185661477d8971b659a3c814cc6f15181a9b34c8"},
+ {file = "safetensors-0.4.1-cp311-cp311-macosx_10_7_x86_64.whl", hash = "sha256:d93321eea0dd7e81b283e47a1d20dee6069165cc158286316d0d06d340de8fe8"},
+ {file = "safetensors-0.4.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:8ff8e41c8037db17de0ea2a23bc684f43eaf623be7d34906fe1ac10985b8365e"},
+ {file = "safetensors-0.4.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:39d36f1d88468a87c437a1bc27c502e71b6ca44c385a9117a9f9ba03a75cc9c6"},
+ {file = "safetensors-0.4.1-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:7ef010e9afcb4057fb6be3d0a0cfa07aac04fe97ef73fe4a23138d8522ba7c17"},
+ {file = "safetensors-0.4.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b287304f2b2220d51ccb51fd857761e78bcffbeabe7b0238f8dc36f2edfd9542"},
+ {file = "safetensors-0.4.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e09000b2599e1836314430f81a3884c66a5cbabdff5d9f175b5d560d4de38d78"},
+ {file = "safetensors-0.4.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e9c80ce0001efa16066358d2dd77993adc25f5a6c61850e4ad096a2232930bce"},
+ {file = "safetensors-0.4.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:413e1f6ac248f7d1b755199a06635e70c3515493d3b41ba46063dec33aa2ebb7"},
+ {file = "safetensors-0.4.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:d3ac139377cfe71ba04573f1cda66e663b7c3e95be850e9e6c2dd4b5984bd513"},
+ {file = "safetensors-0.4.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:04157d008385bea66d12fe90844a80d4a76dc25ec5230b5bd9a630496d1b7c03"},
+ {file = "safetensors-0.4.1-cp311-none-win32.whl", hash = "sha256:5f25297148ec665f0deb8bd67e9564634d8d6841041ab5393ccfe203379ea88b"},
+ {file = "safetensors-0.4.1-cp311-none-win_amd64.whl", hash = "sha256:b2f8877990a72ff595507b80f4b69036a9a1986a641f8681adf3425d97d3d2a5"},
+ {file = "safetensors-0.4.1-cp312-cp312-macosx_10_7_x86_64.whl", hash = "sha256:eb2c1da1cc39509d1a55620a5f4d14f8911c47a89c926a96e6f4876e864375a3"},
+ {file = "safetensors-0.4.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:303d2c0415cf15a28f8d7f17379ea3c34c2b466119118a34edd9965983a1a8a6"},
+ {file = "safetensors-0.4.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bb4cb3e37a9b961ddd68e873b29fe9ab4a081e3703412e34aedd2b7a8e9cafd9"},
+ {file = "safetensors-0.4.1-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:ae5497adc68669db2fed7cb2dad81e6a6106e79c9a132da3efdb6af1db1014fa"},
+ {file = "safetensors-0.4.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b30abd0cddfe959d1daedf92edcd1b445521ebf7ddefc20860ed01486b33c90"},
+ {file = "safetensors-0.4.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d784a98c492c751f228a4a894c3b8a092ff08b24e73b5568938c28b8c0e8f8df"},
+ {file = "safetensors-0.4.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e57a5ab08b0ec7a7caf30d2ac79bb30c89168431aca4f8854464bb9461686925"},
+ {file = "safetensors-0.4.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:edcf3121890b5f0616aa5a54683b1a5d2332037b970e507d6bb7841a3a596556"},
+ {file = "safetensors-0.4.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:fdb58dee173ef33634c3016c459d671ca12d11e6acf9db008261cbe58107e579"},
+ {file = "safetensors-0.4.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:780dc21eb3fd32ddd0e8c904bdb0290f2454f4ac21ae71e94f9ce72db1900a5a"},
+ {file = "safetensors-0.4.1-cp37-cp37m-macosx_10_7_x86_64.whl", hash = "sha256:48901bd540f8a3c1791314bc5c8a170927bf7f6acddb75bf0a263d081a3637d4"},
+ {file = "safetensors-0.4.1-cp37-cp37m-macosx_11_0_arm64.whl", hash = "sha256:3b0b7b2d5976fbed8a05e2bbdce5816a59e6902e9e7c7e07dc723637ed539787"},
+ {file = "safetensors-0.4.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8f69903ff49cb30b9227fb5d029bea276ea20d04b06803877a420c5b1b74c689"},
+ {file = "safetensors-0.4.1-cp37-cp37m-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:0ddd050e01f3e843aa8c1c27bf68675b8a08e385d0045487af4d70418c3cb356"},
+ {file = "safetensors-0.4.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9a82bc2bd7a9a0e08239bdd6d7774d64121f136add93dfa344a2f1a6d7ef35fa"},
+ {file = "safetensors-0.4.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6ace9e66a40f98a216ad661245782483cf79cf56eb2b112650bb904b0baa9db5"},
+ {file = "safetensors-0.4.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:82cbb8f4d022f2e94498cbefca900698b8ded3d4f85212f47da614001ff06652"},
+ {file = "safetensors-0.4.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:791edc10a3c359a2f5f52d5cddab0df8a45107d91027d86c3d44e57162e5d934"},
+ {file = "safetensors-0.4.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:83c2cfbe8c6304f0891e7bb378d56f66d2148972eeb5f747cd8a2246886f0d8c"},
+ {file = "safetensors-0.4.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:04dd14f53f5500eb4c4149674216ba1000670efbcf4b1b5c2643eb244e7882ea"},
+ {file = "safetensors-0.4.1-cp37-none-win32.whl", hash = "sha256:d5b3defa74f3723a388bfde2f5d488742bc4879682bd93267c09a3bcdf8f869b"},
+ {file = "safetensors-0.4.1-cp37-none-win_amd64.whl", hash = "sha256:25a043cbb59d4f75e9dd87fdf5c009dd8830105a2c57ace49b72167dd9808111"},
+ {file = "safetensors-0.4.1-cp38-cp38-macosx_10_7_x86_64.whl", hash = "sha256:3f6a520af7f2717c5ecba112041f2c8af1ca6480b97bf957aba81ed9642e654c"},
+ {file = "safetensors-0.4.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:c3807ac3b16288dffebb3474b555b56fe466baa677dfc16290dcd02dca1ab228"},
+ {file = "safetensors-0.4.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8b58ba13a9e82b4bc3fc221914f6ef237fe6c2adb13cede3ace64d1aacf49610"},
+ {file = "safetensors-0.4.1-cp38-cp38-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:dac4bb42f8679aadc59bd91a4c5a1784a758ad49d0912995945cd674089f628e"},
+ {file = "safetensors-0.4.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:911b48dc09e321a194def3a7431662ff4f03646832f3a8915bbf0f449b8a5fcb"},
+ {file = "safetensors-0.4.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:82571d20288c975c1b30b08deb9b1c3550f36b31191e1e81fae87669a92217d0"},
+ {file = "safetensors-0.4.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:da52ee0dc8ba03348ffceab767bd8230842fdf78f8a996e2a16445747143a778"},
+ {file = "safetensors-0.4.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2536b11ce665834201072e9397404170f93f3be10cca9995b909f023a04501ee"},
+ {file = "safetensors-0.4.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:998fbac99ca956c3a09fe07cc0b35fac26a521fa8865a690686d889f0ff4e4a6"},
+ {file = "safetensors-0.4.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:845be0aafabf2a60c2d482d4e93023fecffe5e5443d801d7a7741bae9de41233"},
+ {file = "safetensors-0.4.1-cp38-none-win32.whl", hash = "sha256:ce7a28bc8af685a69d7e869d09d3e180a275e3281e29cf5f1c7319e231932cc7"},
+ {file = "safetensors-0.4.1-cp38-none-win_amd64.whl", hash = "sha256:e056fb9e22d118cc546107f97dc28b449d88274207dd28872bd668c86216e4f6"},
+ {file = "safetensors-0.4.1-cp39-cp39-macosx_10_7_x86_64.whl", hash = "sha256:bdc0d039e44a727824639824090bd8869535f729878fa248addd3dc01db30eae"},
+ {file = "safetensors-0.4.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:3c1b1d510c7aba71504ece87bf393ea82638df56303e371e5e2cf09d18977dd7"},
+ {file = "safetensors-0.4.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0bd0afd95c1e497f520e680ea01e0397c0868a3a3030e128438cf6e9e3fcd671"},
+ {file = "safetensors-0.4.1-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:f603bdd8deac6726d39f41688ed353c532dd53935234405d79e9eb53f152fbfb"},
+ {file = "safetensors-0.4.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d8a85e3e47e0d4eebfaf9a58b40aa94f977a56050cb5598ad5396a9ee7c087c6"},
+ {file = "safetensors-0.4.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e0ccb5aa0f3be2727117e5631200fbb3a5b3a2b3757545a92647d6dd8be6658f"},
+ {file = "safetensors-0.4.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d784938534e255473155e4d9f276ee69eb85455b6af1292172c731409bf9adee"},
+ {file = "safetensors-0.4.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:a257de175c254d39ccd6a21341cd62eb7373b05c1e618a78096a56a857e0c316"},
+ {file = "safetensors-0.4.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:6fd80f7794554091836d4d613d33a7d006e2b8d6ba014d06f97cebdfda744f64"},
+ {file = "safetensors-0.4.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:35803201d980efcf964b75a0a2aee97fe5e9ecc5f3ad676b38fafdfe98e0620d"},
+ {file = "safetensors-0.4.1-cp39-none-win32.whl", hash = "sha256:7ff8a36e0396776d3ed9a106fc9a9d7c55d4439ca9a056a24bf66d343041d3e6"},
+ {file = "safetensors-0.4.1-cp39-none-win_amd64.whl", hash = "sha256:bfa2e20342b81921b98edba52f8deb68843fa9c95250739a56b52ceda5ea5c61"},
+ {file = "safetensors-0.4.1-pp310-pypy310_pp73-macosx_10_7_x86_64.whl", hash = "sha256:ae2d5a31cfb8a973a318f7c4d2cffe0bd1fe753cdf7bb41a1939d45a0a06f964"},
+ {file = "safetensors-0.4.1-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:1a45dbf03e8334d3a5dc93687d98b6dc422f5d04c7d519dac09b84a3c87dd7c6"},
+ {file = "safetensors-0.4.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2297b359d91126c0f9d4fd17bae3cfa2fe3a048a6971b8db07db746ad92f850c"},
+ {file = "safetensors-0.4.1-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bda3d98e2bcece388232cfc551ebf063b55bdb98f65ab54df397da30efc7dcc5"},
+ {file = "safetensors-0.4.1-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:f8934bdfd202ebd0697040a3dff40dd77bc4c5bbf3527ede0532f5e7fb4d970f"},
+ {file = "safetensors-0.4.1-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:42c3710cec7e5c764c7999697516370bee39067de0aa089b7e2cfb97ac8c6b20"},
+ {file = "safetensors-0.4.1-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:53134226053e56bd56e73f7db42596e7908ed79f3c9a1016e4c1dade593ac8e5"},
+ {file = "safetensors-0.4.1-pp37-pypy37_pp73-macosx_10_7_x86_64.whl", hash = "sha256:257d59e40a1b367cb544122e7451243d65b33c3f34d822a347f4eea6fdf97fdf"},
+ {file = "safetensors-0.4.1-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2d54c2f1826e790d1eb2d2512bfd0ee443f0206b423d6f27095057c7f18a0687"},
+ {file = "safetensors-0.4.1-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:645b3f1138fce6e818e79d4128afa28f0657430764cc045419c1d069ff93f732"},
+ {file = "safetensors-0.4.1-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e9a7ffb1e551c6df51d267f5a751f042b183df22690f6feceac8d27364fd51d7"},
+ {file = "safetensors-0.4.1-pp37-pypy37_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:44e230fbbe120de564b64f63ef3a8e6ff02840fa02849d9c443d56252a1646d4"},
+ {file = "safetensors-0.4.1-pp37-pypy37_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:9d16b3b2fcc6fca012c74bd01b5619c655194d3e3c13e4d4d0e446eefa39a463"},
+ {file = "safetensors-0.4.1-pp38-pypy38_pp73-macosx_10_7_x86_64.whl", hash = "sha256:5d95ea4d8b32233910734a904123bdd3979c137c461b905a5ed32511defc075f"},
+ {file = "safetensors-0.4.1-pp38-pypy38_pp73-macosx_11_0_arm64.whl", hash = "sha256:dab431699b5d45e0ca043bc580651ce9583dda594e62e245b7497adb32e99809"},
+ {file = "safetensors-0.4.1-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:16d8bbb7344e39cb9d4762e85c21df94ebeb03edac923dd94bb9ed8c10eac070"},
+ {file = "safetensors-0.4.1-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1faf5111c66a6ba91f85dff2e36edaaf36e6966172703159daeef330de4ddc7b"},
+ {file = "safetensors-0.4.1-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:660ca1d8bff6c7bc7c6b30b9b32df74ef3ab668f5df42cefd7588f0d40feadcb"},
+ {file = "safetensors-0.4.1-pp38-pypy38_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:ae2f67f04ed0bb2e56fd380a8bd3eef03f609df53f88b6f5c7e89c08e52aae00"},
+ {file = "safetensors-0.4.1-pp38-pypy38_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:c8ed5d2c04cdc1afc6b3c28d59580448ac07732c50d94c15e14670f9c473a2ce"},
+ {file = "safetensors-0.4.1-pp39-pypy39_pp73-macosx_10_7_x86_64.whl", hash = "sha256:2b6a2814278b6660261aa9a9aae524616de9f1ec364e3716d219b6ed8f91801f"},
+ {file = "safetensors-0.4.1-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:3cfd1ca35eacc635f0eaa894e5c5ed83ffebd0f95cac298fd430014fa7323631"},
+ {file = "safetensors-0.4.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4177b456c6b0c722d82429127b5beebdaf07149d265748e97e0a34ff0b3694c8"},
+ {file = "safetensors-0.4.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:313e8472197bde54e3ec54a62df184c414582979da8f3916981b6a7954910a1b"},
+ {file = "safetensors-0.4.1-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:fdb4adb76e21bad318210310590de61c9f4adcef77ee49b4a234f9dc48867869"},
+ {file = "safetensors-0.4.1-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:1d568628e9c43ca15eb96c217da73737c9ccb07520fafd8a1eba3f2750614105"},
+ {file = "safetensors-0.4.1-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:573b6023a55a2f28085fc0a84e196c779b6cbef4d9e73acea14c8094fee7686f"},
+ {file = "safetensors-0.4.1.tar.gz", hash = "sha256:2304658e6ada81a5223225b4efe84748e760c46079bffedf7e321763cafb36c9"},
]
[package.extras]
@@ -3556,13 +3576,13 @@ files = [
[[package]]
name = "sentry-sdk"
-version = "1.36.0"
+version = "1.38.0"
description = "Python client for Sentry (https://sentry.io)"
optional = false
python-versions = "*"
files = [
- {file = "sentry-sdk-1.36.0.tar.gz", hash = "sha256:f32dd16547f2f45e1c71a96fd4a48925e629541f7ddfe3d5d25ef7d5e94eb3c8"},
- {file = "sentry_sdk-1.36.0-py2.py3-none-any.whl", hash = "sha256:25d574f94fdf72199e331c2401fdac60d01b5be8f32822174c51c3ff0fc2f8cb"},
+ {file = "sentry-sdk-1.38.0.tar.gz", hash = "sha256:8feab81de6bbf64f53279b085bd3820e3e737403b0a0d9317f73a2c3374ae359"},
+ {file = "sentry_sdk-1.38.0-py2.py3-none-any.whl", hash = "sha256:0017fa73b8ae2d4e57fd2522ee3df30453715b29d2692142793ec5d5f90b94a6"},
]
[package.dependencies]
@@ -3945,22 +3965,22 @@ doc = ["reno", "sphinx", "tornado (>=4.5)"]
[[package]]
name = "tensorboard"
-version = "2.15.1"
+version = "2.14.1"
description = "TensorBoard lets you watch Tensors Flow"
optional = false
python-versions = ">=3.9"
files = [
- {file = "tensorboard-2.15.1-py3-none-any.whl", hash = "sha256:c46c1d1cf13a458c429868a78b2531d8ff5f682058d69ec0840b0bc7a38f1c0f"},
+ {file = "tensorboard-2.14.1-py3-none-any.whl", hash = "sha256:3db108fb58f023b6439880e177743c5f1e703e9eeb5fb7d597871f949f85fd58"},
]
[package.dependencies]
absl-py = ">=0.4"
google-auth = ">=1.6.3,<3"
-google-auth-oauthlib = ">=0.5,<2"
+google-auth-oauthlib = ">=0.5,<1.1"
grpcio = ">=1.48.2"
markdown = ">=2.6.8"
numpy = ">=1.12.0"
-protobuf = ">=3.19.6,<4.24"
+protobuf = ">=3.19.6"
requests = ">=2.21.0,<3"
setuptools = ">=41.0.0"
six = ">1.9"
@@ -3981,26 +4001,26 @@ files = [
[[package]]
name = "tensorflow"
-version = "2.15.0"
+version = "2.14.1"
description = "TensorFlow is an open source machine learning framework for everyone."
optional = false
python-versions = ">=3.9"
files = [
- {file = "tensorflow-2.15.0-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:9b248e0f4316b3a3c54cd1f83edfb7a761d473060c1972a8ea31a90d5de3aa72"},
- {file = "tensorflow-2.15.0-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:eaf420d8b8ec1d4bd75859be7d7545d8e7052726eed8456fdbba63718e7e07ea"},
- {file = "tensorflow-2.15.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e98aab454fc73ff1900314821e5bafbf20840ada2004c8caccf4d92e0e12a628"},
- {file = "tensorflow-2.15.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ed601b43df9b7d9bed0203b34bcb9356efd4f671eaaac1046b7166a2afee0cf8"},
- {file = "tensorflow-2.15.0-cp310-cp310-win_amd64.whl", hash = "sha256:2d88f8b71f4a8d9ab9dc7c8e42b14ca0f53d1daab0f989b8f2918907c2891f41"},
- {file = "tensorflow-2.15.0-cp311-cp311-macosx_10_15_x86_64.whl", hash = "sha256:1e0716622ed7af867d8b1997b00a2940f1a1587dee923ff53efa2ee506992f32"},
- {file = "tensorflow-2.15.0-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:124930e7d4f5d74c61a5c80d642a26c22fe0c42fdd383fe9ee5803c3ac9ed4ce"},
- {file = "tensorflow-2.15.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:852efeb4d18beedac0120c4f2d4f4dccf4c090bb6740c5199d395ff609e85e98"},
- {file = "tensorflow-2.15.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dee8ec2b2c6c942ae65d25746e53cdc475e82d5fcbbb3009ce47f5963d69ebfc"},
- {file = "tensorflow-2.15.0-cp311-cp311-win_amd64.whl", hash = "sha256:e05a48006930e4e9e68468e7affed3bbce8a1c7fe6df86500496ad1558804a78"},
- {file = "tensorflow-2.15.0-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:2cfcdde1ff3c01be617e99ce9783c49cb11da5796ce32a31855412bd092c0bcf"},
- {file = "tensorflow-2.15.0-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:896bda03f722700a9918d144aee5152a75f1be5e6c5045fd0683b8318a3fc9d9"},
- {file = "tensorflow-2.15.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e7697b005ce48fec8b2ee8cf25bcbd138f16b5e17f99f7c01a6ea3f2429f86c6"},
- {file = "tensorflow-2.15.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3fa865956d96b7614f247c36e4c22b1543ba5ce656fbe8e4f6266ae7a4917132"},
- {file = "tensorflow-2.15.0-cp39-cp39-win_amd64.whl", hash = "sha256:01108746e1bbfcd48dfabf7f51ddca7693b91ea6821f6f62a27b5a5ebf0817c5"},
+ {file = "tensorflow-2.14.1-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:f6e9ac1e53db30f1759148f731f87b9d12da5ce0f153fc49406824efd486aae7"},
+ {file = "tensorflow-2.14.1-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:7156bf1f7311dada7dba5345b526a38e6f4e4f4b8509bee162a24342bf6571b2"},
+ {file = "tensorflow-2.14.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f5781aadad5b46e2de4e373b0ca15a852b90d58982270a6db02ec52e4986316d"},
+ {file = "tensorflow-2.14.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9a955c42164eff4d751732c1274ca4bf059db60c9e2362098ce1eed7177c3fe9"},
+ {file = "tensorflow-2.14.1-cp310-cp310-win_amd64.whl", hash = "sha256:4be5f4327a6e854f64b4dcfd08a51c5fc7cc3fea8c76c5bf5c0c3deb002d5221"},
+ {file = "tensorflow-2.14.1-cp311-cp311-macosx_10_15_x86_64.whl", hash = "sha256:597dd6665a91b3d4b881f0d40277eb55b65b04567553206a46e7db9cfa067310"},
+ {file = "tensorflow-2.14.1-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:9833e61423ad2726f81e3fc770558b81d5f0a454bdb2dad717c5474ea837ce91"},
+ {file = "tensorflow-2.14.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:14a48a087954722d9e73086e8ce28a14b1f9f889ea5845c7c0bf30d8747ab6e2"},
+ {file = "tensorflow-2.14.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c9aa05a98450fa5bc4efd529383b7d15c10ec12b0238a6744baa1508c4bfa4d5"},
+ {file = "tensorflow-2.14.1-cp311-cp311-win_amd64.whl", hash = "sha256:11958d12e39d44a9f5fc753fc312dd1726a8506f2d2606e01421ca4ee9dc5c55"},
+ {file = "tensorflow-2.14.1-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:d95404f78a8d5e3d2481383dbe2d2286341ccf9bc5cbb19d857c646494d860c6"},
+ {file = "tensorflow-2.14.1-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:511c4c5bfb2af17c6ca22663f98a7267c4386bf5486fbe78ee2d21482a6fa822"},
+ {file = "tensorflow-2.14.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f66d2990157cf27f80c730878cb8befa8ed9716223494037d31c80fbe5f64370"},
+ {file = "tensorflow-2.14.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a9ab2747f75aba0327bfe6092b963694f1001781e5d2c0d251dfeed02b0c3bba"},
+ {file = "tensorflow-2.14.1-cp39-cp39-win_amd64.whl", hash = "sha256:7f5c9215bc00ba88f1cde1399f8160a5cb865c20ad71a1d5a6869f9fad62d9a5"},
]
[package.dependencies]
@@ -4011,33 +4031,33 @@ gast = ">=0.2.1,<0.5.0 || >0.5.0,<0.5.1 || >0.5.1,<0.5.2 || >0.5.2"
google-pasta = ">=0.1.1"
grpcio = ">=1.24.3,<2.0"
h5py = ">=2.9.0"
-keras = ">=2.15.0,<2.16"
+keras = ">=2.14.0,<2.15"
libclang = ">=13.0.0"
-ml-dtypes = ">=0.2.0,<0.3.0"
+ml-dtypes = "0.2.0"
numpy = ">=1.23.5,<2.0.0"
opt-einsum = ">=2.3.2"
packaging = "*"
protobuf = ">=3.20.3,<4.21.0 || >4.21.0,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<5.0.0dev"
setuptools = "*"
six = ">=1.12.0"
-tensorboard = ">=2.15,<2.16"
-tensorflow-estimator = ">=2.15.0,<2.16"
+tensorboard = ">=2.14,<2.15"
+tensorflow-estimator = ">=2.14.0,<2.15"
tensorflow-io-gcs-filesystem = ">=0.23.1"
termcolor = ">=1.1.0"
typing-extensions = ">=3.6.6"
wrapt = ">=1.11.0,<1.15"
[package.extras]
-and-cuda = ["nvidia-cublas-cu12 (==12.2.5.6)", "nvidia-cuda-cupti-cu12 (==12.2.142)", "nvidia-cuda-nvcc-cu12 (==12.2.140)", "nvidia-cuda-nvrtc-cu12 (==12.2.140)", "nvidia-cuda-runtime-cu12 (==12.2.140)", "nvidia-cudnn-cu12 (==8.9.4.25)", "nvidia-cufft-cu12 (==11.0.8.103)", "nvidia-curand-cu12 (==10.3.3.141)", "nvidia-cusolver-cu12 (==11.5.2.141)", "nvidia-cusparse-cu12 (==12.1.2.141)", "nvidia-nccl-cu12 (==2.16.5)", "nvidia-nvjitlink-cu12 (==12.2.140)", "tensorrt (==8.6.1.post1)", "tensorrt-bindings (==8.6.1)", "tensorrt-libs (==8.6.1)"]
+and-cuda = ["nvidia-cublas-cu11 (==11.11.3.6)", "nvidia-cuda-cupti-cu11 (==11.8.87)", "nvidia-cuda-nvcc-cu11 (==11.8.89)", "nvidia-cuda-runtime-cu11 (==11.8.89)", "nvidia-cudnn-cu11 (==8.7.0.84)", "nvidia-cufft-cu11 (==10.9.0.58)", "nvidia-curand-cu11 (==10.3.0.86)", "nvidia-cusolver-cu11 (==11.4.1.48)", "nvidia-cusparse-cu11 (==11.7.5.86)", "nvidia-nccl-cu11 (==2.16.5)", "tensorrt (==8.5.3.1)"]
[[package]]
name = "tensorflow-estimator"
-version = "2.15.0"
+version = "2.14.0"
description = "TensorFlow Estimator."
optional = false
python-versions = ">=3.7"
files = [
- {file = "tensorflow_estimator-2.15.0-py2.py3-none-any.whl", hash = "sha256:aedf21eec7fb2dc91150fc91a1ce12bc44dbb72278a08b58e79ff87c9e28f153"},
+ {file = "tensorflow_estimator-2.14.0-py2.py3-none-any.whl", hash = "sha256:820bf57c24aa631abb1bbe4371739ed77edb11361d61381fd8e790115ac0fd57"},
]
[[package]]
@@ -4077,13 +4097,13 @@ tensorflow-rocm = ["tensorflow-rocm (>=2.13.0,<2.14.0)"]
[[package]]
name = "termcolor"
-version = "2.3.0"
+version = "2.4.0"
description = "ANSI color formatting for output in terminal"
optional = false
-python-versions = ">=3.7"
+python-versions = ">=3.8"
files = [
- {file = "termcolor-2.3.0-py3-none-any.whl", hash = "sha256:3afb05607b89aed0ffe25202399ee0867ad4d3cb4180d98aaf8eefa6a5f7d475"},
- {file = "termcolor-2.3.0.tar.gz", hash = "sha256:b5b08f68937f138fe92f6c089b99f1e2da0ae56c52b78bf7075fd95420fd9a5a"},
+ {file = "termcolor-2.4.0-py3-none-any.whl", hash = "sha256:9297c0df9c99445c2412e832e882a7884038a25617c60cea2ad69488d4040d63"},
+ {file = "termcolor-2.4.0.tar.gz", hash = "sha256:aab9e56047c8ac41ed798fa36d892a37aca6b3e9159f3e0c24bc64a9b3ac7b7a"},
]
[package.extras]
@@ -4183,40 +4203,47 @@ files = [
[[package]]
name = "tiktoken"
-version = "0.5.1"
+version = "0.5.2"
description = "tiktoken is a fast BPE tokeniser for use with OpenAI's models"
optional = false
python-versions = ">=3.8"
files = [
- {file = "tiktoken-0.5.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:2b0bae3fd56de1c0a5874fb6577667a3c75bf231a6cef599338820210c16e40a"},
- {file = "tiktoken-0.5.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:e529578d017045e2f0ed12d2e00e7e99f780f477234da4aae799ec4afca89f37"},
- {file = "tiktoken-0.5.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:edd2ffbb789712d83fee19ab009949f998a35c51ad9f9beb39109357416344ff"},
- {file = "tiktoken-0.5.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e4c73d47bdc1a3f1f66ffa019af0386c48effdc6e8797e5e76875f6388ff72e9"},
- {file = "tiktoken-0.5.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:46b8554b9f351561b1989157c6bb54462056f3d44e43aa4e671367c5d62535fc"},
- {file = "tiktoken-0.5.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:92ed3bbf71a175a6a4e5fbfcdb2c422bdd72d9b20407e00f435cf22a68b4ea9b"},
- {file = "tiktoken-0.5.1-cp310-cp310-win_amd64.whl", hash = "sha256:714efb2f4a082635d9f5afe0bf7e62989b72b65ac52f004eb7ac939f506c03a4"},
- {file = "tiktoken-0.5.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:a10488d1d1a5f9c9d2b2052fdb4cf807bba545818cb1ef724a7f5d44d9f7c3d4"},
- {file = "tiktoken-0.5.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:8079ac065572fe0e7c696dbd63e1fdc12ce4cdca9933935d038689d4732451df"},
- {file = "tiktoken-0.5.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7ef730db4097f5b13df8d960f7fdda2744fe21d203ea2bb80c120bb58661b155"},
- {file = "tiktoken-0.5.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:426e7def5f3f23645dada816be119fa61e587dfb4755de250e136b47a045c365"},
- {file = "tiktoken-0.5.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:323cec0031358bc09aa965c2c5c1f9f59baf76e5b17e62dcc06d1bb9bc3a3c7c"},
- {file = "tiktoken-0.5.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:5abd9436f02e2c8eda5cce2ff8015ce91f33e782a7423de2a1859f772928f714"},
- {file = "tiktoken-0.5.1-cp311-cp311-win_amd64.whl", hash = "sha256:1fe99953b63aabc0c9536fbc91c3c9000d78e4755edc28cc2e10825372046a2d"},
- {file = "tiktoken-0.5.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:dcdc630461927718b317e6f8be7707bd0fc768cee1fdc78ddaa1e93f4dc6b2b1"},
- {file = "tiktoken-0.5.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:1f2b3b253e22322b7f53a111e1f6d7ecfa199b4f08f3efdeb0480f4033b5cdc6"},
- {file = "tiktoken-0.5.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:43ce0199f315776dec3ea7bf86f35df86d24b6fcde1babd3e53c38f17352442f"},
- {file = "tiktoken-0.5.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a84657c083d458593c0235926b5c993eec0b586a2508d6a2020556e5347c2f0d"},
- {file = "tiktoken-0.5.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:c008375c0f3d97c36e81725308699116cd5804fdac0f9b7afc732056329d2790"},
- {file = "tiktoken-0.5.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:779c4dea5edd1d3178734d144d32231e0b814976bec1ec09636d1003ffe4725f"},
- {file = "tiktoken-0.5.1-cp38-cp38-win_amd64.whl", hash = "sha256:b5dcfcf9bfb798e86fbce76d40a1d5d9e3f92131aecfa3d1e5c9ea1a20f1ef1a"},
- {file = "tiktoken-0.5.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9b180a22db0bbcc447f691ffc3cf7a580e9e0587d87379e35e58b826ebf5bc7b"},
- {file = "tiktoken-0.5.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:2b756a65d98b7cf760617a6b68762a23ab8b6ef79922be5afdb00f5e8a9f4e76"},
- {file = "tiktoken-0.5.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ba9873c253ca1f670e662192a0afcb72b41e0ba3e730f16c665099e12f4dac2d"},
- {file = "tiktoken-0.5.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:74c90d2be0b4c1a2b3f7dde95cd976757817d4df080d6af0ee8d461568c2e2ad"},
- {file = "tiktoken-0.5.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:709a5220891f2b56caad8327fab86281787704931ed484d9548f65598dea9ce4"},
- {file = "tiktoken-0.5.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:5d5a187ff9c786fae6aadd49f47f019ff19e99071dc5b0fe91bfecc94d37c686"},
- {file = "tiktoken-0.5.1-cp39-cp39-win_amd64.whl", hash = "sha256:e21840043dbe2e280e99ad41951c00eff8ee3b63daf57cd4c1508a3fd8583ea2"},
- {file = "tiktoken-0.5.1.tar.gz", hash = "sha256:27e773564232004f4f810fd1f85236673ec3a56ed7f1206fc9ed8670ebedb97a"},
+ {file = "tiktoken-0.5.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:8c4e654282ef05ec1bd06ead22141a9a1687991cef2c6a81bdd1284301abc71d"},
+ {file = "tiktoken-0.5.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:7b3134aa24319f42c27718c6967f3c1916a38a715a0fa73d33717ba121231307"},
+ {file = "tiktoken-0.5.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6092e6e77730929c8c6a51bb0d7cfdf1b72b63c4d033d6258d1f2ee81052e9e5"},
+ {file = "tiktoken-0.5.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:72ad8ae2a747622efae75837abba59be6c15a8f31b4ac3c6156bc56ec7a8e631"},
+ {file = "tiktoken-0.5.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:51cba7c8711afa0b885445f0637f0fcc366740798c40b981f08c5f984e02c9d1"},
+ {file = "tiktoken-0.5.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:3d8c7d2c9313f8e92e987d585ee2ba0f7c40a0de84f4805b093b634f792124f5"},
+ {file = "tiktoken-0.5.2-cp310-cp310-win_amd64.whl", hash = "sha256:692eca18c5fd8d1e0dde767f895c17686faaa102f37640e884eecb6854e7cca7"},
+ {file = "tiktoken-0.5.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:138d173abbf1ec75863ad68ca289d4da30caa3245f3c8d4bfb274c4d629a2f77"},
+ {file = "tiktoken-0.5.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7388fdd684690973fdc450b47dfd24d7f0cbe658f58a576169baef5ae4658607"},
+ {file = "tiktoken-0.5.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a114391790113bcff670c70c24e166a841f7ea8f47ee2fe0e71e08b49d0bf2d4"},
+ {file = "tiktoken-0.5.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ca96f001e69f6859dd52926d950cfcc610480e920e576183497ab954e645e6ac"},
+ {file = "tiktoken-0.5.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:15fed1dd88e30dfadcdd8e53a8927f04e1f6f81ad08a5ca824858a593ab476c7"},
+ {file = "tiktoken-0.5.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:93f8e692db5756f7ea8cb0cfca34638316dcf0841fb8469de8ed7f6a015ba0b0"},
+ {file = "tiktoken-0.5.2-cp311-cp311-win_amd64.whl", hash = "sha256:bcae1c4c92df2ffc4fe9f475bf8148dbb0ee2404743168bbeb9dcc4b79dc1fdd"},
+ {file = "tiktoken-0.5.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:b76a1e17d4eb4357d00f0622d9a48ffbb23401dcf36f9716d9bd9c8e79d421aa"},
+ {file = "tiktoken-0.5.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:01d8b171bb5df4035580bc26d4f5339a6fd58d06f069091899d4a798ea279d3e"},
+ {file = "tiktoken-0.5.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:42adf7d4fb1ed8de6e0ff2e794a6a15005f056a0d83d22d1d6755a39bffd9e7f"},
+ {file = "tiktoken-0.5.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4c3f894dbe0adb44609f3d532b8ea10820d61fdcb288b325a458dfc60fefb7db"},
+ {file = "tiktoken-0.5.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:58ccfddb4e62f0df974e8f7e34a667981d9bb553a811256e617731bf1d007d19"},
+ {file = "tiktoken-0.5.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:58902a8bad2de4268c2a701f1c844d22bfa3cbcc485b10e8e3e28a050179330b"},
+ {file = "tiktoken-0.5.2-cp312-cp312-win_amd64.whl", hash = "sha256:5e39257826d0647fcac403d8fa0a474b30d02ec8ffc012cfaf13083e9b5e82c5"},
+ {file = "tiktoken-0.5.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8bde3b0fbf09a23072d39c1ede0e0821f759b4fa254a5f00078909158e90ae1f"},
+ {file = "tiktoken-0.5.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:2ddee082dcf1231ccf3a591d234935e6acf3e82ee28521fe99af9630bc8d2a60"},
+ {file = "tiktoken-0.5.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:35c057a6a4e777b5966a7540481a75a31429fc1cb4c9da87b71c8b75b5143037"},
+ {file = "tiktoken-0.5.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4c4a049b87e28f1dc60509f8eb7790bc8d11f9a70d99b9dd18dfdd81a084ffe6"},
+ {file = "tiktoken-0.5.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:5bf5ce759089f4f6521ea6ed89d8f988f7b396e9f4afb503b945f5c949c6bec2"},
+ {file = "tiktoken-0.5.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:0c964f554af1a96884e01188f480dad3fc224c4bbcf7af75d4b74c4b74ae0125"},
+ {file = "tiktoken-0.5.2-cp38-cp38-win_amd64.whl", hash = "sha256:368dd5726d2e8788e47ea04f32e20f72a2012a8a67af5b0b003d1e059f1d30a3"},
+ {file = "tiktoken-0.5.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:a2deef9115b8cd55536c0a02c0203512f8deb2447f41585e6d929a0b878a0dd2"},
+ {file = "tiktoken-0.5.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:2ed7d380195affbf886e2f8b92b14edfe13f4768ff5fc8de315adba5b773815e"},
+ {file = "tiktoken-0.5.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c76fce01309c8140ffe15eb34ded2bb94789614b7d1d09e206838fc173776a18"},
+ {file = "tiktoken-0.5.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:60a5654d6a2e2d152637dd9a880b4482267dfc8a86ccf3ab1cec31a8c76bfae8"},
+ {file = "tiktoken-0.5.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:41d4d3228e051b779245a8ddd21d4336f8975563e92375662f42d05a19bdff41"},
+ {file = "tiktoken-0.5.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:a5c1cdec2c92fcde8c17a50814b525ae6a88e8e5b02030dc120b76e11db93f13"},
+ {file = "tiktoken-0.5.2-cp39-cp39-win_amd64.whl", hash = "sha256:84ddb36faedb448a50b246e13d1b6ee3437f60b7169b723a4b2abad75e914f3e"},
+ {file = "tiktoken-0.5.2.tar.gz", hash = "sha256:f54c581f134a8ea96ce2023ab221d4d4d81ab614efa0b2fbce926387deb56c80"},
]
[package.dependencies]
@@ -4686,13 +4713,13 @@ watchdog = ["watchdog (>=2.3)"]
[[package]]
name = "wheel"
-version = "0.41.3"
+version = "0.42.0"
description = "A built-package format for Python"
optional = false
python-versions = ">=3.7"
files = [
- {file = "wheel-0.41.3-py3-none-any.whl", hash = "sha256:488609bc63a29322326e05560731bf7bfea8e48ad646e1f5e40d366607de0942"},
- {file = "wheel-0.41.3.tar.gz", hash = "sha256:4d4987ce51a49370ea65c0bfd2234e8ce80a12780820d9dc462597a6e60d0841"},
+ {file = "wheel-0.42.0-py3-none-any.whl", hash = "sha256:177f9c9b0d45c47873b619f5b650346d632cdc35fb5e4d25058e09c9e581433d"},
+ {file = "wheel-0.42.0.tar.gz", hash = "sha256:c45be39f7882c9d34243236f2d63cbd58039e360f85d0913425fbd7ceea617a8"},
]
[package.extras]
@@ -5004,4 +5031,4 @@ multidict = ">=4.0"
[metadata]
lock-version = "2.0"
python-versions = ">=3.10,<3.12"
-content-hash = "d5cec8b014ae7206884b1535e1244ea71a211ee054c64a0ab6b52144fcb6df90"
+content-hash = "e3c2b60f106ff965f422be2aee07a052d61eaceb712e1597dcb6c116385ed7b6"
diff --git a/pyproject.toml b/pyproject.toml
index 4be88639b..c8bc734b5 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "deepeval"
-version = "0.20.24"
+version = "0.20.29"
description = "The Evaluation Framework for LLMs"
authors = ["Jeffrey Ip "]
license = "Apache-2.0"
@@ -40,6 +40,7 @@ black = "*"
portalocker = "*"
openai = "*"
langchain = "*"
+protobuf = "^4.21.6"
[tool.black]
line-length = 80
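
Worth noting on the `pyproject.toml` change above: Poetry's caret constraint `^4.21.6` expands to the range `>=4.21.6,<5.0.0`, which lines up with the new `protobuf>=4.21.6` floor in the `setup.py` diff below. A quick sanity check with the `packaging` library (the version strings are illustrative):

```python
# Poetry's "^4.21.6" caret constraint is shorthand for ">=4.21.6,<5.0.0";
# packaging's SpecifierSet lets us verify which pins fall inside that range.
from packaging.specifiers import SpecifierSet

spec = SpecifierSet(">=4.21.6,<5.0.0")  # what "^4.21.6" expands to
print("4.25.1" in spec)  # True: a protobuf 4.x release satisfies the pin
print("3.20.3" in spec)  # False: the old protobuf<=3.20.5 world is excluded
```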
diff --git a/setup.py b/setup.py
index 0c432fd02..a915dcc2b 100644
--- a/setup.py
+++ b/setup.py
@@ -25,7 +25,7 @@
"pytest",
"typer==0.9.0",
"rich",
- "protobuf<=3.20.5",
+ "protobuf>=4.21.6",
"pandas",
"pydantic", # loosen pydantic requirements as we support multiple
"sentry-sdk",
@@ -35,15 +35,13 @@
"rouge_score==0.1.2",
"nltk==3.8.1",
"ragas",
+ "detoxify",
],
extras_require={
"bias": [
"tensorflow", # for bias
"Dbias", # for bias
],
- "toxic": [
- "detoxify", # for toxic classifier
- ],
"dev": ["black"],
},
entry_points={
diff --git a/tests/test_answer_relevancy.py b/tests/test_answer_relevancy.py
index 9ff598fca..596af2465 100644
--- a/tests/test_answer_relevancy.py
+++ b/tests/test_answer_relevancy.py
@@ -4,7 +4,7 @@
from deepeval.test_case import LLMTestCase
from deepeval.metrics import AnswerRelevancyMetric
-from deepeval.evaluator import assert_test, run_test
+from deepeval import assert_test, run_test
query = "What is Python?"
answer = "Python is a programming language?"
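
Combined with the import swap above, the visible pieces of `tests/test_answer_relevancy.py` assemble into a self-contained test. A minimal sketch, assuming `AnswerRelevancyMetric` accepts a `minimum_score` threshold (the 0.5 value is illustrative, not from the diff):

```python
# Sketch of the answer relevancy test using the new top-level imports.
from deepeval import assert_test
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

query = "What is Python?"
answer = "Python is a programming language?"

def test_answer_relevancy():
    # minimum_score is an assumed threshold keyword; 0.5 is illustrative
    metric = AnswerRelevancyMetric(minimum_score=0.5)
    test_case = LLMTestCase(input=query, actual_output=answer)
    assert_test(test_case, [metric])
```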
diff --git a/tests/test_bias.py b/tests/test_bias.py
index 2e61210a4..59d6697af 100644
--- a/tests/test_bias.py
+++ b/tests/test_bias.py
@@ -5,7 +5,7 @@
import pytest
from deepeval.test_case import LLMTestCase, LLMTestCaseParams
from deepeval.metrics import UnBiasedMetric
-from deepeval.evaluator import assert_test
+from deepeval import assert_test
output = "Winners of the FIFA world cup were the French national football team"
diff --git a/tests/test_custom_metric.py b/tests/test_custom_metric.py
index 7148bb36f..f9e75db0a 100644
--- a/tests/test_custom_metric.py
+++ b/tests/test_custom_metric.py
@@ -3,7 +3,7 @@
from deepeval.test_case import LLMTestCase
from deepeval.metrics import BaseMetric
-from deepeval.evaluator import assert_test
+from deepeval import assert_test
class LengthMetric(BaseMetric):
diff --git a/tests/test_dataset.py b/tests/test_dataset.py
index 5e83db2e7..770bb05fc 100644
--- a/tests/test_dataset.py
+++ b/tests/test_dataset.py
@@ -2,6 +2,9 @@
import pytest
from deepeval.dataset import EvaluationDataset
+from deepeval.metrics import HallucinationMetric
+from deepeval import assert_test
+from deepeval.test_case import LLMTestCase
dataset = EvaluationDataset()
@@ -28,3 +31,15 @@ def test_create_dataset():
context_key_name="context",
)
assert len(dataset.test_cases) == 10, "Test Cases not loaded from JSON"
+
+ # dataset.push("alias")
+
+
+# dataset.pull("alias")
+# @pytest.mark.parametrize(
+# "test_case",
+# dataset,
+# )
+# def test_customer_chatbot(test_case: LLMTestCase):
+# hallucination_metric = HallucinationMetric(minimum_score=0.3)
+# assert_test(test_case, [hallucination_metric])
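
For readers following the commented-out block just added to `tests/test_dataset.py`, this is its uncommented shape: pull a dataset by alias from Confident AI, then parametrize pytest over its test cases. A sketch assuming a dataset with that alias exists; `"alias"` and the 0.3 threshold come straight from the commented code:

```python
# Uncommented sketch of the dataset-driven test from the diff above.
import pytest

from deepeval import assert_test
from deepeval.dataset import EvaluationDataset
from deepeval.metrics import HallucinationMetric
from deepeval.test_case import LLMTestCase

dataset = EvaluationDataset()
dataset.pull("alias")  # "alias" is a placeholder dataset name

@pytest.mark.parametrize("test_case", dataset)
def test_customer_chatbot(test_case: LLMTestCase):
    hallucination_metric = HallucinationMetric(minimum_score=0.3)
    assert_test(test_case, [hallucination_metric])
```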
diff --git a/tests/test_hallucination_metric.py b/tests/test_hallucination_metric.py
index c7e3ad6be..c37518785 100644
--- a/tests/test_hallucination_metric.py
+++ b/tests/test_hallucination_metric.py
@@ -1,7 +1,7 @@
import pytest
from deepeval.test_case import LLMTestCase
from deepeval.metrics import HallucinationMetric
-from deepeval.evaluator import assert_test
+from deepeval import assert_test
def test_hallucination_metric():
diff --git a/tests/test_llm_metric.py b/tests/test_llm_metric.py
index ffaef5532..4183ee2ff 100644
--- a/tests/test_llm_metric.py
+++ b/tests/test_llm_metric.py
@@ -2,7 +2,7 @@
import openai
from deepeval.test_case import LLMTestCase, LLMTestCaseParams
from deepeval.metrics import LLMEvalMetric
-from deepeval.evaluator import assert_test
+from deepeval import assert_test
def test_chat_completion():
diff --git a/tests/test_quickstart.py b/tests/test_quickstart.py
index c08ebd93d..504cb2b87 100644
--- a/tests/test_quickstart.py
+++ b/tests/test_quickstart.py
@@ -5,7 +5,7 @@
from deepeval.metrics import HallucinationMetric
from deepeval.test_case import LLMTestCase
-from deepeval.evaluator import assert_test
+from deepeval import assert_test
def generate_llm_output(query: str):
diff --git a/tests/test_ragas.py b/tests/test_ragas.py
index 5dff6575e..e05f5c2d6 100644
--- a/tests/test_ragas.py
+++ b/tests/test_ragas.py
@@ -12,7 +12,7 @@
MaliciousnessMetric,
)
from deepeval.metrics.ragas_metric import AnswerRelevancyMetric
-from deepeval.evaluator import assert_test
+from deepeval import assert_test, evaluate
query = "Who won the FIFA World Cup in 2018?"
output = "Winners of the FIFA world cup were the French national football team"
@@ -50,10 +50,10 @@ def test_everything():
metric1 = ContextualRelevancyMetric()
metric2 = FaithfulnessMetric()
metric3 = ContextRecallMetric()
- metric4 = ConcisenessMetric()
- metric5 = CorrectnessMetric()
- metric6 = CoherenceMetric()
- metric7 = MaliciousnessMetric()
+ # metric4 = ConcisenessMetric()
+ # metric5 = CorrectnessMetric()
+ # metric6 = CoherenceMetric()
+ # metric7 = MaliciousnessMetric()
metric8 = AnswerRelevancyMetric()
metric9 = ContextualPrecisionMetric()
metric10 = RagasMetric()
diff --git a/tests/test_toxic.py b/tests/test_toxic.py
index 8e0f46868..64f26e4cd 100644
--- a/tests/test_toxic.py
+++ b/tests/test_toxic.py
@@ -5,7 +5,7 @@
import pytest
from deepeval.test_case import LLMTestCase, LLMTestCaseParams
from deepeval.metrics import NonToxicMetric
-from deepeval.evaluator import assert_test
+from deepeval import assert_test
output = "Winners of the FIFA world cup were the French national football team"
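
Finally, since `detoxify` moved from the `[toxic]` extra into the core install (see the `setup.py` diff above), the toxicity test now runs on a plain `pip install -U deepeval`. A minimal sketch built from the names `tests/test_toxic.py` imports, assuming `NonToxicMetric` takes `evaluation_params` and `minimum_score` keywords:

```python
# Sketch of a toxicity check; keyword names are assumptions, not from the diff.
from deepeval import assert_test
from deepeval.metrics import NonToxicMetric
from deepeval.test_case import LLMTestCase, LLMTestCaseParams

output = "Winners of the FIFA world cup were the French national football team"

def test_non_toxic():
    metric = NonToxicMetric(
        evaluation_params=[LLMTestCaseParams.ACTUAL_OUTPUT],  # field to score
        minimum_score=0.5,  # illustrative threshold
    )
    test_case = LLMTestCase(input="placeholder question", actual_output=output)
    assert_test(test_case, [metric])
```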