Releases: ZenGuard-AI/fast-llm-security-guardrails
Releases · ZenGuard-AI/fast-llm-security-guardrails
v0.2.1: 2024-12-02
BREAKING CHANGE: removed detect_async method
What's Changed
- import Tier by @kainamer in #52
- [version] up the version by @baur-krykpayev in #53
- toxicity removed by @kenesaryy in #54
- fix - detect_async method and IGNORE zenguard e2e tests by @nurvdil in #55
New Contributors
Full Changelog: v0.1.20...v0.2.1
v0.1.20: 2024-10-21
Introduce tiered API endpoints.
v0.1.19: 2024 - 2024-06-25
Fix the bugs in the
- OpenAI Integration
- Unit tests
v0.1.18: 2024 - 2024-06-17
New functionality added:
- Async Prompt Injection detection support.
- Prompt Injection detection reporting support.
- Added unit tests
Reworked:
- zenguard's detect methods now raise errors.
v0.1.15: 2024 - 2024-06-10
- If detector is single use a dedicated API
- Provide latency numbers
- Remove prints
v0.1.13: 2024 - 2024-05-07
Added running detectors in parallel.
v0.1.10: 2024 - 2024-04-23
Changed
- API change
message
is changed tomessages
and accepts the list of strings.
v0.1.8: 2024 - 2024-04-21
Added
- Support for Toxicity Detectors
- Bug fixes