DeepSparse v1.6.0
New Features:
- Version support added:
- Decoder-only text generation LLMs are optimized in DeepSparse and offer state-of-the-art performance with sparsity! Install with `pip install deepsparse[llm]`, then use the `TextGeneration` pipeline. For performance details, check out our Sparse Fine-Tuning paper. (#1022, #1035, #1061, #1081, #1132, #1122, #1137, #1121, #1139, #1126, #1151, #1140, #1173, #1166, #1176, #1172, #1190, #1142, #1205, #1204, #1212, #1214, #1194, #1218, #1196, #1217, #1216, #1225, #1240, #1254, #1246, #1250, #1266, #1270, #1276, #1274, #1235, #1284, #1285, #1304, #1308, #1310, #1313, #1272)
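A minimal usage sketch, assuming the `deepsparse[llm]` extra is installed; the SparseZoo stub and the `max_new_tokens` argument are illustrative and may need adjusting for your installed version:

```python
from deepsparse import TextGeneration

# Model argument is illustrative; point it at a sparse LLM SparseZoo stub
# or a local deployment directory containing the exported model.
pipeline = TextGeneration(model="zoo:mpt-7b-dolly_mpt_pretrain-pruned50_quantized")

# Run generation; generation kwargs such as max_new_tokens are hedged
# assumptions, check the TextGeneration docs for the exact parameter set.
output = pipeline(prompt="Write one sentence about sparsity.", max_new_tokens=64)
print(output.generations[0].text)
```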
- OpenAI-compatible DeepSparse Server has been added, enabling standard OpenAI requests for performant LLMs. (#1171, #1221, #1228, #1317)
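With the server running, a standard OpenAI-style request can be sent over plain HTTP. A hedged sketch, assuming the server listens on localhost port 5543 and exposes a `/v1/chat/completions` route; the port, route, and model name are assumptions to verify against your server's startup logs:

```shell
# Assumes a DeepSparse OpenAI-compatible server is already running on
# localhost:5543; the model identifier below is an illustrative placeholder.
curl http://localhost:5543/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "hf:mgoin/TinyStories-1M-deepsparse",
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 32
      }'
```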
- MLServer-compatible pathways added for DeepSparse Server to enable standard MLServer requests. (#1237)
- CLIP model support for deployments and performance functionality is now enabled. (Documentation) (#1098, #1145, #1203)
- Several encoder-decoder networks have been optimized for performance: Donut, Whisper, and T5.
- Support for ARM processors is now generally available. ARMv8.2 or above is required for quantized performance. (#1307)
- Support for macOS is now in Beta. macOS Ventura (version 13) or above and Apple silicon are required. (#1088, #1096, #1290, #1307)
- DeepSparse Server updated to support generic pipeline Python implementations for easy extensibility. (#1033)
- YOLOv8 deployment pipelines and model support have been added. (#1044, #1052, #1040, #1138, #1261)
- AWS and GCP marketplace documentation added: AWS | GCP (#1056, #1057)
- DigitalOcean marketplace integration added. (Documentation) (#1109)
- DeepSparse Azure marketplace integration added. (Documentation) (#1066)
- DeepSparse Pipeline timing added. To access it, use `pipeline.timer_manager` or the `deepsparse.benchmark_pipeline` CLI. (#1062, #1150, #1268, #1259, #1294)
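The CLI route can be sketched as follows; the task name and SparseZoo stub are illustrative placeholders, and the exact argument order should be checked against `deepsparse.benchmark_pipeline --help`:

```shell
# Benchmark a full pipeline (pre-processing, engine, post-processing) end
# to end; task and model stub below are illustrative, not prescriptive.
deepsparse.benchmark_pipeline text_classification \
  "zoo:nlp/sentiment_analysis/obert-base/pytorch/huggingface/sst2/pruned90_quantized-none"
```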
- `TorchScriptEngine` class added to enable benchmarking and evaluation comparisons to DeepSparse. (#1015)
- `debug_analysis` API now supports exporting CSVs, enabling easier analysis. (#1253)
- SentenceTransformers deployment and performance support have been added. (#1301)
Changes:
- DeepSparse upgraded for the SparseZoo V2 model file structure changes, which expand the number of supported files and reduce the number of bytes that need to be downloaded for model checkpoints, folders, and files. (#1233, #1234, #1303, #1318)
- YOLOv5 deployment pipelines migrated to install from `nm-yolov5` on PyPI, removing the autoinstall from the `nm-yolov5` GitHub repository that previously ran on invocation of the relevant pathways; this enables more predictable environments. (#1030, #1101, #1129, #1111, #1167)
- Docker builds are updated to consistently rebuild for new releases and nightlies. (#1012, #1068, #1069, #1113, #1144)
- Torchvision deployment pipelines have been upgraded to support 0.14.x. (#1034)
- README and documentation updated to include: the Slack Community name change, the Contact Us form introduction, Python version changes, corrections for broken YOLOv5, torchvision, transformers, and SparseZoo links, and installation command updates. (#1041, #1042, #1043, #1039, #1048, #931, #960, #1279, #1282, #1280, #1313)
- ONNX utilities updated so that ONNX model arguments can be passed as either a model file path (past behavior) or an ONNX `ModelProto` Python object. (#1089)
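The dual-input pattern can be sketched with a hypothetical helper; this is not DeepSparse's actual utility function, just an illustration of accepting either a path or an in-memory model object:

```python
from pathlib import Path


def resolve_model(model):
    """Normalize a model argument into a (kind, value) pair.

    Hypothetical helper mirroring the path-or-ModelProto behavior described
    above; DeepSparse's real ONNX utilities differ in their details.
    """
    if isinstance(model, (str, Path)):
        # Past behavior: a file path pointing at an ONNX model on disk.
        return ("path", str(model))
    # New behavior: an already-loaded model object (e.g. onnx.ModelProto)
    # is passed through without touching the filesystem.
    return ("proto", model)
```

Accepting both forms lets callers that build or modify models in memory skip a round-trip through a temporary file.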
- Deployment directories containing a `model.onnx` file now load properly for all pipelines supported by DeepSparse Server. Previously, the exact path to the `model.onnx` file had to be supplied rather than a deployment directory. (#1131)
- Flake8 updated to 6.1 to enable the latest standards for running `make quality`. (#1156)
- Automatic link checking has been added to GitHub Actions. (#1226)
- DeepSparse Pipeline is now printable: `__str__` and `__repr__` are implemented and show useful information when a pipeline is printed. (#1298)
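The pattern can be illustrated with a minimal stand-in class; this is a hypothetical sketch, not DeepSparse's actual `Pipeline` implementation:

```python
class TimedPipeline:
    """Minimal stand-in showing the printable-object pattern described
    above; hypothetical, not DeepSparse's real Pipeline class."""

    def __init__(self, task, model_path):
        self.task = task
        self.model_path = model_path

    def __repr__(self):
        # Unambiguous, constructor-like form, useful in debuggers and REPLs.
        return f"TimedPipeline(task={self.task!r}, model_path={self.model_path!r})"

    def __str__(self):
        # Human-friendly summary shown by print().
        return f"{self.task} pipeline backed by {self.model_path}"


print(TimedPipeline("text_generation", "model.onnx"))
```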
- `nm-transformers` package has been fully removed and replaced with the native transformers package, which works with DeepSparse. (#1302)
Performance and Compression Improvements:
- The memory footprint used during model compilation for models with external weights has been greatly reduced.
- The memory footprint has been reduced by sharing weights between compiled engines, for example, when using bucketing.
- Matrix-Vector Multiplication (GEVM) with a sparse weight matrix is now supported for both performance and reduced memory footprint.
- Matrix-Matrix Multiplication (GEMM) with a sparse weight matrix is further optimized for performance and reduced memory footprint.
- AVX2-VNNI instructions are now used to improve the performance of DeepSparse.
- Grouped Query Attention (GQA) in transformers is now optimized.
- Improved performance of Gathers with constant data and dynamic indices, like the ones used for embeddings in transformers and recommendation models.
- The InstanceNormalization operator is now supported for performance.
- The Where operator has improved performance in some cases by fusing it onto other operators.
- The Clip operator is now supported for performance with operands of any data type.
Resolved Issues:
- Assertion failures for GEMM operations with broadcast-stacked dimensions have been resolved.
- Unit and integration tests updated to remove temporary test files, which were not being properly deleted, and to limit test file creation. (#1058)
- `deepsparse.benchmark` was failing with an `AttributeError` when the `-shapes` argument was supplied, causing no benchmarks to be measured. (#1071)
- DeepSparse Server with a `model.onnx` file in the model directory was causing the server to raise an exception for image classification pipelines. (#1070)
- `generate_random_inputs` function no longer creates random data with shapes of 0 when ONNX files containing dynamic dimensions are given. (#1086)
- Pydantic version pinned to <2.0, preventing `NameError`s from being raised anytime pipelines are constructed. (#1104)
- AWS Lambda serverless examples and implementations updated to avoid exceptions being thrown while running inference in AWS Lambda. (#1115)
- DeepSparse Pipelines: if `num_cores` was not supplied as an explicit kwarg for a bucketing pipeline, a `KeyError` was triggered. This has been fixed so the pipeline works correctly without `num_cores` being explicitly supplied. (#1152)
- `eval_downstream` for Transformers pathways no longer fails due to a PyTorch requirement not being installed; the PyTorch dependency has been removed, and the pathway now runs correctly. (#1187)
- Reliability of the `test_pipeline_call_is_async` unit test has been improved to produce consistent results. (#1251, #1264, #1267)
- Torchvision previously needed to be installed for any tests to pass, including transformers and other unrelated pipelines; if it was not installed, the tests would fail with an import error. (#1251)
Known Issues:
- The compile time for dense LLMs can be very slow; this will be addressed in a forthcoming release.
- Docker images are not currently pushing. A resolution is forthcoming for functional Docker builds. [RESOLVED]