[executorch] Migrate runtime/executor tests to new namespace #4616

Closed
wants to merge 43 commits
Changes from all commits

Commits (43)
f5bf1b2
[executorch] Migrate runtime/executor tests to new namespace
dbort Aug 9, 2024
3273e8b
Update base for Update on "[executorch] Migrate runtime/executor test…
dbort Aug 9, 2024
cd0c0b7
Update on "[executorch] Migrate runtime/executor tests to new namespace"
dbort Aug 9, 2024
d050b24
Update base for Update on "[executorch] Migrate runtime/executor test…
dbort Aug 9, 2024
7d19167
Update on "[executorch] Migrate runtime/executor tests to new namespace"
dbort Aug 9, 2024
768c28d
Update base for Update on "[executorch] Migrate runtime/executor test…
dbort Aug 9, 2024
deb5d79
Update on "[executorch] Migrate runtime/executor tests to new namespace"
dbort Aug 9, 2024
7b27f9b
Back out "Back out "[executorch][PR] Add stories ci for qnn""
cccclai Aug 14, 2024
6efc222
[llava][19/N] Add multimodal runner base class and build file
larryliu0820 Aug 14, 2024
0b51363
Add 256 constraint executorch.max_kernel_num for
devanl Aug 14, 2024
023ab35
Use MmapDataLoader::MlockConfig::NoMlock for Module::LoadMode::Mmap
kirklandsign Aug 14, 2024
2780528
Update bundle identifier.
shoumikhin Aug 14, 2024
c541bc1
Fix return type mismatch in choose_qparams_tensor_out
python3kgae Aug 14, 2024
1cb97e0
Initial Implementation of MediaTek Backend for Executorch
neuropilot-captain Aug 14, 2024
ef56414
[XNNPACK] Share workspace across delegate instances
digantdesai Aug 14, 2024
7e8a4fb
Update base for Update on "[executorch] Migrate runtime/executor test…
dbort Aug 14, 2024
d0b3fdf
Update on "[executorch] Migrate runtime/executor tests to new namespace"
dbort Aug 14, 2024
84100d1
[llava][20/N] Add llava runner using building blocks in e/llm/runner …
larryliu0820 Aug 15, 2024
f1b741e
Use the common return_type field to support ET-QNN internally and ext…
derekxu Aug 15, 2024
5c4a2a2
[MPS] Add support for Int4 groupwise quantization
DenisVieriu97 Aug 15, 2024
35a15a6
[llava] Fix llava test-model-linux CI job
larryliu0820 Aug 15, 2024
a9ed835
[Core ML] Implement intermediate tensor logging
cymbalrush Aug 15, 2024
54f8932
API life cycle and deprecation policy in official documentation
Olivia-liu Aug 15, 2024
938748b
FuseDequantLinearPass to convert dq -> linear into weight_int8packed_mm
nathanaelsee Aug 15, 2024
48b4304
Implement mm op for Arm backend
tom-arm Aug 15, 2024
35da5bf
Add event tracing and ETDumps to executor_runner
benkli01 Aug 15, 2024
caadd81
VulkanQuantizer for weight-only quantization on linear
nathanaelsee Aug 15, 2024
c4ccad3
added expand and gelu ops
zonglinpengmeta Aug 15, 2024
ba2ff63
Add QnnBackend dependency to the ET main test binary app in buck for …
derekxu Aug 15, 2024
2b9c4b2
[executorch] Migrate runtime/platform to new namespace
dbort Aug 15, 2024
9b0b8e7
Fix android perf periodic default spec
huydhn Aug 15, 2024
bf29bd6
[executorch] Migrate runtime/platform tests to new namespace
dbort Aug 15, 2024
39aeff9
Back out "Implement intermediate tensor logging"
cccclai Aug 15, 2024
2dcf0f3
cria runner
cccclai Aug 15, 2024
ae299cf
[executorch] Migrate runtime/core to new namespace
dbort Aug 15, 2024
3e4508a
Refactor delegation code
angelayi Aug 15, 2024
f25f135
[executorch] Migrate runtime/core tests to new namespace
dbort Aug 15, 2024
aead1d5
Revert "Add event tracing and ETDumps to executor_runner"
GregoryComer Aug 15, 2024
9a98abb
Back out "Back out "[executorch][PR] [Core ML] Implement intermediate…
cccclai Aug 15, 2024
add6e2e
Support mutable tensors in TensorParser
JacobSzwejbka Aug 16, 2024
5c9a00a
Make the Module non-movable.
shoumikhin Aug 16, 2024
09ab2f3
Update base for Update on "[executorch] Migrate runtime/executor test…
dbort Aug 16, 2024
718d145
Update on "[executorch] Migrate runtime/executor tests to new namespace"
dbort Aug 16, 2024
19 changes: 19 additions & 0 deletions .ci/scripts/build-qnn-sdk.sh
@@ -0,0 +1,19 @@
#!/bin/bash
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.

set -ex

build_qnn_backend() {
echo "Start building qnn backend."
export ANDROID_NDK_ROOT=/opt/ndk
export QNN_SDK_ROOT=/tmp/qnn/2.23.0.240531
export EXECUTORCH_ROOT="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")/.." && pwd)"

bash backends/qualcomm/scripts/build.sh --skip_aarch64 --job_number 2 --release
}

build_qnn_backend
29 changes: 29 additions & 0 deletions .ci/scripts/setup-qnn-deps.sh
@@ -0,0 +1,29 @@
#!/bin/bash
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.

set -ex

install_qnn() {
echo "Start installing qnn."
QNN_INSTALLATION_DIR=/tmp/qnn
mkdir -p "${QNN_INSTALLATION_DIR}"

curl -Lo /tmp/v2.23.0.24.06.24.zip "https://softwarecenter.qualcomm.com/api/download/software/qualcomm_neural_processing_sdk/v2.23.0.24.06.24.zip"
echo "Finishing downloading qnn sdk."
unzip -qo /tmp/v2.23.0.24.06.24.zip -d /tmp
echo "Finishing unzip qnn sdk."


# Print the content for manual verification
ls -lah "/tmp/qairt"
mv "/tmp/qairt"/* "${QNN_INSTALLATION_DIR}"
echo "Finishing installing qnn '${QNN_INSTALLATION_DIR}' ."

ls -lah "${QNN_INSTALLATION_DIR}"
}

install_qnn
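
Taken together, the two new scripts are meant to run in order: setup-qnn-deps.sh downloads and unpacks the QNN SDK into /tmp/qnn, and build-qnn-sdk.sh then builds the Qualcomm backend against that location. A minimal local sketch of that sequence, mirroring how the new CI job later in this PR invokes them (running from the repo root and having the Android NDK installed at /opt/ndk are assumptions carried over from the scripts, not requirements documented here):

# Fetch and unpack QNN SDK v2.23.0 into /tmp/qnn
PYTHON_EXECUTABLE=python bash .ci/scripts/setup-qnn-deps.sh
# Build the Qualcomm backend; expects ANDROID_NDK_ROOT=/opt/ndk and the SDK at /tmp/qnn/2.23.0.240531
PYTHON_EXECUTABLE=python bash .ci/scripts/build-qnn-sdk.sh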
24 changes: 24 additions & 0 deletions .ci/scripts/test_llama.sh
@@ -72,6 +72,25 @@ fi

echo "COREML option ${COREML}"

if [[ "${MODE}" =~ .*qnn.* ]]; then
QNN=ON
export EXECUTORCH_ROOT="$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")/.." && pwd)"
export QNN_SDK_ROOT=/tmp/qnn/2.23.0.240531
export LD_LIBRARY_PATH="${QNN_SDK_ROOT}/lib/x86_64-linux-clang"
export PYTHONPATH=".."
cp schema/program.fbs exir/_serialize/program.fbs
cp schema/scalar_type.fbs exir/_serialize/scalar_type.fbs
cp -f build-x86/backends/qualcomm/PyQnnManagerAdaptor.cpython-310-x86_64-linux-gnu.so backends/qualcomm/python
cp -f build-x86/backends/qualcomm/PyQnnWrapperAdaptor.cpython-310-x86_64-linux-gnu.so backends/qualcomm/python

else
QNN=OFF
QNN_SDK_ROOT=""
fi

echo "QNN option ${QNN}"
echo "QNN_SDK_ROOT: ${QNN_SDK_ROOT}"

if [[ -z "${BUCK:-}" ]]; then
BUCK=buck2
fi
@@ -96,6 +115,8 @@ cmake_install_executorch_libraries() {
-DEXECUTORCH_BUILD_XNNPACK="$XNNPACK" \
-DEXECUTORCH_BUILD_MPS="$MPS" \
-DEXECUTORCH_BUILD_COREML="$COREML" \
-DEXECUTORCH_BUILD_QNN="$QNN" \
-DQNN_SDK_ROOT="$QNN_SDK_ROOT" \
-DPYTHON_EXECUTABLE="$PYTHON_EXECUTABLE" \
-Bcmake-out .
cmake --build cmake-out -j9 --target install --config Debug
@@ -176,6 +197,9 @@ fi
if [[ "${COREML}" == "ON" ]]; then
EXPORT_ARGS="${EXPORT_ARGS} -kv -v --coreml --disable_dynamic_shape"
fi
if [[ "${QNN}" == "ON" ]]; then
EXPORT_ARGS="${EXPORT_ARGS} -kv -v --qnn --disable_dynamic_shape"
fi
# Add dynamically linked library location
$PYTHON_EXECUTABLE -m examples.models.llama2.export_llama ${EXPORT_ARGS}

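In qnn mode the export step ends up passing the same static-shape flags used by the other delegate modes, plus --qnn. A rough sketch of the effective command for this branch (the base EXPORT_ARGS, which point at the checkpoint and params files, are assembled earlier in the script and not shown in this hunk, so they stay symbolic here):

# MODE=qnn: sketch of the final export invocation assembled by test_llama.sh
python -m examples.models.llama2.export_llama ${EXPORT_ARGS} -kv -v --qnn --disable_dynamic_shape
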
5 changes: 2 additions & 3 deletions .github/workflows/android-perf.yml
@@ -34,7 +34,6 @@ on:
description: The test spec to drive the test on AWS devices
required: false
type: string
default: https://ossci-android.s3.amazonaws.com/executorch/android-llm-device-farm-test-spec.yml
workflow_call:
inputs:
models:
@@ -65,7 +64,6 @@ on:
description: The test spec to drive the test on AWS devices
required: false
type: string
default: https://ossci-android.s3.amazonaws.com/executorch/android-llm-device-farm-test-spec.yml

concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref_name }}-${{ github.ref_type == 'branch' && github.sha }}-${{ github.event_name == 'workflow_dispatch' }}-${{ github.event_name == 'schedule' }}
@@ -268,6 +266,7 @@ jobs:
# TODO: Hard code llm_demo_bpe for now in this job.
android-app-archive: https://gha-artifacts.s3.amazonaws.com/${{ github.repository }}/${{ github.run_id }}/artifact/llm_demo_bpe/app-debug.apk
android-test-archive: https://gha-artifacts.s3.amazonaws.com/${{ github.repository }}/${{ github.run_id }}/artifact/llm_demo_bpe/app-debug-androidTest.apk
test-spec: ${{ inputs.test_spec }}
# NB: Need to set the default spec here so that it works for periodic too
test-spec: ${{ inputs.test_spec || 'https://ossci-android.s3.amazonaws.com/executorch/android-llm-device-farm-test-spec.yml' }}
# Uploaded to S3 from the previous job
extra-data: https://gha-artifacts.s3.amazonaws.com/${{ github.repository }}/${{ github.run_id }}/artifact/${{ matrix.model }}_${{ matrix.delegate }}/model.zip
35 changes: 35 additions & 0 deletions .github/workflows/pull.yml
@@ -369,3 +369,38 @@ jobs:

# Run pytest with coverage
pytest -c /dev/null -v -n auto --cov=./ --cov-report=xml backends/arm/test


test-llama-runner-qnn-linux:
name: test-llama-runner-qnn-linux
uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
strategy:
matrix:
dtype: [fp32]
build-tool: [cmake]
mode: [qnn]
fail-fast: false
with:
runner: linux.2xlarge
docker-image: executorch-ubuntu-22.04-clang12-android
submodules: 'true'
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
timeout: 900
script: |
# The generic Linux job chooses to use base env, not the one setup by the image
CONDA_ENV=$(conda env list --json | jq -r ".envs | .[-1]")
conda activate "${CONDA_ENV}"

DTYPE=${{ matrix.dtype }}
BUILD_TOOL=${{ matrix.build-tool }}
MODE=${{ matrix.mode }}

PYTHON_EXECUTABLE=python bash .ci/scripts/setup-qnn-deps.sh
PYTHON_EXECUTABLE=python bash .ci/scripts/build-qnn-sdk.sh

# Setup executorch
PYTHON_EXECUTABLE=python bash .ci/scripts/setup-linux.sh buck2
# Install requirements for export_llama
PYTHON_EXECUTABLE=python bash examples/models/llama2/install_requirements.sh
# Test llama2
PYTHON_EXECUTABLE=python bash .ci/scripts/test_llama.sh stories110M.pt "${BUILD_TOOL}" "${DTYPE}" "${MODE}"
6 changes: 6 additions & 0 deletions CMakeLists.txt
@@ -179,6 +179,8 @@ option(EXECUTORCH_BUILD_GTESTS "Build googletest based test binaries" OFF)

option(EXECUTORCH_BUILD_MPS "Build the MPS backend" OFF)

option(EXECUTORCH_BUILD_NEURON "Build the backends/mediatek directory" OFF)

option(EXECUTORCH_BUILD_PYBIND "Build the Python Bindings" OFF)

option(EXECUTORCH_BUILD_QNN "Build the Qualcomm backend" OFF)
@@ -624,6 +626,10 @@ if(EXECUTORCH_BUILD_EXTENSION_MODULE)
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/extension/module)
endif()

if(EXECUTORCH_BUILD_NEURON)
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/backends/mediatek)
endif()

if(EXECUTORCH_BUILD_EXTENSION_RUNNER_UTIL)
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/extension/runner_util)
endif()
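With this option in place, the MediaTek backend under backends/mediatek is only added to the build when explicitly requested. A hypothetical configure-and-build invocation under that assumption (any toolchain files, SDK paths, or extra flags the MediaTek backend itself needs are outside this diff):

# Opt in to the MediaTek (NEURON) backend at configure time
cmake -DEXECUTORCH_BUILD_NEURON=ON -DPYTHON_EXECUTABLE="$PYTHON_EXECUTABLE" -Bcmake-out .
cmake --build cmake-out -j9 --target install --config Release
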
2 changes: 2 additions & 0 deletions CONTRIBUTING.md
@@ -23,6 +23,8 @@ We actively welcome your pull requests (PRs).
See the [testing section](#testing) for more information.
1. If you've changed APIs or added a new tool or feature, [update the
documentation](#updating-documentation).
1. If you added an experimental API or deprecated an existing API, follow the
[API Life Cycle and Deprecation Policy](/docs/source/api-life-cycle.md).
1. Make sure your code follows the [style guides](#coding-style) and passes the
[lint checks](#lintrunner).
1. If you haven't already, complete the [Contributor License Agreement ("CLA")](#contributor-license-agreement-cla).
1 change: 1 addition & 0 deletions LICENSE
@@ -6,6 +6,7 @@ Copyright (c) Meta Platforms, Inc. and affiliates.
Copyright 2023 Arm Limited and/or its affiliates.
Copyright (c) Qualcomm Innovation Center, Inc.
Copyright (c) 2023 Apple Inc.
Copyright (c) 2024 MediaTek Inc.

Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
8 changes: 8 additions & 0 deletions backends/apple/coreml/CMakeLists.txt
@@ -13,6 +13,11 @@ if(NOT EXECUTORCH_ROOT)
set(EXECUTORCH_ROOT ${CMAKE_CURRENT_SOURCE_DIR}/../../..)
endif()

if(EXECUTORCH_BUILD_SDK)
# protobuf requires frtti
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -frtti" )
endif()

option(COREML_BUILD_EXECUTOR_RUNNER "Build CoreML executor runner." OFF)

# inmemoryfs sources
@@ -59,6 +64,7 @@ set(SDK_SOURCES
runtime/sdk/ETCoreMLModelAnalyzer.mm
runtime/sdk/ETCoreMLModelStructurePath.mm
runtime/sdk/ETCoreMLOperationProfilingInfo.mm
runtime/sdk/ETCoreMLModelDebugInfo.mm
runtime/sdk/ETCoreMLModelDebugger.mm
runtime/sdk/ETCoreMLModelProfiler.mm
runtime/sdk/ETCoreMLPair.mm
@@ -141,6 +147,8 @@ if(EXECUTORCH_BUILD_SDK)
add_subdirectory(
${CMAKE_CURRENT_SOURCE_DIR}/third-party/coremltools/deps/protobuf/cmake
)

target_link_options_shared_lib(libprotobuf-lite)
target_link_libraries(coremldelegate PRIVATE libprotobuf-lite)
endif()

72 changes: 28 additions & 44 deletions backends/apple/coreml/runtime/delegate/ETCoreMLModelManager.mm
@@ -5,36 +5,37 @@
//
// Please refer to the license found in the LICENSE file in the root directory of the source tree.

#import <ETCoreMLAsset.h>
#import <ETCoreMLAssetManager.h>
#import <ETCoreMLDefaultModelExecutor.h>
#import <ETCoreMLLogging.h>
#import <ETCoreMLModel.h>
#import <ETCoreMLModelCompiler.h>
#import <ETCoreMLModelExecutor.h>
#import <ETCoreMLModelLoader.h>
#import <ETCoreMLModelManager.h>
#import <ETCoreMLStrings.h>
#import <MLModel_Prewarm.h>
#import <MLMultiArray_Copy.h>
#import "ETCoreMLAsset.h"
#import "ETCoreMLAssetManager.h"
#import "ETCoreMLDefaultModelExecutor.h"
#import "ETCoreMLLogging.h"
#import "ETCoreMLModel.h"
#import "ETCoreMLModelCompiler.h"
#import "ETCoreMLModelExecutor.h"
#import "ETCoreMLModelLoader.h"
#import "ETCoreMLModelManager.h"
#import "ETCoreMLStrings.h"
#import "MLModel_Prewarm.h"
#import "MLMultiArray_Copy.h"
#import <filesystem>
#import <inmemory_filesystem_utils.hpp>
#import "inmemory_filesystem_utils.hpp"
#import <iostream>
#import <memory>
#import <model_metadata.h>
#import <multiarray.h>
#import <objc_array_util.h>
#import "model_metadata.h"
#import "multiarray.h"
#import "objc_array_util.h"
#import <optional>
#import <os/lock.h>
#import <serde_json.h>
#import "serde_json.h"
#import <string>
#import <system_error>
#import <vector>

#if ET_EVENT_TRACER_ENABLED
#import <ETCoreMLModelAnalyzer.h>
#import <ETCoreMLModelStructurePath.h>
#import <objc_safe_cast.h>
#import "ETCoreMLModelAnalyzer.h"
#import "ETCoreMLModelDebugInfo.h"
#import "ETCoreMLModelStructurePath.h"
#import "objc_safe_cast.h"
#endif

namespace {
@@ -317,31 +318,14 @@ void add_compute_unit(std::string& identifier, MLComputeUnits compute_units) {
return [[ETCoreMLAsset alloc] initWithBackingAsset:std::move(backingAsset.value())];
}

NSDictionary<ETCoreMLModelStructurePath *, NSString *> * _Nullable get_operation_path_to_symbol_name_map(const inmemoryfs::InMemoryFileSystem *inMemoryFS,
NSError * __autoreleasing *error) {
ETCoreMLModelDebugInfo * _Nullable get_model_debug_info(const inmemoryfs::InMemoryFileSystem *inMemoryFS,
NSError * __autoreleasing *error) {
NSData *file_data = get_file_data(inMemoryFS, ETCoreMLStrings.debugInfoFileRelativePath);
if (!file_data) {
return nil;
}

id object = [NSJSONSerialization JSONObjectWithData:file_data options:(NSJSONReadingOptions)0 error:error];
if (!object) {
return nil;
}

NSDictionary<NSString *, id> *json_dict = SAFE_CAST(object, NSDictionary);
NSMutableDictionary<ETCoreMLModelStructurePath *, NSString *> *result = [NSMutableDictionary dictionaryWithCapacity:json_dict.count];
NSDictionary<NSString *, NSArray<id> *> *debug_symbol_to_operation_path_map = SAFE_CAST(json_dict[ETCoreMLStrings.debugSymbolToOperationPathKeyName], NSDictionary);
for (NSString *symbol_name in debug_symbol_to_operation_path_map) {
NSArray<NSDictionary<NSString *, id> *> *components = SAFE_CAST(debug_symbol_to_operation_path_map[symbol_name], NSArray);
if (components.count == 0) {
continue;
}
ETCoreMLModelStructurePath *path = [[ETCoreMLModelStructurePath alloc] initWithComponents:components];
result[path] = symbol_name;
}

return result;

return [ETCoreMLModelDebugInfo modelDebugInfoFromData:file_data error:error];
}

#endif
@@ -490,16 +474,16 @@ - (nullable NSURL *)compiledModelURLWithIdentifier:(NSString *)identifier
}

NSError *localError = nil;
NSDictionary<ETCoreMLModelStructurePath *, NSString *> *operation_path_to_symbol_name_map = get_operation_path_to_symbol_name_map(inMemoryFS,
&localError);
ETCoreMLModelDebugInfo *debug_info = get_model_debug_info(inMemoryFS, &localError);
if (localError) {
ETCoreMLLogError(localError, "Failed to parse debug info file");
}


return [[ETCoreMLModelAnalyzer alloc] initWithCompiledModelAsset:compiledModelAsset
modelAsset:modelAsset
modelDebugInfo:debug_info
metadata:metadata
operationPathToDebugSymbolMap:operation_path_to_symbol_name_map
configuration:configuration
assetManager:self.assetManager
error:error];
2 changes: 2 additions & 0 deletions backends/apple/coreml/runtime/delegate/ETCoreMLStrings.h
@@ -66,6 +66,8 @@ NS_ASSUME_NONNULL_BEGIN
@property (class, copy, readonly, nonatomic, nullable) NSString* debugInfoFileRelativePath;
/// The debug symbol to operation path key name.
@property (class, copy, readonly, nonatomic, nullable) NSString* debugSymbolToOperationPathKeyName;
/// The debug symbol to handles key name.
@property (class, copy, readonly, nonatomic, nullable) NSString* debugSymbolToHandlesKeyName;

@end

5 changes: 5 additions & 0 deletions backends/apple/coreml/runtime/delegate/ETCoreMLStrings.mm
@@ -95,6 +95,11 @@ + (NSString *)debugSymbolToOperationPathKeyName {
return ETCoreMLDebugSymbolToOperationPathKeyName;
}

+ (NSString *)debugSymbolToHandlesKeyName {
static NSString * const ETCoreMLDebugSymbolToHandlesKeyName = @"debugSymbolToHandles";
return ETCoreMLDebugSymbolToHandlesKeyName;
}

+ (nullable NSString *)assetsDirectoryPath {
static dispatch_once_t onceToken;
static NSString *result = nil;
@@ -124,7 +124,8 @@ ModelLoggingOptions get_logging_options(BackendExecutionContext& context) {
auto event_tracer = context.event_tracer();
if (event_tracer) {
options.log_profiling_info = true;
options.log_intermediate_tensors = event_tracer->intermediate_outputs_logging_status();
auto debug_level = event_tracer->event_tracer_debug_level();
options.log_intermediate_tensors = (debug_level >= EventTracerDebugLogLevel::kIntermediateOutputs);
}

return options;
4 changes: 2 additions & 2 deletions backends/apple/coreml/runtime/delegate/model_event_logger.h
@@ -34,8 +34,8 @@ class ModelEventLogger {
///
/// @param op_path_to_value_map A dictionary with the operation path as the key and the operation's value as the
/// value.
/// @param op_path_to_debug_symbol_name_map A dictionary with the operation path as the key and the symbol name as
/// the value. The symbol name is the delegate handle.
/// @param op_path_to_debug_symbol_name_map A dictionary with the operation path as the key and the debug symbol
/// name as the value.
virtual void log_intermediate_tensors(
NSDictionary<ETCoreMLModelStructurePath*, MLMultiArray*>* op_path_to_value_map,
NSDictionary<ETCoreMLModelStructurePath*, NSString*>* op_path_to_debug_symbol_name_map) const noexcept = 0;