Use rapids-logger to generate the cuml logger #6162

Merged Jan 3, 2025 (36 commits)

Commits
ae34b8f
Remove old loggers
vyasr Nov 18, 2024
0db46a1
Switch over to new logger and update all code to use the new enums (s…
vyasr Nov 18, 2024
53262f7
Stop changing pattern unnecessarily
vyasr Nov 18, 2024
a007134
Get C++ compiling
vyasr Nov 19, 2024
91c8e83
Switch to using new repo
vyasr Dec 3, 2024
f4fad02
Update to use the new functionality
vyasr Dec 6, 2024
c898c29
Get Python code compiling
vyasr Dec 6, 2024
4a3a18c
Ensure new level_enum is used everywhere it should be to specify log …
vyasr Dec 6, 2024
68ad0bd
Ensure that verbosity is consistently set using the level_enum
vyasr Dec 7, 2024
ee7e376
Fix some docstrings
vyasr Dec 7, 2024
81ba843
Fix one logger test
vyasr Dec 7, 2024
4e63f7d
Fix compilation of cuml-cpu
vyasr Dec 9, 2024
e6a9898
Enable flushing
vyasr Dec 10, 2024
69e2bec
Revert all pure Python changes
vyasr Dec 11, 2024
34d79da
Revert changes to public APIs and docstrings in Cython
vyasr Dec 11, 2024
fa77a86
Revert remaining changes and update base class for verbosity compatib…
vyasr Dec 12, 2024
0826775
Fix inversion of log levels
vyasr Dec 12, 2024
bcc766c
Rewrite verbosity on access instead of on save to appease sklearn checks
vyasr Dec 12, 2024
b9ca57d
Appease linter
vyasr Dec 12, 2024
0e7bbd2
Fix typing
vyasr Dec 13, 2024
e99cbcf
Turn off shallow clones
vyasr Dec 13, 2024
74c5f36
Also set the flush for the C++ test
vyasr Dec 14, 2024
ed428be
Fix setting of default logging level
vyasr Dec 17, 2024
1255885
Fix C++ flushing test
vyasr Dec 17, 2024
9a2fd45
Merge remote-tracking branch 'upstream/branch-25.02' into feat/logger
vyasr Dec 17, 2024
9ff2c0a
Try using a custom descriptor
vyasr Dec 18, 2024
8d8561d
Fix behavior of VerboseDescriptor
wphicks Dec 18, 2024
62895f0
Correct verbose handling in set_params
wphicks Dec 19, 2024
ae32097
Fix a couple of bugs in umap behavior
vyasr Dec 30, 2024
fab6638
Merge remote-tracking branch 'upstream/branch-25.02' into feat/logger
vyasr Dec 30, 2024
0e4a2e9
Merge remote-tracking branch 'upstream/branch-25.02' into feat/logger
vyasr Dec 31, 2024
d238f1e
Fix logger call to use commit hash
vyasr Dec 31, 2024
2cbbaca
Fix repo
vyasr Dec 31, 2024
0ee209f
Merge branch 'branch-25.02' into feat/logger
vyasr Jan 2, 2025
5b66f08
style
vyasr Jan 2, 2025
64f80f9
Fix lbfgs test verifying log level
vyasr Jan 2, 2025
28 changes: 25 additions & 3 deletions cpp/CMakeLists.txt
@@ -1,5 +1,5 @@
#=============================================================================
# Copyright (c) 2018-2024, NVIDIA CORPORATION.
# Copyright (c) 2018-2025, NVIDIA CORPORATION.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -109,6 +109,17 @@ set(RMM_LOGGING_LEVEL "INFO" CACHE STRING "Choose the logging level.")
set_property(CACHE RMM_LOGGING_LEVEL PROPERTY STRINGS "TRACE" "DEBUG" "INFO" "WARN" "ERROR" "CRITICAL" "OFF")
message(VERBOSE "CUML_CPP: RMM_LOGGING_LEVEL = '${RMM_LOGGING_LEVEL}'.")

# Set logging level
set(LIBCUML_LOGGING_LEVEL
"DEBUG"
CACHE STRING "Choose the logging level."
)
set_property(
CACHE LIBCUML_LOGGING_LEVEL PROPERTY STRINGS "TRACE" "DEBUG" "INFO" "WARN" "ERROR" "CRITICAL"
"OFF"
)
message(VERBOSE "CUML: LIBCUML_LOGGING_LEVEL = '${LIBCUML_LOGGING_LEVEL}'.")

if(BUILD_CUML_TESTS OR BUILD_PRIMS_TESTS)
# Needed because GoogleBenchmark changes the state of FindThreads.cmake, causing subsequent runs to
# have different values for the `Threads::Threads` target. Setting this flag ensures
@@ -220,6 +231,15 @@ endif()
rapids_cpm_init()
rapids_cmake_install_lib_dir(lib_dir)

# Not using rapids-cmake since we never want to find, always download.
CPMAddPackage(
NAME rapids_logger GITHUB_REPOSITORY rapidsai/rapids-logger GIT_SHALLOW FALSE GIT_TAG
4df3ee70c6746fd1b6c0dc14209dae2e2d4378c6 VERSION 4df3ee70c6746fd1b6c0dc14209dae2e2d4378c6
)
rapids_make_logger(
ML EXPORT_SET cuml-exports LOGGER_HEADER_DIR include/cuml/common/ LOGGER_MACRO_PREFIX CUML LOGGER_TARGET cuml_logger
)
Comment on lines +235 to +241

Member: I presume this is something that happens in all repos that use rapids-logger. I wonder if a function in rapids-cmake would be a better idea than using CPMAddPackage directly?

Contributor Author: Yes, absolutely. Every repository will have to make its own rapids_make_logger call to provide the right arguments, but I plan to replace the CPMAddPackage call with a rapids-cmake call. I will do that a bit later, once I'm ready to synchronize all the repos: right now they use different commit hashes, and I'll be adding a couple more features to the trunk of rapids-logger before synchronizing the repos that use it.

Contributor Author: Starting to address this in rapidsai/rapids-cmake#737.

if(BUILD_CUML_TESTS OR BUILD_PRIMS_TESTS)
find_package(Threads)
endif()
@@ -291,8 +311,7 @@ if(BUILD_CUML_CPP_LIBRARY)

# single GPU components
# common components
add_library(${CUML_CPP_TARGET}
src/common/logger.cpp)
add_library(${CUML_CPP_TARGET})
if (CUML_ENABLE_GPU)
target_compile_definitions(${CUML_CPP_TARGET} PUBLIC CUML_ENABLE_GPU)
endif()
@@ -564,6 +583,7 @@ if(BUILD_CUML_CPP_LIBRARY)
PRIVATE "$<$<COMPILE_LANGUAGE:CXX>:${CUML_CXX_FLAGS}>"
"$<$<COMPILE_LANGUAGE:CUDA>:${CUML_CUDA_FLAGS}>"
)
target_compile_definitions(${CUML_CPP_TARGET} PUBLIC "CUML_LOG_ACTIVE_LEVEL=CUML_LOG_LEVEL_${LIBCUML_LOGGING_LEVEL}")

target_include_directories(${CUML_CPP_TARGET}
PUBLIC
@@ -604,6 +624,7 @@ if(BUILD_CUML_CPP_LIBRARY)
raft::raft
rmm::rmm_logger_impl
raft::raft_logger_impl
cuml_logger_impl
$<TARGET_NAME_IF_EXISTS:GPUTreeShap::GPUTreeShap>
$<$<BOOL:${LINK_CUFFT}>:CUDA::cufft${_ctk_fft_static_suffix}>
${TREELITE_LIBS}
@@ -630,6 +651,7 @@ if(BUILD_CUML_CPP_LIBRARY)
target_link_libraries(${CUML_CPP_TARGET}
PUBLIC rmm::rmm rmm::rmm_logger ${CUVS_LIB}
${_cuml_cpp_public_libs}
cuml_logger
PRIVATE ${_cuml_cpp_private_libs}
)

7 changes: 4 additions & 3 deletions cpp/bench/sg/svc.cu
@@ -1,5 +1,5 @@
/*
* Copyright (c) 2020-2024, NVIDIA CORPORATION.
* Copyright (c) 2020-2025, NVIDIA CORPORATION.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -100,8 +100,9 @@ std::vector<SvcParams<D>> getInputs()
p.blobs.seed = 12345ULL;

// SvmParameter{C, cache_size, max_iter, nochange_steps, tol, verbosity})
p.svm_param = ML::SVM::SvmParameter{1, 200, 100, 100, 1e-3, CUML_LEVEL_INFO, 0, ML::SVM::C_SVC};
p.model = ML::SVM::SvmModel<D>{0, 0, 0, nullptr, {}, nullptr, 0, nullptr};
p.svm_param =
ML::SVM::SvmParameter{1, 200, 100, 100, 1e-3, ML::level_enum::info, 0, ML::SVM::C_SVC};
p.model = ML::SVM::SvmModel<D>{0, 0, 0, nullptr, {}, nullptr, 0, nullptr};

std::vector<Triplets> rowcols = {{50000, 2, 2}, {2048, 100000, 2}, {50000, 1000, 2}};

4 changes: 2 additions & 2 deletions cpp/bench/sg/svr.cu
@@ -1,5 +1,5 @@
/*
* Copyright (c) 2020-2024, NVIDIA CORPORATION.
* Copyright (c) 2020-2025, NVIDIA CORPORATION.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -102,7 +102,7 @@ std::vector<SvrParams<D>> getInputs()
// SvmParameter{C, cache_size, max_iter, nochange_steps, tol, verbosity,
// epsilon, svmType})
p.svm_param =
ML::SVM::SvmParameter{1, 200, 200, 100, 1e-3, CUML_LEVEL_INFO, 0.1, ML::SVM::EPSILON_SVR};
ML::SVM::SvmParameter{1, 200, 200, 100, 1e-3, ML::level_enum::info, 0.1, ML::SVM::EPSILON_SVR};
p.model = new ML::SVM::SvmModel<D>{0, 0, 0, 0};

std::vector<Triplets> rowcols = {{50000, 2, 2}, {1024, 10000, 10}, {3000, 200, 200}};
4 changes: 2 additions & 2 deletions cpp/examples/dbscan/dbscan_example.cpp
@@ -1,5 +1,5 @@
/*
* Copyright (c) 2019-2024, NVIDIA CORPORATION.
* Copyright (c) 2019-2025, NVIDIA CORPORATION.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -209,7 +209,7 @@ int main(int argc, char* argv[])
nullptr,
max_bytes_per_batch,
ML::Dbscan::EpsNnMethod::BRUTE_FORCE,
false);
ML::level_enum::off);
CUDA_RT_CALL(cudaMemcpyAsync(
h_labels.data(), d_labels, nRows * sizeof(int), cudaMemcpyDeviceToHost, stream));
CUDA_RT_CALL(cudaStreamSynchronize(stream));
12 changes: 6 additions & 6 deletions cpp/include/cuml/cluster/dbscan.hpp
@@ -1,5 +1,5 @@
/*
* Copyright (c) 2018-2024, NVIDIA CORPORATION.
* Copyright (c) 2018-2025, NVIDIA CORPORATION.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,7 +16,7 @@

#pragma once

#include <cuml/common/log_levels.hpp>
#include <cuml/common/logger.hpp>

#include <cuvs/distance/distance.hpp>

@@ -73,7 +73,7 @@ void fit(const raft::handle_t& handle,
float* sample_weight = nullptr,
size_t max_bytes_per_batch = 0,
EpsNnMethod eps_nn_method = BRUTE_FORCE,
int verbosity = CUML_LEVEL_INFO,
level_enum verbosity = ML::level_enum::info,
bool opg = false);
void fit(const raft::handle_t& handle,
double* input,
@@ -87,7 +87,7 @@ void fit(const raft::handle_t& handle,
double* sample_weight = nullptr,
size_t max_bytes_per_batch = 0,
EpsNnMethod eps_nn_method = BRUTE_FORCE,
int verbosity = CUML_LEVEL_INFO,
level_enum verbosity = ML::level_enum::info,
bool opg = false);

void fit(const raft::handle_t& handle,
@@ -102,7 +102,7 @@ void fit(const raft::handle_t& handle,
float* sample_weight = nullptr,
size_t max_bytes_per_batch = 0,
EpsNnMethod eps_nn_method = BRUTE_FORCE,
int verbosity = CUML_LEVEL_INFO,
level_enum verbosity = ML::level_enum::info,
bool opg = false);
void fit(const raft::handle_t& handle,
double* input,
@@ -116,7 +116,7 @@ void fit(const raft::handle_t& handle,
double* sample_weight = nullptr,
size_t max_bytes_per_batch = 0,
EpsNnMethod eps_nn_method = BRUTE_FORCE,
int verbosity = CUML_LEVEL_INFO,
level_enum verbosity = ML::level_enum::info,
bool opg = false);

/** @} */
4 changes: 1 addition & 3 deletions cpp/include/cuml/cluster/kmeans.hpp
@@ -1,5 +1,5 @@
/*
* Copyright (c) 2019-2024, NVIDIA CORPORATION.
* Copyright (c) 2019-2025, NVIDIA CORPORATION.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,8 +16,6 @@

#pragma once

#include <cuml/common/log_levels.hpp>

#include <cuvs/cluster/kmeans.hpp>

namespace raft {
37 changes: 0 additions & 37 deletions cpp/include/cuml/common/log_levels.hpp

This file was deleted.
