RESOURCE_EXHAUSTED: Received message larger than max (1752460652 vs 4194304) #240
Comments
Hi @wozjac, thanks for reporting. I've experienced this as well and am currently clarifying the issue together with the colleagues from Cloud Logging. Best,
Hi @wozjac, I was able to resolve my issue. However, … Best,
Hi @sjvans, thanks for checking. We had to disable the plugin, as all our logs are flooded with this message. Is there any switch we can use to track what causes such large data input, and why? Best Regards
Hi @wozjac, it shouldn't be the metrics but the traces. Which version of grpc-js are you using? There is an issue report for versions >= 1.10.9: grpc/grpc-node#2822. Best,
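If the grpc-js regression tracked in grpc/grpc-node#2822 is the culprit, one possible workaround (a sketch, assuming an npm-based project) is to check which version actually gets resolved with `npm ls @grpc/grpc-js` and, if it is >= 1.10.9, pin it to an earlier release via an npm override in package.json:

```json
{
  "overrides": {
    "@grpc/grpc-js": "1.10.8"
  }
}
```

After adding the override, run `npm install` again so the lockfile picks up the pinned version. Note that pinning a transitive dependency is a stopgap, not a fix; it should be removed once the upstream issue is resolved.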
To me this looks like a client-library issue that is independent of SAP Cloud Logging; it happens whenever sending is configured, regardless of the destination. I am not even sure whether the request is actually attempted or whether it fails before being sent. Even if everything were working as designed on the CAP side, 4 megabytes is a common upper limit for single requests, and we would not change it for SAP Cloud Logging. Good luck fixing the issue. Best, Jürgen
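For context on where the 4194304 figure in the error comes from: it is gRPC's default maximum receive message size of 4 MiB. A client can raise its own limit via the `grpc.max_receive_message_length` channel option, but as noted above, the server side may enforce its own cap independently. A minimal sketch of the arithmetic:

```javascript
// 4194304 (the second number in the error message) is gRPC's
// default maximum receive message size: 4 MiB.
const DEFAULT_MAX_RECEIVE = 4 * 1024 * 1024;
console.log(DEFAULT_MAX_RECEIVE); // 4194304
```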
Hi,
we receive the following error in the log:
{"stack":"Error: 8 RESOURCE_EXHAUSTED: Received message larger than max (1752460652 vs 4194304)\n at callErrorFromStatus (/home/vcap/deps/0/node_modules/@grpc/grpc-js/build/src/call.js:31:19)\n at Object.onReceiveStatus (/home/vcap/deps/0/node_modules/@grpc/grpc-js/build/src/client.js:193:76)\n at Object.onReceiveStatus (/home/vcap/deps/0/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:360:141)\n at Object.onReceiveStatus (/home/vcap/deps/0/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:323:181)\n at /home/vcap/deps/0/node_modules/@grpc/grpc-js/build/src/resolving-call.js:129:78\n at process.processTicksAndRejections (node:internal/process/task_queues:77:11)\nfor call at\n at ServiceClientImpl.makeUnaryRequest (/home/vcap/deps/0/node_modules/@grpc/grpc-js/build/src/client.js:161:32)\n at ServiceClientImpl.export (/home/vcap/deps/0/node_modules/@grpc/grpc-js/build/src/make-client.js:105:19)\n at /home/vcap/deps/0/node_modules/@opentelemetry/otlp-grpc-exporter-base/build/src/grpc-exporter-transport.js:98:32\n at new Promise ()\n at GrpcExporterTransport.send (/home/vcap/deps/0/node_modules/@opentelemetry/otlp-grpc-exporter-base/build/src/grpc-exporter-transport.js:87:16)\n at OTLPTraceExporter.send (/home/vcap/deps/0/node_modules/@opentelemetry/otlp-grpc-exporter-base/build/src/OTLPGRPCExporterNodeBase.js:87:14)\n at /home/vcap/deps/0/node_modules/@opentelemetry/otlp-exporter-base/build/src/OTLPExporterBase.js:77:22\n at new Promise ()\n at OTLPTraceExporter._export (/home/vcap/deps/0/node_modules/@opentelemetry/otlp-exporter-base/build/src/OTLPExporterBase.js:74:16)\n at OTLPTraceExporter.export (/home/vcap/deps/0/node_modules/@opentelemetry/otlp-exporter-base/build/src/OTLPExporterBase.js:65:14)","message":"8 RESOURCE_EXHAUSTED: Received message larger than max (1752460652 vs 4194304)","code":"8","details":"Received message larger than max (1752460652 vs 4194304)","metadata":"[object Object]","name":"Error"}
The configuration is:
We don't use any custom metrics, just the default setup.
What is interesting is that this happens in only 2 out of 3 of our subaccounts.
How can we track down the cause?
Best Regards,
Jacek