memory hogging #214
-
Describe the bug
Reproduction steps
-
Hello, we would really appreciate it if you would explain the new test a bit more, and hopefully even provide a fix for it.
-
I did some investigation, but it was disappointing as I couldn't pinpoint the root cause. My apologies; my expertise lies with C, not Go. In my view, a basic parser shouldn't exhaust all the RAM simply to parse string data. If this were a server, I could have shown you a magic trick, but it's a client.
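To make that failure mode concrete, here is a minimal sketch, not this library's actual code: `readLongString` and the wire layout are made up for illustration. It shows how a naive length-prefixed parser can be driven into huge allocations by a single malformed length field.

```go
package sketch

import (
	"encoding/binary"
	"io"
)

// readLongString is a hypothetical, naive length-prefixed string reader.
// The 4-byte length comes straight from untrusted input, so a single
// malformed frame can demand an allocation of up to ~4 GiB before a
// single payload byte has been read.
func readLongString(r io.Reader) (string, error) {
	var size uint32
	if err := binary.Read(r, binary.BigEndian, &size); err != nil {
		return "", err
	}
	buf := make([]byte, size) // size is attacker-controlled
	if _, err := io.ReadFull(r, buf); err != nil {
		return "", err
	}
	return string(buf), nil
}
```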
-
Sure, but how would invalid data make it to the parser in a real-world client application?
-
While it might not be a real-world problem, it's an issue for me because the high memory usage prevents me from conducting proper fuzz testing.

Test code:

```go
// Lives in the library's own package (it uses the unexported reader type);
// the test file also needs the "bytes" and "testing" imports.
func TestMemoryHog(t *testing.T) {
	input_data := []byte("\x01000000\x00(\x00\x1e\x00\x00\x00\x00\x04\x00\x0400\xf3q\x19\x19\x19\x19")
	for i := 0; i < 5000; i++ {
		r := reader{bytes.NewReader(input_data)}
		_, _ = r.ReadFrame()
	}
}
```

Run:

Problem: During the execution of this test, the memory usage was approximately
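For reference, one way to observe the allocations (assuming the test above is placed in the library's own package) is Go's built-in memory profiler; the exact figures will vary by machine:

```sh
go test -run TestMemoryHog -memprofile mem.out .
go tool pprof -top mem.out
```

The top entries should point at the allocation sites that grow with the malformed input.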
-
Just out of curiosity, why are you doing this testing on this library? Research?
#215
If anything, the new fuzz test shows that this library can be made to allocate huge amounts of memory when fed malformed input data. I tried limiting one allocation to 65 KiB, and that did help a bit with the fuzz test, but some workers' memory use was still high (a rough sketch of that capping idea follows below).
This is an interesting problem and one we may want to look into when we have time, but it will be low priority.
cc @Zerpet @Gsantomaggio
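A possible shape for that cap, as a sketch only: the names and the exact limit here are assumptions for illustration, not the library's actual implementation. The idea is to validate any declared size against a bound before allocating for it.

```go
package sketch

import (
	"encoding/binary"
	"fmt"
	"io"
)

// maxPayloadAlloc is an illustrative cap (roughly the 65 KiB mentioned
// above) on how much memory one declared payload size may trigger.
const maxPayloadAlloc = 65535

// readFramePayload is a hypothetical helper: it rejects a declared
// payload size above the cap before allocating the buffer.
func readFramePayload(r io.Reader) ([]byte, error) {
	var size uint32
	if err := binary.Read(r, binary.BigEndian, &size); err != nil {
		return nil, err
	}
	if size > maxPayloadAlloc {
		return nil, fmt.Errorf("declared payload size %d exceeds cap %d", size, maxPayloadAlloc)
	}
	buf := make([]byte, size)
	if _, err := io.ReadFull(r, buf); err != nil {
		return nil, err
	}
	return buf, nil
}
```

With a check like this, a malformed frame fails fast with an error instead of forcing a multi-gigabyte allocation, which should also keep the fuzz workers' memory use bounded.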