This is a side question that popped up while investigating issue #675. It concerns the 'real' memory consumption of the nsmgr container and how it relates to the memory usage of its single process.
When I checked the memory usage of the nsmgr process, different tools showed different values.
I did not rely on the 'ps' command output, but checked 'smaps' in the /proc directory (the same data the pmap command uses). Another source was Go's profiling (pprof) tool, which shows much lower memory used by the nsmgr process.
Pmap and smaps showed 29524 kilobytes. The pprof tool gives the following output:
> go tool pprof -top memprofile-15:20:53-4287106497 | head -4
File: nsmgr
Type: inuse_space
Time: May 7, 2024 at 5:20pm (CEST)
Showing nodes accounting for 2581.36kB, 100% of 2581.36kB total
So the tools disagree: pmap reports an RSS of 29524 kilobytes, 'ps' reports 28904 kilobytes, and pprof accounts for only 2581.36 kB of in-use space.
The only process running in this container is the nsmgr process. The gRPC probes are removed in this deployment.
kubectl top reported 16 MiB, which is almost the same as what systemctl and the memory.current file show:
Later, the RSS values shown by the ps and pmap commands indicated much lower memory consumption than the cgroup's memory.current and 'kubectl top'. What could cause the difference? What else can count toward container memory usage besides the nsmgr process (CRI, kubelet)?