Add ability to specify maximum memory a pony program is allowed to use #3282
base: main
Conversation
src/libponyrt/mem/heap.c
Outdated
@@ -524,7 +525,12 @@ void ponyint_heap_used(heap_t* heap, size_t size)

bool ponyint_heap_startgc(heap_t* heap)
{
  if(heap->used <= heap->next_gc)
  // don't run GC if there's no heap used
  if(heap->used == 0)
is this really needed?
yes and no. the check for `heap->used == 0` needs to be done for when `ponyint_pool_mem_pressure() == true`. I put it as a separate thing but I can instead add it later on as part of the `ponyint_pool_mem_pressure()` condition.
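For illustration only, a minimal sketch of the "fold it into the condition" alternative mentioned above (not the PR's actual code); it only treats memory pressure as a reason to GC when the actor's heap actually has something allocated:

```c
// Equivalent to the separate heap->used == 0 guard in the diff above:
// skip GC unless the threshold is crossed, or there is memory pressure
// and this actor's heap is non-empty.
if((heap->used <= heap->next_gc) &&
  !(ponyint_pool_mem_pressure() && (heap->used != 0)))
  return false;
```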
Ah, right that makes sense.
If this were to remain as is, I think an explanation of why it's needed would be important. This also makes me favor the "do this as capping next_gc" approach I put forth.
src/libponyrt/mem/heap.c
Outdated
return false;

// don't run GC if haven't reached next threshold and aren't under memory pressure
if((heap->used <= heap->next_gc) && !ponyint_pool_mem_pressure())
what if instead of tackling it in this fashion, we capped `next_gc`? this might be a bad idea as it is overloading the meaning of `next_gc`; otoh, it literally is what we are doing here.
I'm not sure I follow. How exactly would we cap `next_gc` in relation to what `ponyint_pool_mem_pressure()` checks for?
instead of using "pool_mem_pressure" when setting a new `next_gc` value, don't allow it to be set to more than the value which currently triggers true from `ponyint_pool_mem_pressure`.
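A rough sketch of the capping idea as described above (illustrative only; `mem_pressure_threshold()` is a hypothetical stand-in for whatever value currently makes `ponyint_pool_mem_pressure()` return true, and this is not the PR's actual code):

```c
// Hypothetical sketch of "cap next_gc".
static void heap_set_next_gc(heap_t* heap, size_t proposed)
{
  // never schedule the next GC beyond the point at which
  // ponyint_pool_mem_pressure() would start returning true
  size_t cap = mem_pressure_threshold();
  heap->next_gc = (proposed < cap) ? proposed : cap;
}
```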
Ah, okay. Thanks for clarifying. I'm not sure that accomplishes the same thing. It is very likely that some actors will get their `next_gc` set prior to other actors allocating more memory, so that their `next_gc` would end up above the value that triggers true from `ponyint_pool_mem_pressure` for those actors.
you are right. my idea makes no sense. i blame the migraine.
migraine no good. 8*/
{
// Make GC happen if pony has allocated more than 95% of the memory allowed
// and pool_block_header cache is less than 5% of memory allowed
// TODO: this currently does not account for any free blocks of memory tracked
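For illustration, a rough sketch of the check those comments describe (`mem_allocated`, `pool_block_cache` and `max_mem` are hypothetical names, not the PR's actual identifiers):

```c
// Hypothetical sketch of the condition described in the comments above.
static bool near_memory_limit(size_t mem_allocated, size_t pool_block_cache,
  size_t max_mem)
{
  // more than 95% of the allowed memory has been allocated ...
  bool mostly_allocated = mem_allocated > ((max_mem / 100) * 95);

  // ... and the pool_block_header cache holds less than 5% of it
  bool little_cached = pool_block_cache < ((max_mem / 100) * 5);

  // note: free blocks of memory tracked elsewhere are not accounted for
  return mostly_allocated && little_cached;
}
```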
i don't think this should be a TODO.
I think a note that it doesn't take the free blocks into account is fine.
TODO says "we should change" which I don't believe is the purpose of this.
I think noting how it works is fine.
I can change it to a note. The reason I put it as a TODO is because of the "maybe it should?" part. I think the TODO is the open question of whether those blocks should be accounted for or not.
I think we should make a decision on whether we want to account for free blocks as part of this PR, if that is what you mean.
Yes, if we can make a decision as part of the PR that would be great. If not, I would want the TODO as a reminder that the decision needs to be made.
I'm not sure about this. Reason: the OS provides ways to set a max for memory usage for a process. Why "duplicate" that in the Pony runtime? I do like "once we hit X amount of memory, GC", so it's more like an attempt to stay within a limit and less like an OOM killer or OS functionality for limiting memory usage. OTOH, the JVM does have a max heap size and will exit with an out of memory error if it is exceeded, so it's not totally unheard of, but I believe that was introduced before OSes had the level of resource control that exists now.
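For reference, a minimal sketch of the OS-level approach referred to above, using `setrlimit` to cap a process's address space (values are illustrative; this is not part of the PR):

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
  // cap the process's virtual address space at 100 MB; allocations
  // beyond the limit will then fail (e.g. mmap returns MAP_FAILED)
  struct rlimit lim = { .rlim_cur = 100 * 1024 * 1024,
                        .rlim_max = 100 * 1024 * 1024 };

  if(setrlimit(RLIMIT_AS, &lim) != 0)
    perror("setrlimit");

  // ... rest of the program runs under the limit ...
  return 0;
}
```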
That's a valid point. I added it because it was something I felt would be useful. I have no issues with not including this if the decision is that the OS functionality is sufficient and shouldn't be duplicated in Pony.
I have no strong opinions either way @dipinhora. I think it would be good to collect feedback from others. I would lean (slightly) towards "don't have the abort functionality".
I like the idea of giving the user a way to specify a point where we increase the GC activity. I don't know if we need to tie it to killing the process. I'd be in favor of removing the process killing part and leaving that as something that can be handled by limiting the resources at the OS level or container level. If we need to add the killing back in for systems that don't support specifying resource limits then maybe we could do that as a separate flag.
I don't have a lot of experience with the `setrlimit` family. One thing that's missing (to my naïve eyes) is a limit on the number of anonymous `mmap` regions.
@slfritchie i don't either. not sure if there was a specific concern you wanted to raise but i wasn't able to understand it. can you please clarify?
In the weekly Pony meeting today, @SeanTAllen wondered if existing setrlimit and perhaps other container'ish resource limit mechanisms were sufficient to limit memory growth in the same way that the JVM's heap size limit is (mostly) an effective ceiling. The anonymous mmap'ed regions are a thing that AFAIK don't have an OS-enforced limit, so ... my guess for answering Sean would be "no, they aren't sufficient".
Thanks for clarifying. I'm not sure how anonymous mmap and RSS interact in detail, but my naive understanding is that pages only get counted as part of RSS when they get loaded for read/write; before being loaded they are only counted as part of the virtual size.
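A small illustration of that point (assuming Linux behaviour; unrelated to the PR's code): an anonymous mapping grows the virtual size immediately, but pages only count toward RSS once they are touched.

```c
#define _DEFAULT_SOURCE
#include <string.h>
#include <sys/mman.h>

int main(void)
{
  size_t len = (size_t)1 << 30; // reserve 1 GiB of address space

  // virtual size grows by ~1 GiB here; RSS is mostly unchanged
  char* p = mmap(NULL, len, PROT_READ | PROT_WRITE,
    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

  if(p == MAP_FAILED)
    return 1;

  // touching half of the pages faults them in, growing RSS by ~512 MiB
  memset(p, 1, len / 2);

  munmap(p, len);
  return 0;
}
```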
Two points:
@sylvanc Minor clarification:
Yes, this does add overhead for all programs regardless of whether they utilize the feature or not. Maybe it should be a compile-time option instead? Or, if an additional branch in the block allocation code is acceptable, there can be a runtime check to only do the atomic operation if this option is specified on the command line.
The compile-time option means needing an alternate libponyrt available to compile against, which is of course do-able, but seems unfortunate (and not sustainable for multiple features like this). I suspect an additional branch on block allocation would not have a noticeable performance impact, so that seems like an attractive option. Do we have a performance test that might stress block allocation?
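A minimal sketch of the runtime-branch idea being discussed, where the atomic bookkeeping is only paid for when a limit was actually requested (`max_mem` and `track_alloc` are hypothetical names, not the PR's actual identifiers):

```c
#include <stdatomic.h>
#include <stddef.h>

static size_t max_mem = 0;             // 0 means no --ponymaxmem limit given
static _Atomic size_t mem_allocated;   // total bytes handed out by the pool

static inline void track_alloc(size_t size)
{
  // extra branch on block allocation: skip the atomic update entirely
  // when no memory limit was requested on the command line
  if(max_mem != 0)
    atomic_fetch_add_explicit(&mem_allocated, size, memory_order_relaxed);
}
```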
PR updated to add in the branch to decide whether tracking of allocated memory should occur or not. Additionally, added two more command line arguments: `--ponyaggressivegcthreshold` and `--ponyaggressivegcfactor`.

Disclaimer: not run in a controlled environment. Some comparisons were made using `message-ubench` between the original and the modified runtime (the benchmark output listings are not reproduced here).
Prior to this commit, a pony program would keep growing in terms of memory use until the OS would be unable to allocate more virtual memory. This commit adds a new `--ponymaxmem` command line argument to limit the maximum amount of dynamically allocated memory a pony program is allowed to use before the runtime refuses to allocate any more. The limit enforced is in MB and does not exactly match the RSS of a process in linux due to memory allocated by the process outside of the pony pool allocator.
This commit adds two new command line arguments to allow for enabling aggressive GC when the program is using more than a set threshold of memory. The new command line options are `--ponyaggressivegcthreshold` and `--ponyaggressivegcfactor`.
This commit enhances the message-ubench example application to be able to send arrays of arbitrary sizes as part of pings from one actor to another. It also includes the ability to specify how long a receiving actor should hold on to arrays before freeing them for GC.
Prior to this commit, a pony program would keep growing in terms of memory use until the OS would be unable to allocate more virtual memory.

This commit adds a new `--ponymaxmem` command line argument to limit the maximum amount of dynamically allocated memory a pony program is allowed to use before the runtime refuses to allocate any more. The limit enforced is in MB and does not exactly match the RSS of a process in linux due to memory allocated by the process outside of the pony pool allocator. Additionally, GC is now triggered for actors if the application is close to having allocated all of the memory it is allowed and it doesn't have much free in the pool allocator itself.
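As a sketch of the refusal behaviour described above (illustrative names only; the PR's actual enforcement lives in the pool allocator and may differ):

```c
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

static size_t max_mem;                  // --ponymaxmem limit in bytes (0 = none)
static _Atomic size_t mem_allocated;    // total bytes allocated so far

static void* limited_alloc(size_t size)
{
  if(max_mem != 0)
  {
    size_t used = atomic_fetch_add(&mem_allocated, size) + size;

    // once the configured maximum would be exceeded, the runtime
    // refuses to allocate any more and reports an error
    if(used > max_mem)
    {
      fprintf(stderr, "Error: maximum memory (--ponymaxmem) exceeded\n");
      abort();
    }
  }

  return malloc(size);
}
```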
The example program used for testing (listing not reproduced here) uses about 210 MB when run, and results in an error when limited to 100 MB with the new `--ponymaxmem` option once it tries to allocate more than it is allowed.