From 95f31c182f7ed1493d613e8e3731b8cb466c553b Mon Sep 17 00:00:00 2001
From: "Shunting Zhang (Meta Employee)"
Date: Sun, 31 Mar 2024 18:47:26 -0700
Subject: [PATCH] tune down batch-size for res2net to avoid OOM (#122977)

Summary:
The batch size for this model was previously 64. We later changed it to 256, which causes OOM in the cudagraphs setting. This PR tunes the batch size down to 128.

Sharing more logs from my local run:
```
cuda,res2net101_26w_4s,128,1.603578,110.273572,335.263494,1.042566,11.469964,11.001666,807,2,7,6,0,0
cuda,res2net101_26w_4s,256,1.714980,207.986155,344.013071,1.058278,22.260176,21.034332,807,2,7,6,0,0
```
The logs show that torch.compile uses 11GB at batch size 128 and 21GB at batch size 256. My guess is that the benchmark script has extra overhead that causes the model to OOM at batch size 256 in the dashboard run.

X-link: https://github.com/pytorch/pytorch/pull/122977
Approved by: https://github.com/Chillee

Reviewed By: atalman

Differential Revision: D55561255

Pulled By: shunting314

fbshipit-source-id: 9863e86776d8ed30397806bda330f53c9815f61e
---
 userbenchmark/dynamo/dynamobench/timm_models_list.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/userbenchmark/dynamo/dynamobench/timm_models_list.txt b/userbenchmark/dynamo/dynamobench/timm_models_list.txt
index 91d897d32f..0c13a8cb1d 100644
--- a/userbenchmark/dynamo/dynamobench/timm_models_list.txt
+++ b/userbenchmark/dynamo/dynamobench/timm_models_list.txt
@@ -39,7 +39,7 @@ pnasnet5large 32
 poolformer_m36 128
 regnety_002 1024
 repvgg_a2 128
-res2net101_26w_4s 256
+res2net101_26w_4s 128
 res2net50_14w_8s 128
 res2next50 128
 resmlp_12_224 128
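
For reference, a minimal sketch of how the memory numbers above can be read out of the benchmark CSV lines. The column meanings here (columns 7 and 8 as eager/compiled peak memory in GB) are guesses inferred from the values quoted in the summary, not a documented schema; adjust the indices if the harness output format differs.
```
import csv
import io

# The two log lines quoted in the summary above.
LOG = """\
cuda,res2net101_26w_4s,128,1.603578,110.273572,335.263494,1.042566,11.469964,11.001666,807,2,7,6,0,0
cuda,res2net101_26w_4s,256,1.714980,207.986155,344.013071,1.058278,22.260176,21.034332,807,2,7,6,0,0
"""

for row in csv.reader(io.StringIO(LOG)):
    name, batch_size = row[1], int(row[2])
    speedup = float(row[3])
    # Assumption: columns 7 and 8 are eager/compiled peak memory in GB,
    # matching the "11GB for 128, 21GB for 256" reading in the summary.
    eager_gb, compiled_gb = float(row[7]), float(row[8])
    print(f"{name} bs={batch_size}: speedup={speedup:.2f}x, "
          f"peak mem eager={eager_gb:.1f}GB compiled={compiled_gb:.1f}GB")
```
Note that peak memory roughly doubles from batch size 128 to 256 (about 11GB to about 21GB), which is consistent with the 256 configuration tipping over into OOM once the dashboard run's extra overhead is added.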