
CUDA out of memory during rendering process #9

Open
wcjj1236 opened this issue Jun 9, 2024 · 2 comments

wcjj1236 commented Jun 9, 2024

Hey, do you know how to deal with CUDA running out of memory during the rendering process?

encoding_param_num=1766272, size=0.2105560302734375MB. [08/06 19:57:10]
Reading camera 251/251 [08/06 19:57:11]
start fetching data from ply file [08/06 19:57:11]
Loading Training Cameras [08/06 19:57:11]
Loading Test Cameras [08/06 19:57:15]
Initial voxel_size: 0.01 [08/06 19:57:16]
Number of points at initialisation : 114293 [08/06 19:57:16]
anchor_bound_updated [08/06 19:57:16]
Training progress: 100%|███████████████████████████████████████████| 1000/1000 [01:00<00:00, 16.49it/s, Loss=0.0915876]
2024-06-08 19:58:17,128 - INFO: [ITER 1000] Saving Gaussians
2024-06-08 19:58:18,246 - INFO: Total Training time: 60.38757371902466
2024-06-08 19:58:18,351 - INFO: Training complete.
2024-06-08 19:58:18,351 - INFO: Starting Rendering~
hash_params: True 4 13 (18, 24, 33, 44, 59, 80, 108, 148, 201, 275, 376, 514) 15 (130, 258, 514, 1026) True False False [08/06 19:58:18]
encoding_param_num=1766272, size=0.2105560302734375MB. [08/06 19:58:18]
Loading trained model at iteration 1000 [08/06 19:58:18]
Reading camera 251/251 [08/06 19:58:19]
start fetching data from ply file [08/06 19:58:19]
Loading Training Cameras [08/06 19:58:19]
Loading Test Cameras [08/06 19:58:22]
Rendering progress:   0%|          | 0/32 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "train.py", line 669, in <module>
    visible_count = render_sets(args, lp.extract(args), -1, pp.extract(args), wandb=wandb, logger=logger, x_bound_min=x_bound_min, x_bound_max=x_bound_max)
  File "train.py", line 472, in render_sets
    t_test_list, visible_count = render_set(dataset.model_path, "test", scene.loaded_iter, scene.getTestCameras(), gaussians, pipeline, background)
  File "train.py", line 403, in render_set
    render_pkg = render(view, gaussians, pipeline, background, visible_mask=voxel_visible_mask)
  File "C:\Users\jay\Desktop\HAC\gaussian_renderer\__init__.py", line 225, in render
    cov3D_precomp = None)
  File "C:\Users\jay\AppData\Local\anaconda3\envs\HAC_env\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\jay\AppData\Local\anaconda3\envs\HAC_env\lib\site-packages\diff_gaussian_rasterization\__init__.py", line 222, in forward
    raster_settings,
  File "C:\Users\jay\AppData\Local\anaconda3\envs\HAC_env\lib\site-packages\diff_gaussian_rasterization\__init__.py", line 41, in rasterize_gaussians
    raster_settings,
  File "C:\Users\jay\AppData\Local\anaconda3\envs\HAC_env\lib\site-packages\diff_gaussian_rasterization\__init__.py", line 92, in forward
    num_rendered, color, radii, geomBuffer, binningBuffer, imgBuffer = _C.rasterize_gaussians(*args)
RuntimeError: CUDA out of memory. Tried to allocate 8.54 GiB (GPU 0; 8.00 GiB total capacity; 147.41 MiB already allocated; 5.60 GiB free; 264.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
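The error message itself suggests trying the allocator's max_split_size_mb setting to reduce fragmentation. A minimal sketch of that workaround, not from this thread, with an illustrative 128 MB value:

```python
import os

# Must be set before PyTorch initializes CUDA; 128 MB is an illustrative guess,
# not a value recommended anywhere in this thread.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch  # imported after the env var so the CUDA caching allocator honours it
torch.cuda.init()
```

Since the failing allocation (8.54 GiB) is larger than the card's 8 GiB total capacity, this setting alone may not be enough here.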

@YihangChen-ee (Owner)

Hi, could you please check your GPU type? For reference, we use a 4090 GPU, which has 24 GB of memory.

BTW, OOM is also related to the scene scale.
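A quick way to check the GPU model and memory the maintainer asks about, using standard PyTorch device queries (a sketch, not part of the original replies):

```python
import torch

# Report GPU 0's model and memory so it can be compared against the
# reference setup mentioned above (RTX 4090, 24 GB).
props = torch.cuda.get_device_properties(0)
free_b, total_b = torch.cuda.mem_get_info(0)
print(f"GPU 0: {props.name}")
print(f"total memory:   {props.total_memory / 1024**3:.1f} GiB")
print(f"currently free: {free_b / 1024**3:.1f} GiB")
```

In the traceback above, the card reports "8.00 GiB total capacity", so the renderer's 8.54 GiB allocation cannot fit regardless of allocator settings.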

@wcjj1236 (Author)

wcjj1236 commented Jun 10, 2024 via email
