
train_data samples: '0' #43

Open
kg571852741 opened this issue Aug 8, 2023 · 2 comments

@kg571852741

train_data samples: '0'

Hi, I have an issue with loading the data when I initialize the model: the logger reports `train_data samples: '0'`. I wonder whether my training dataset root config is wrong or whether something else is causing this.

SemanticKITTI data:

```
(SphereFormer) bim-group@bimgroup-MS-7D70:~/Documents/GitHub/lightning-hydra-template/SphereFormer$ ls -p data/SemanticKITTI/dataset/sequences/
00/  01/  02/  03/  04/  05/  06/  07/  08/  09/  10/
```
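
The sequence folders are present, but a sample count of 0 usually means the loader found no scan files inside them. As a quick sanity check, the sketch below counts the `.bin` scans per training sequence (assumptions: `data_root` is taken from the log below, the `velodyne/` subfolder layout is the standard SemanticKITTI one, and the usual train split is sequences 00-07 plus 09-10, with 08 held out for validation):

```python
import glob
import os

# Assumed from the config log below; adjust if your setup differs.
data_root = "data/SemanticKITTI/dataset"
# Standard SemanticKITTI train split (sequence 08 is validation).
train_seqs = ["00", "01", "02", "03", "04", "05", "06", "07", "09", "10"]

for seq in train_seqs:
    pattern = os.path.join(data_root, "sequences", seq, "velodyne", "*.bin")
    scans = glob.glob(pattern)
    print(f"sequence {seq}: {len(scans)} scans")
```

If every sequence prints 0 here, the raw `.bin` files were never extracted into the `velodyne/` subfolders, or the script is being run from a different working directory.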

Error logs:

```
Traceback (most recent call last):
  File "train.py", line 908, in <module>
    main()
  File "train.py", line 92, in main
    main_worker(args.train_gpu, args.ngpus_per_node, args)
  File "train.py", line 294, in main_worker
    collate_fn=collate_fn
  File "/home/bim-group/anaconda3/envs/SphereFormer/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 266, in __init__
    sampler = RandomSampler(dataset, generator=generator)  # type: ignore
  File "/home/bim-group/anaconda3/envs/SphereFormer/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 104, in __init__
    "value, but got num_samples={}".format(self.num_samples))
ValueError: num_samples should be a positive integer value, but got num_samples=0
```
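
For context, the ValueError comes from PyTorch itself, not from SphereFormer: with `shuffle=True`, `DataLoader` builds a `RandomSampler`, and `RandomSampler` rejects any dataset whose `__len__()` returns 0. A minimal sketch (using a hypothetical empty dataset, not the repo's loader) that reproduces the same error:

```python
from torch.utils.data import DataLoader, Dataset

class EmptyDataset(Dataset):
    """Stand-in for a dataset that found no files on disk."""

    def __len__(self):
        return 0

    def __getitem__(self, idx):
        raise IndexError(idx)

# shuffle=True makes DataLoader construct a RandomSampler, which raises:
# ValueError: num_samples should be a positive integer value, but got num_samples=0
loader = DataLoader(EmptyDataset(), batch_size=4, shuffle=True)
```

So the error surfaces in the sampler, but the root cause is a dataset that enumerated zero samples.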

Log.txt:

```
[08/09 07:31:59] main-logger INFO:
a: 0.0125
arch: unet_spherical_transformer
aug: True
base_lr: 0.006
batch_size: 4
batch_size_val: 4
block_reps: 2
block_residual: True
class_weight: [3.1557, 8.7029, 7.8281, 6.1354, 6.3161, 7.9937, 8.9704, 10.1922, 1.6155, 4.2187, 1.9385, 5.5455, 2.0198, 2.6261, 1.3212, 5.1102, 2.5492, 5.8585, 7.3929]
classes: 19
data_name: semantic_kitti
data_root: data/SemanticKITTI/dataset
dist_backend: nccl
dist_url: tcp://127.0.0.1:6789
distributed: False
drop_path_rate: 0.3
drop_rate: 0.5
epochs: 50
eval_freq: 1
evaluate: True
fea_dim: 6
grad_checkpoint_layers: []
ignore_label: 255
input_c: 4
label_mapping: util/semantic-kitti.yaml
layers: [32, 64, 128, 256, 256]
loss_name: ce_loss
m: 32
manual_seed: 123
max_batch_points: 1000000
momentum: 0.9
multiprocessing_distributed: False
ngpus_per_node: 2
patch_size: [0.05, 0.05, 0.05]
pc_range: [[-51.2, -51.2, -4], [51.2, 51.2, 2.4]]
power: 0.9
print_freq: 10
quant_size_scale: 24
rank: 0
rel_key: True
rel_query: True
rel_value: True
resume: None
save_freq: 1
save_path: runs/semantic_kitti_unet32_spherical_transformer
scheduler: Poly
scheduler_update: step
sphere_layers: [1, 2, 3, 4, 5]
start_epoch: 0
sync_bn: True
train_gpu: [0, 1]
transformer_lr_scale: 0.1
use_amp: True
use_tta: False
use_xyz: True
val: False
vote_num: 4
voxel_max: 120000
voxel_size: [0.05, 0.05, 0.05]
weight: None
weight_decay: 0.02
window_size: 6
window_size_scale: [2.0, 1.5]
window_size_sphere: [2, 2, 80]
workers: 32
world_size: 1
xyz_norm: False
[08/09 07:32:00] main-logger INFO: => creating model ...
[08/09 07:32:00] main-logger INFO: Classes: 19
...
[08/09 07:32:01] main-logger INFO: #Model parameters: 32311715
[08/09 07:32:01] main-logger INFO: class_weight: tensor([3.1557, 8.7029, 7.8281, 6.1354, 6.3161, 7.9937, 8.9704, 10.1922, 1.6155, 4.2187, 1.9385, 5.5455, 2.0198, 2.6261, 1.3212, 5.1102, 2.5492, 5.8585, 7.3929], device='cuda:0')
[08/09 07:32:01] main-logger INFO: loss_name: ce_loss
[08/09 07:32:01] main-logger INFO: train_data samples: '0'
```

@X-Lai
Collaborator

X-Lai commented Aug 9, 2023

I suspect a wrong data path is being passed. Can you double-check it?
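
Worth noting that `data_root: data/SemanticKITTI/dataset` is a relative path, so it resolves against whatever directory `train.py` is launched from. A minimal sketch to see what the training process actually resolves (the `data_root` value is copied from the log above):

```python
import os

data_root = "data/SemanticKITTI/dataset"  # relative path from the config
print("cwd:           ", os.getcwd())
print("resolved:      ", os.path.abspath(data_root))
print("has sequences/:", os.path.isdir(os.path.join(data_root, "sequences")))
```

Running the training script from any directory other than the repo root would explain a sample count of 0 even though the data exists on disk.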

@ccdont

ccdont commented Mar 15, 2024

Hi, I ran into the same problem. Have you solved it yet?
