hi authors,
thanks for your great work. I have some questions about the training data, especially for SemanticKITTI.
pre_train: I found that you use the training sequences 00 to 10 except 08, and not the validation sequence 08 or the test sequences 11 to 21. Do I understand this correctly? It seems reasonable.
downstream train data: I found that you use `datasets/percentiles_split.json` to load the data. How is this JSON generated? Is it random? A simple alternative would be Python slicing: for a 1% train ratio the skip stride is 100, so one could just use `train_data[::100]`. Why do you use a JSON file instead?
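To make the comparison concrete, here is a minimal sketch of the two subsampling strategies I mean. The scan paths are illustrative placeholders, and I do not know the actual structure of `percentiles_split.json`; the seeded random split is just one plausible reason for shipping a fixed JSON file (reproducibility across runs and machines):

```python
import json
import random

# Placeholder list of training scan paths (names are illustrative only).
train_data = [f"sequences/00/velodyne/{i:06d}.bin" for i in range(4000)]

# Strategy 1: deterministic slicing. For a 1% ratio, stride = 100.
sliced = train_data[::100]

# Strategy 2: a seeded random 1% sample, serialized once to JSON so that
# every run reuses exactly the same subset (a guess at why a file like
# percentiles_split.json exists).
random.seed(0)
sampled = sorted(random.sample(train_data, k=len(train_data) // 100))
split_json = json.dumps({"1": sampled})

# Both approaches keep 1% of the scans; they just pick different frames.
print(len(sliced), len(json.loads(split_json)["1"]))
```

Slicing always picks every 100th consecutive frame, so the subset is biased toward a uniform temporal spacing; a random split avoids that but needs to be frozen somewhere to be reproducible, which a JSON file does.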