Here is what I get in response to running the training code:

```
C:\Users\AR\Desktop\marlin\MARLIN>python train.py --config config/pretrain/marlin_vit_base.yaml --data_dir C:\Users\AR\Desktop\marlin\MARLIN\trainingData\YouTubeFaces --n_gpus 1 --num_workers 8 --batch_size 16 --epochs 2000 --official_pretrained C:\Users\AR\Desktop\marlin\MARLIN\videomae\checkpoint_vitb.pth
_IncompatibleKeys(missing_keys=['encoder.pos_embedding.emb', 'decoder.pos_embedding.emb', 'discriminator.layers.0.linear.weight', 'discriminator.layers.0.linear.bias', 'discriminator.layers.1.linear.weight', 'discriminator.layers.1.linear.bias'], unexpected_keys=[])
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Missing logger folder: C:\Users\AR\Desktop\marlin\MARLIN\lightning_logs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Traceback (most recent call last):
  File "C:\Users\AR\Desktop\marlin\MARLIN\train.py", line 141, in <module>
    trainer.fit(model, dm)
  File "C:\Users\AR\AppData\Local\Programs\Python\Python39\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 696, in fit
    self._call_and_handle_interrupt(
  File "C:\Users\AR\AppData\Local\Programs\Python\Python39\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 650, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "C:\Users\AR\AppData\Local\Programs\Python\Python39\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 735, in _fit_impl
    results = self._run(model, ckpt_path=self.ckpt_path)
  File "C:\Users\AR\AppData\Local\Programs\Python\Python39\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1147, in _run
    self.strategy.setup(self)
  File "C:\Users\AR\AppData\Local\Programs\Python\Python39\lib\site-packages\pytorch_lightning\strategies\single_device.py", line 74, in setup
    super().setup(trainer)
  File "C:\Users\AR\AppData\Local\Programs\Python\Python39\lib\site-packages\pytorch_lightning\strategies\strategy.py", line 153, in setup
    self.setup_optimizers(trainer)
  File "C:\Users\AR\AppData\Local\Programs\Python\Python39\lib\site-packages\pytorch_lightning\strategies\strategy.py", line 141, in setup_optimizers
    self.optimizers, self.lr_scheduler_configs, self.optimizer_frequencies = _init_optimizers_and_lr_schedulers(
  File "C:\Users\AR\AppData\Local\Programs\Python\Python39\lib\site-packages\pytorch_lightning\core\optimizer.py", line 194, in _init_optimizers_and_lr_schedulers
    _validate_scheduler_api(lr_scheduler_configs, model)
  File "C:\Users\AR\AppData\Local\Programs\Python\Python39\lib\site-packages\pytorch_lightning\core\optimizer.py", line 351, in _validate_scheduler_api
    raise MisconfigurationException(
pytorch_lightning.utilities.exceptions.MisconfigurationException: The provided lr scheduler `LambdaLR` doesn't follow PyTorch's LRScheduler API. You should override the `LightningModule.lr_scheduler_step` hook with your own logic if you are using a custom LR scheduler.
```
I am a beginner with this stuff, so please be forgiving towards my ignorance.