
'DiscourseCrfClassifier' object has no attribute 'classifier_feedforward' #24

Open
pivettamarcos opened this issue May 13, 2020 · 1 comment


@pivettamarcos

So I tried running your transfer_learning_crf.py script, and it throws this error:

AttributeError                            Traceback (most recent call last)
<ipython-input-2-fd6f08fec54e> in <module>
     47     num_classes, constraints, include_start_end_transitions = 2, None, False
     48     model.classifier_feedforward._linear_layers = ModuleList([torch.nn.Linear(2 * EMBEDDING_DIM, EMBEDDING_DIM), 
---> 49                                                               torch.nn.Linear(EMBEDDING_DIM, num_classes)])
     50     model.crf = ConditionalRandomField(num_classes, constraints, 
     51                                        include_start_end_transitions=include_start_end_transitions)

~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
    592                 return modules[name]
    593         raise AttributeError("'{}' object has no attribute '{}'".format(
--> 594             type(self).__name__, name))
    595 
    596     def __setattr__(self, name, value):

AttributeError: 'DiscourseCrfClassifier' object has no attribute 'classifier_feedforward'

In fact, DiscourseCrfClassifier no longer has that attribute; it was removed in an earlier commit.
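For what it's worth, the second traceback below shows the model now projecting with `self.label_projection_layer` before `self.crf.viterbi_tags`, so I assume any patch would need to target that attribute instead of `classifier_feedforward`. A minimal sketch of what I mean (the dummy class and the dimensions here are placeholders, not the real model; in the actual model the layer is wrapped in AllenNLP's TimeDistributed, but a plain `torch.nn.Linear` already broadcasts over the sentence dimension, which is enough to illustrate the shapes):

```python
import torch

ENCODER_DIM = 600   # assumed output size of the model's sentence encoder
num_classes = 2     # new (binary) label set for transfer learning

class DummyModel(torch.nn.Module):
    """Stand-in for DiscourseCrfClassifier, keeping only the attribute of interest."""
    def __init__(self):
        super().__init__()
        # original head, sized for the old label set (5 classes here, arbitrarily)
        self.label_projection_layer = torch.nn.Linear(ENCODER_DIM, 5)

model = DummyModel()

# Swap the projection head for the new label set, keeping the encoder's output
# size as the input size:
model.label_projection_layer = torch.nn.Linear(ENCODER_DIM, num_classes)

encoded = torch.randn(4, 10, ENCODER_DIM)        # (n_batch, n_sents, encoder_dim)
logits = model.label_projection_layer(encoded)   # (n_batch, n_sents, n_classes)
print(tuple(logits.shape))
```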

I tried commenting out the line that uses the attribute, but it then gives me a different error:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-1-331d1897d4d1> in <module>
    102         cuda_device=-1
    103     )
--> 104     trainer.train()
    105 
    106     # unfreeze most layers and continue training

~/anaconda3/lib/python3.7/site-packages/allennlp/training/trainer.py in train(self)
    476         for epoch in range(epoch_counter, self._num_epochs):
    477             epoch_start_time = time.time()
--> 478             train_metrics = self._train_epoch(epoch)
    479 
    480             # get peak of memory usage

~/anaconda3/lib/python3.7/site-packages/allennlp/training/trainer.py in _train_epoch(self, epoch)
    318             self.optimizer.zero_grad()
    319 
--> 320             loss = self.batch_loss(batch_group, for_training=True)
    321 
    322             if torch.isnan(loss):

~/anaconda3/lib/python3.7/site-packages/allennlp/training/trainer.py in batch_loss(self, batch_group, for_training)
    259             batch = batch_group[0]
    260             batch = nn_util.move_to_device(batch, self._cuda_devices[0])
--> 261             output_dict = self.model(**batch)
    262 
    263         try:

~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

/media/sf_COVID19KTool/COVID19KTool/discourse/models/discourse_crf_model.py in forward(self, sentences, labels)
     77 
     78         # CRF prediction
---> 79         logits = self.label_projection_layer(encoded_sentences) # size: (n_batch, n_sents, n_classes)
     80         best_paths = self.crf.viterbi_tags(logits, sentence_masks)
     81         predicted_labels = [x for x, y in best_paths]

~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

~/anaconda3/lib/python3.7/site-packages/allennlp/modules/time_distributed.py in forward(self, pass_through, *inputs, **kwargs)
     49             reshaped_kwargs[key] = value
     50 
---> 51         reshaped_outputs = self._module(*reshaped_inputs, **reshaped_kwargs)
     52 
     53         if some_input is None:

~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/linear.py in forward(self, input)
     85 
     86     def forward(self, input):
---> 87         return F.linear(input, self.weight, self.bias)
     88 
     89     def extra_repr(self):

~/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py in linear(input, weight, bias)
   1608     if input.dim() == 2 and bias is not None:
   1609         # fused op is marginally faster
-> 1610         ret = torch.addmm(bias, input, weight.t())
   1611     else:
   1612         output = input.matmul(weight.t())

RuntimeError: size mismatch, m1: [640 x 600], m2: [400 x 2] at /pytorch/aten/src/TH/generic/THTensorMath.cpp:41
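Reading the shapes in that last line: `F.linear` flattens `(n_batch, n_sents, dim)` to 2-D, so `m1: [640 x 600]` is 640 flattened sentence vectors of 600 encoder features, while `m2: [400 x 2]` is the transposed weight of the `Linear(2 * EMBEDDING_DIM, ...)` built by the script (so `2 * EMBEDDING_DIM = 400`). If I'm reading it right, the replacement layer's input size needs to match the encoder's 600, not `2 * EMBEDDING_DIM`. A small reproduction of my reading (the concrete numbers are inferred from the traceback, so treat them as assumptions):

```python
import torch

# 640 flattened sentence vectors, each with 600 encoder features (from m1)
encoded = torch.randn(640, 600)

# What the script builds: input size 2 * EMBEDDING_DIM = 400 -> shape mismatch
wrong = torch.nn.Linear(400, 2)
try:
    wrong(encoded)
except RuntimeError as err:
    print("mismatch:", type(err).__name__)

# Input size matched to the encoder output -> works, logits of shape (640, 2)
right = torch.nn.Linear(600, 2)
print(tuple(right(encoded).shape))
```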

Am I doing something wrong? And could you update the transfer learning script to match this change?

Thanks.

@Shiyun-W

Hi, I've run into the same problem. Have you managed to solve it? If so, could you please share how? Thank you very much for any help!
