
Pickling error from multiprocessing when running eval.py in algolisp #16

Open
natashabutt opened this issue Mar 18, 2022 · 0 comments
After training the model with train.py, I get the pickling error below when running eval.py. The error seems to come from the use of the multiprocessing package together with the executor module. Could you please advise?

python program_synthesis/algolisp/eval.py --model_type=seq2seq --model_dir=models/seq2seq --no-cuda
Evaluation:
        Model type: seq2seq
        Model path: models/seq2seq
Loaded /data/generated/metaset3.dev.jsonl, total records: 10819
Loaded vocab /data/generated/word.vocab: 331
Loaded vocab /data/generated/word.vocab: 331
Spec2Seq(
  (input_embed): Embedding(351, 500)
  (output_embed): Embedding(351, 500)
  (encoder): SpecEncoder(
    (text_encoder): SequenceEncoder(
      (embed): Embedding(351, 500)
      (encoder): GRU(500, 500, batch_first=True)
    )
    (proj): Linear(in_features=500, out_features=500, bias=True)
  )
  (decoder): SeqDecoderAttn(
    (embed): Embedding(351, 500)
    (decoder): StackedGRU(
      (dropout): Dropout(p=0.2, inplace=False)
      (layers): ModuleList(
        (0): GRUCell(500, 500)
      )
    )
    (out): Linear(in_features=500, out_features=371, bias=False)
    (attention): DotProductAttention(
      (param_0): Linear(in_features=500, out_features=500, bias=False)
      (param_1): Linear(in_features=1500, out_features=500, bias=False)
    )
    (decoder_proj): Linear(in_features=1000, out_features=500, bias=True)
  )
)
Loading model from models/seq2seq/checkpoint
  0%|          | 0/10819 [00:00<?, ?it/s]
/program_synthesis/algolisp/models/prepare_spec.py:30: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  return Variable(t, volatile=volatile)
/program_synthesis/common/modules/decoders.py:183: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  [0 for _ in range(batch_size)]), volatile=True)
/program_synthesis/common/modules/decoders.py:191: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  last_input = Variable(ids, volatile=True)
  0%|          | 0/10819 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "program_synthesis/algolisp/eval.py", line 98, in <module>
    evaluate(args)
  File "program_synthesis/algolisp/eval.py", line 80, in evaluate
    for stats in evaluation.run_inference(eval_dataset, m, current_executor):
  File "/program_synthesis/algolisp/dataset/evaluation.py", line 61, in run_inference
    for stats in model.worker_pool.imap(get_stats_from_code, zip(results, batch, [executor_]*len(batch))):
  File "/usr/src/Python-3.5.9/Lib/multiprocessing/pool.py", line 731, in next
    raise value
  File "/usr/src/Python-3.5.9/Lib/multiprocessing/pool.py", line 424, in _handle_tasks
    put(task)
  File "/usr/src/Python-3.5.9/Lib/multiprocessing/connection.py", line 206, in send
    self._send_bytes(ForkingPickler.dumps(obj))
  File "/usr/src/Python-3.5.9/Lib/multiprocessing/reduction.py", line 50, in dumps
    cls(buf, protocol).dump(obj)
AttributeError: Can't pickle local object 'load_default_lisp_units.<locals>.<lambda>'
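For context on what the final `AttributeError` means: `multiprocessing.Pool.imap` pickles each task's arguments before sending them to the worker processes, and Python's `pickle` cannot serialize a lambda defined inside a function (a "local object"), which is what the traceback reports for `load_default_lisp_units.<locals>.<lambda>`. The sketch below is a hypothetical minimal reproduction (the names `load_units_with_lambda`, `_plus`, and `load_units_picklable` are illustrative, not from the repo) showing the failure and the usual workaround of replacing the lambda with a named module-level function:

```python
import pickle


def load_units_with_lambda():
    # Mimics the failing pattern: a dict whose values include a
    # lambda defined locally inside a function. pickle serializes
    # functions by qualified name, and a local lambda has no
    # importable name, so dumps() fails.
    return {"plus": lambda a, b: a + b}


def _plus(a, b):
    # Module-level function: picklable, because it can be looked up
    # by name when the worker process unpickles the task.
    return a + b


def load_units_picklable():
    # Workaround: reference the named module-level function instead
    # of a lambda.
    return {"plus": _plus}


lambda_units = load_units_with_lambda()
try:
    pickle.dumps(lambda_units)
    lambda_pickles = True
except (AttributeError, pickle.PicklingError):
    # Raises "Can't pickle local object
    # 'load_units_with_lambda.<locals>.<lambda>'", matching the
    # error shape in the traceback above.
    lambda_pickles = False

# The module-level version round-trips through pickle cleanly.
restored = pickle.loads(pickle.dumps(load_units_picklable()))
```

If the lambdas in `load_default_lisp_units` close over arguments, `functools.partial` applied to a module-level function is a picklable alternative to a closure.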
