
InvalidArgumentError: indices[40] = 20000 is not in [0, 20000) #82

Open
yellowbirdwithme opened this issue Mar 26, 2018 · 2 comments

yellowbirdwithme commented Mar 26, 2018

I was running the monkut fork (https://github.com/monkut/tensorflow_chatbot) on Windows 7 with Python 3.5 and TensorFlow r0.12 (CPU), and after just 300 steps an error occurred. I then tried changing the vocabulary size to 30000 and setting a checkpoint every 100 steps. With 1 layer of 128 units the error occurred after 3900 steps, and with 3 layers of 256 units it occurred after 5400 steps.
What kind of error is this? Is there a way to solve it?
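A quick way to check whether the tokenized data itself contains ids outside the truncated vocabulary (a sketch, not from the original thread; the file path is an assumption following the repo's `working_dir` naming pattern):

```python
# Scan a tokenized ids file (one space-separated sequence of integer token
# ids per line) for ids outside the valid range [0, vocab_size).
VOCAB_SIZE = 20000  # matches "Vocab Truncated to: 20000" in the log above

def find_out_of_range(path, vocab_size=VOCAB_SIZE):
    """Return (line_number, token_id) pairs whose id is not in [0, vocab_size)."""
    bad = []
    with open(path) as f:
        for lineno, line in enumerate(f, start=1):
            for tok in line.split():
                tid = int(tok)
                if not 0 <= tid < vocab_size:
                    bad.append((lineno, tid))
    return bad

# Hypothetical usage; adjust the path to your layout:
# print(find_out_of_range("working_dir/train.enc.ids20000"))
```

If this reports any hits, the bug is in the tokenization/truncation step rather than in the model itself.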

Mode : train
Preparing data in working_dir/
Creating vocabulary working_dir/vocab20000.enc from data/train.enc
processing line 100000
Full Vocabulary Size : 45408
Vocab Truncated to: 20000
Creating vocabulary working_dir/vocab20000.dec from data/train.dec
processing line 100000
Full Vocabulary Size : 44271
Vocab Truncated to: 20000
Tokenizing data in data/train.enc
tokenizing line 100000
Tokenizing data in data/train.dec
tokenizing line 100000
Tokenizing data in data/test.enc
Creating 3 layers of 256 units.
Created model with fresh parameters.
Reading development and training data (limit: 0).
reading data line 100000
global step 300 learning rate 0.5000 step-time 3.34 perplexity 377.45
eval: bucket 0 perplexity 96.25
eval: bucket 1 perplexity 210.94
eval: bucket 2 perplexity 267.86
eval: bucket 3 perplexity 365.77
Traceback (most recent call last):
File "C:\Python35 64\lib\site-packages\tensorflow\python\client\session.py", line 1021, in _do_call
return fn(*args)
File "C:\Python35 64\lib\site-packages\tensorflow\python\client\session.py", line 1003, in _run_fn
status, run_metadata)
File "C:\Python35 64\lib\contextlib.py", line 66, in __exit__
next(self.gen)
File "C:\Python35 64\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 469, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[40] = 20000 is not in [0, 20000)
[[Node: model_with_buckets/sequence_loss_3/sequence_loss_by_example/sampled_softmax_loss_28/embedding_lookup_1 = Gather[Tindices=DT_INT64, Tparams=DT_FLOAT, _class=["loc:@proj_b"], validate_indices=true, _device="/job:localhost/replica:0/task:0/cpu:0"](proj_b/read, model_with_buckets/sequence_loss_3/sequence_loss_by_example/sampled_softmax_loss_28/concat)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "execute.py", line 352, in <module>
train()
File "execute.py", line 180, in train
target_weights, bucket_id, False)
File "C:\Users\Администратор\Downloads\tensorflow_chatbot-master (1)\tensorflow_chatbot-master\seq2seq_model.py", line 230, in step
outputs = session.run(output_feed, input_feed)
File "C:\Python35 64\lib\site-packages\tensorflow\python\client\session.py", line 766, in run
run_metadata_ptr)
File "C:\Python35 64\lib\site-packages\tensorflow\python\client\session.py", line 964, in _run
feed_dict_string, options, run_metadata)
File "C:\Python35 64\lib\site-packages\tensorflow\python\client\session.py", line 1014, in _do_run
target_list, options, run_metadata)
File "C:\Python35 64\lib\site-packages\tensorflow\python\client\session.py", line 1034, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[40] = 20000 is not in [0, 20000)
[[Node: model_with_buckets/sequence_loss_3/sequence_loss_by_example/sampled_softmax_loss_28/embedding_lookup_1 = Gather[Tindices=DT_INT64, Tparams=DT_FLOAT, _class=["loc:@proj_b"], validate_indices=true, _device="/job:localhost/replica:0/task:0/cpu:0"](proj_b/read, model_with_buckets/sequence_loss_3/sequence_loss_by_example/sampled_softmax_loss_28/concat)]]

Caused by op 'model_with_buckets/sequence_loss_3/sequence_loss_by_example/sampled_softmax_loss_28/embedding_lookup_1', defined at:
File "execute.py", line 352, in <module>
train()
File "execute.py", line 148, in train
model = create_model(sess, False)
File "execute.py", line 109, in create_model
gConfig['learning_rate_decay_factor'], forward_only=forward_only)
File "C:\Users\Администратор\Downloads\tensorflow_chatbot-master (1)\tensorflow_chatbot-master\seq2seq_model.py", line 158, in __init__
softmax_loss_function=softmax_loss_function)
File "C:\Python35 64\lib\site-packages\tensorflow\python\ops\seq2seq.py", line 1130, in model_with_buckets
softmax_loss_function=softmax_loss_function))
File "C:\Python35 64\lib\site-packages\tensorflow\python\ops\seq2seq.py", line 1058, in sequence_loss
softmax_loss_function=softmax_loss_function))
File "C:\Python35 64\lib\site-packages\tensorflow\python\ops\seq2seq.py", line 1022, in sequence_loss_by_example
crossent = softmax_loss_function(logit, target)
File "C:\Users\Администратор\Downloads\tensorflow_chatbot-master (1)\tensorflow_chatbot-master\seq2seq_model.py", line 101, in sampled_loss
self.target_vocab_size)
File "C:\Python35 64\lib\site-packages\tensorflow\python\ops\nn.py", line 1412, in sampled_softmax_loss
name=name)
File "C:\Python35 64\lib\site-packages\tensorflow\python\ops\nn.py", line 1184, in _compute_sampled_logits
all_b = embedding_ops.embedding_lookup(biases, all_ids)
File "C:\Python35 64\lib\site-packages\tensorflow\python\ops\embedding_ops.py", line 110, in embedding_lookup
validate_indices=validate_indices)
File "C:\Python35 64\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 1293, in gather
validate_indices=validate_indices, name=name)
File "C:\Python35 64\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 759, in apply_op
op_def=op_def)
File "C:\Python35 64\lib\site-packages\tensorflow\python\framework\ops.py", line 2240, in create_op
original_op=self._default_original_op, op_def=op_def)
File "C:\Python35 64\lib\site-packages\tensorflow\python\framework\ops.py", line 1128, in __init__
self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): indices[40] = 20000 is not in [0, 20000)
[[Node: model_with_buckets/sequence_loss_3/sequence_loss_by_example/sampled_softmax_loss_28/embedding_lookup_1 = Gather[Tindices=DT_INT64, Tparams=DT_FLOAT, _class=["loc:@proj_b"], validate_indices=true, _device="/job:localhost/replica:0/task:0/cpu:0"](proj_b/read, model_with_buckets/sequence_loss_3/sequence_loss_by_example/sampled_softmax_loss_28/concat)]]
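The message means an index exactly equal to the vocabulary size (20000) reached the `embedding_lookup` on `proj_b`; with a vocabulary truncated to 20000 entries, valid ids are 0 through 19999, so this looks like an off-by-one at the truncation boundary. A defensive sketch of a guard that remaps any out-of-range id to UNK before it is fed to the model (assuming the conventional special-token layout PAD=0, GO=1, EOS=2, UNK=3 used by the TensorFlow seq2seq `data_utils` this chatbot is based on; verify against your repo's `data_utils.py`):

```python
VOCAB_SIZE = 20000
UNK_ID = 3  # assumption: UNK token id per the seq2seq tutorial's data_utils

def clamp_to_vocab(ids, vocab_size=VOCAB_SIZE, unk_id=UNK_ID):
    """Replace any token id outside [0, vocab_size) with UNK so that
    embedding_lookup never receives an index >= vocab_size."""
    return [tid if 0 <= tid < vocab_size else unk_id for tid in ids]
```

This masks the symptom rather than fixing the tokenizer, but it can confirm that out-of-range ids are the cause.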

alznn commented Jul 31, 2018

Hi @yellowbirdwithme,
I recently encountered a similar error. May I ask whether you managed to solve it?

yellowbirdwithme (Author) commented

Hi @alznn
It seems that using virtualenv with tensorflow-gpu 0.12.0 solved the problem for me.
