I've been trying to use your classifier but encountered a couple of problems:
1. In your README, the line wget https://s3-us-west-2.amazonaws.com/pubmed-rct/model.tar.gz downloads the model to the main dir, not output/, while the later commands assume the model will be in the output/ dir (a sketch of my fix follows the log output below).
2. The commands (and web_service.sh) all use discourse_classifier as the predictor, where I believe they should use discourse_predictor instead. discourse_classifier gives the error allennlp.common.checks.ConfigurationError: 'discourse_classifier is not a registered name for Predictor' (also covered in that sketch).
3. While I was able to spot the previous two problems quickly and fixing them was trivial, I ran into a problem I couldn't overcome: the commands for starting the web server do not start it for me. I get
(p37) detecting-scientific-claim$ flask run --host=0.0.0.0
* Serving Flask app "main.py"
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
/home/student/anaconda/envs/p37/lib/python3.7/site-packages/sklearn/utils/linear_assignment_.py:21: DeprecationWarning: The linear_assignment_ module is deprecated in 0.21 and will be removed from 0.23. Use scipy.optimize.linear_sum_assignment instead.
DeprecationWarning)
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 355287325/355287325 [01:43<00:00, 3420728.39B/s]
/home/student/anaconda/envs/p37/lib/python3.7/site-packages/torch/nn/modules/rnn.py:54: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.2 and num_layers=1
"num_layers={}".format(dropout, num_layers))
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 381931118/381931118 [01:36<00:00, 3946648.30B/s]
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
or
(p37) student@gpunode2:~/byczynskaa/detecting-scientific-claim$ bash web_service.sh
/home/student/anaconda/envs/p37/lib/python3.7/site-packages/sklearn/utils/linear_assignment_.py:21: DeprecationWarning: The linear_assignment_ module is deprecated in 0.21 and will be removed from 0.23. Use scipy.optimize.linear_sum_assignment instead.
DeprecationWarning)
/home/student/anaconda/envs/p37/lib/python3.7/site-packages/torch/nn/modules/rnn.py:54: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.2 and num_layers=1
"num_layers={}".format(dropout, num_layers))
Model loaded, serving demo on port 8000
But when I go to localhost:port, I get This site can’t be reached :(
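For reference, fixing issues 1 and 2 was straightforward; this is roughly what I ran. Treat it as a sketch only: the paths follow my reading of the README, and the predict input file name is just a placeholder.

mkdir -p output
# download the archive straight into output/ so the later commands find it
wget -P output/ https://s3-us-west-2.amazonaws.com/pubmed-rct/model.tar.gz
# use discourse_predictor instead of discourse_classifier, and register the package
allennlp predict output/model.tar.gz input.jsonl --predictor discourse_predictor --include-package discourse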
Also, since I downloaded the latest allennlp, the flags might have changed; the evaluate command should now be of the form allennlp evaluate model.tar.gz https://s3-us-west-2.amazonaws.com/pubmed-rct/test.json --include-package discourse
Could you help me fix issue 3?
Thanks!
Yeah, I tried both ports, accessing http://localhost:5000 and http://localhost:8000, and also used the --port flag to change to some other port, but it always says the same thing. Thanks!
Hi @olabknbit I can successfully run the Flask script on my laptop and the Ubuntu laptop. My Flask version is 0.12.4 and my allennlp version is 0.7.1. Maybe you can try running a simple Flask script such as the one at http://flask.pocoo.org/ first to see if it works? I will try to see if I can reproduce the error soon.
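For concreteness, a minimal test script along the lines of the Flask quickstart at http://flask.pocoo.org/ would look roughly like this (hello.py is only a placeholder name, nothing from this repo):

from flask import Flask

app = Flask(__name__)  # minimal app with no model loading at all

@app.route('/')
def hello():
    # if this endpoint is reachable, Flask itself and the network path are fine
    return 'Hello, World!'

Running it with FLASK_APP=hello.py flask run --host=0.0.0.0 and then curl http://127.0.0.1:5000/ on gpunode2 itself would show whether the server is listening at all; if that works but a browser on another machine still can't reach it, the problem is more likely the port or firewall between the machines than the claim-detection code.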