
Trouble running web_service.sh and following examples from README #20

Open

olabknbit opened this issue Jun 6, 2019 · 3 comments

@olabknbit
I've been trying to use your classifier but encountered a couple of problems:

  1. In your README, the line wget https://s3-us-west-2.amazonaws.com/pubmed-rct/model.tar.gz downloads the model to the main directory, not to output/, while the later commands assume the model is in the output directory (a sketch of the corrected commands follows this list).
  2. The commands (and web_service.sh) all use discourse_classifier as the predictor, where I believe they should use discourse_predictor instead. discourse_classifier gives the error allennlp.common.checks.ConfigurationError: 'discourse_classifier is not a registered name for Predictor'
  3. While I was able to spot the previous problems quickly and fixing them was trivial, I stumbled across a problem I couldn't overcome: the commands for starting the web server do not start it for me. I get
(p37) detecting-scientific-claim$ flask run --host=0.0.0.0
 * Serving Flask app "main.py"
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
/home/student/anaconda/envs/p37/lib/python3.7/site-packages/sklearn/utils/linear_assignment_.py:21: DeprecationWarning: The linear_assignment_ module is deprecated in 0.21 and will be removed from 0.23. Use scipy.optimize.linear_sum_assignment instead.
  DeprecationWarning)
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 355287325/355287325 [01:43<00:00, 3420728.39B/s]
/home/student/anaconda/envs/p37/lib/python3.7/site-packages/torch/nn/modules/rnn.py:54: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.2 and num_layers=1
  "num_layers={}".format(dropout, num_layers))
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 381931118/381931118 [01:36<00:00, 3946648.30B/s]
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)

or

(p37) student@gpunode2:~/byczynskaa/detecting-scientific-claim$ bash web_service.sh 
/home/student/anaconda/envs/p37/lib/python3.7/site-packages/sklearn/utils/linear_assignment_.py:21: DeprecationWarning: The linear_assignment_ module is deprecated in 0.21 and will be removed from 0.23. Use scipy.optimize.linear_sum_assignment instead.
  DeprecationWarning)
/home/student/anaconda/envs/p37/lib/python3.7/site-packages/torch/nn/modules/rnn.py:54: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.2 and num_layers=1
  "num_layers={}".format(dropout, num_layers))
Model loaded, serving demo on port 8000

But when I go to localhost:<port>, I get "This site can't be reached" :(

  4. Also, since I downloaded the latest allennlp, the flags might have changed, and the evaluate command should be of the form allennlp evaluate model.tar.gz https://s3-us-west-2.amazonaws.com/pubmed-rct/test.json --include-package discourse
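
For reference, a rough sketch of what I think the corrected commands for 1. and 2. should look like (input.jsonl here is just a placeholder for whatever input file the README uses; exact flags may differ between allennlp versions):

    # put the downloaded model where the later commands expect it
    wget https://s3-us-west-2.amazonaws.com/pubmed-rct/model.tar.gz
    mv model.tar.gz output/
    # use the registered predictor name instead of discourse_classifier
    allennlp predict output/model.tar.gz input.jsonl --include-package discourse --predictor discourse_predictor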

Could you help me fix issue 3?
Thanks!

@titipata (Owner) commented Jun 6, 2019

@olabknbit thanks so much!!

  1. and 2.: Thanks so much, I will update the repo accordingly (soon)!
  3. For the Flask code, can you try to see if http://localhost:5000 is reachable (for example with the curl check below)? I will check tomorrow/next week to see if I can reproduce the issue.
  4. Yeah, I probably have to check the specific version of allennlp; the one that I trained with was 0.7.1.
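
For example, a quick check from the same machine while the server is running (just a sanity check, not specific to this repo):

    curl -v http://localhost:5000/

If curl gets a response but the browser does not, the server itself is fine and the problem is on the network side.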

@olabknbit (Author)

Yeah, I tried both ports, accessing http://localhost:5000 and http://localhost:8000, and also used the --port flag to change to some other port, but it always says the same thing. Thanks!

@titipata (Owner)

Hi @olabknbit, I can successfully run the Flask script on my laptop and on an Ubuntu laptop. My Flask version is 0.12.4 and my allennlp version is 0.7.1. Maybe you can try running a simple Flask script, such as the one at http://flask.pocoo.org/, first to see if Flask works at all? I will try to see if I can reproduce the error soon.
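
Something like this minimal app (adapted from the Flask quickstart, not from this repo) should be enough to check that Flask itself can serve pages on your machine:

    # hello.py -- minimal Flask app, adapted from the Flask quickstart
    from flask import Flask

    app = Flask(__name__)

    @app.route('/')
    def hello():
        return 'Hello, World!'

    if __name__ == '__main__':
        # bind to all interfaces so the page is reachable from outside the host
        app.run(host='0.0.0.0', port=5000)

You can run it with python hello.py and then check whether http://localhost:5000 responds; if this also fails, the problem is with the environment or the network rather than with this repo.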
