
07.1 Text Classification & 07.2 Emoji Suggestions -- ModuleNotFoundError: No module named 'nb_utils' #70

Open
mikechen66 opened this issue May 4, 2020 · 3 comments

Comments

mikechen66 commented May 4, 2020

Hi Douwe:

How can I deal with this issue? Many applications in the Deep Learning Cookbook import the nb_utils module. Besides 07.1, 07.2 Emoji Suggestions has the same problem. Would you please look into it?

import pandas as pd
from keras.utils.data_utils import get_file
import nb_utils

emotion_csv = get_file('text_emotion.csv',
'https://www.crowdflower.com/wp-content/uploads/2016/07/text_emotion.csv')
emotion_df = pd.read_csv(emotion_csv)

Using TensorFlow backend.


ModuleNotFoundError Traceback (most recent call last)
in
1 import pandas as pd
2 from keras.utils.data_utils import get_file
----> 3 import nb_utils
4
5 emotion_csv = get_file('text_emotion.csv',

ModuleNotFoundError: No module named 'nb_utils'

Best regards,

Mike

mikechen66 changed the title from 07.1 Text Classification -- ModuleNotFoundError: No module named 'nb_utils' to 07.1 Text Classification & 07.2 Emoji Suggestions -- ModuleNotFoundError: No module named 'nb_utils' on May 4, 2020
DOsinga (Owner) commented May 4, 2020

Thanks for flagging. I'll have a look!

mikechen66 (Author) commented May 4, 2020

Hi Douwe:

I solved the issue by uploading nb_utils.py and changing the directory name from deep_learning_cookbook-master to dl-cookbook.

The downloaded archive is named deep_learning_cookbook-master, but nb_utils.py refers to the name dl-cookbook, so the two names conflict.
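For reference, a minimal sketch of an alternative workaround, assuming nb_utils.py sits in the repository root (the path below is just an example and should be adjusted to the local checkout name):

import os
import sys

# Assumed location of the unpacked repository -- the folder that contains nb_utils.py.
# Adjust this to whatever the checkout is called locally (e.g. deep_learning_cookbook-master).
PROJECT_ROOT = os.path.expanduser('~/Documents/deep_learning_cookbook-master')

# Put the repository root on the import path so the notebook finds nb_utils
# regardless of the directory name.
if PROJECT_ROOT not in sys.path:
    sys.path.insert(0, PROJECT_ROOT)

import nb_utils

With the root on sys.path, renaming the directory is no longer necessary.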

Best regards,

Mike

mikechen66 (Author) commented May 4, 2020

After solving the nb_utils issue, a new one emerges. Below are the issue and the solution I used.

Parsing issue:

emotion_csv = get_file('text_emotion.csv', 'https://raw.githubusercontent.com/johnvblazic/emotionDetectionDataset/master/text_emotion.csv')

Solution:

I solved it with a locally downloaded text_emotion.csv. The download address is https://github.com/johnvblazic/emotionDetectionDataset. The related line of code is as follows.

emotion_csv = '/home/user/Documents/dl-cookbook/data/text_emotion.csv'
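For completeness, a minimal sketch of the replacement cell, assuming the CSV has been downloaded from that repository into the project's data/ folder (the path and URL below are examples from my setup):

import pandas as pd

# Local copy of the dataset, downloaded from
# https://github.com/johnvblazic/emotionDetectionDataset
emotion_csv = '/home/user/Documents/dl-cookbook/data/text_emotion.csv'
emotion_df = pd.read_csv(emotion_csv)

# Reading the raw file straight from GitHub should also work:
# emotion_df = pd.read_csv('https://raw.githubusercontent.com/johnvblazic/emotionDetectionDataset/master/text_emotion.csv')

print(emotion_df.shape)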

#------------------------------------------------------------------------------------------------------------------------
The error message related to the parsing issue:

ParserError Traceback (most recent call last)
in
5 emotion_csv = get_file('text_emotion.csv',
6 'https://www.crowdflower.com/wp-content/uploads/2016/07/text_emotion.csv')
----> 7 emotion_df = pd.read_csv(emotion_csv)

/usr/local/lib/python3.6/dist-packages/pandas/io/parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision)
674 )
675
--> 676 return _read(filepath_or_buffer, kwds)
677
678 parser_f.__name__ = name

/usr/local/lib/python3.6/dist-packages/pandas/io/parsers.py in _read(filepath_or_buffer, kwds)
452
453 try:
--> 454 data = parser.read(nrows)
455 finally:
456 parser.close()

/usr/local/lib/python3.6/dist-packages/pandas/io/parsers.py in read(self, nrows)
1131 def read(self, nrows=None):
1132 nrows = _validate_integer("nrows", nrows)
-> 1133 ret = self._engine.read(nrows)
1134
1135 # May alter columns / col_dict

/usr/local/lib/python3.6/dist-packages/pandas/io/parsers.py in read(self, nrows)
2035 def read(self, nrows=None):
2036 try:
-> 2037 data = self._reader.read(nrows)
2038 except StopIteration:
2039 if self._first_chunk:

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.read()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_low_memory()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_rows()

pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._tokenize_rows()

pandas/_libs/parsers.pyx in pandas._libs.parsers.raise_parser_error()

ParserError: Error tokenizing data. C error: Expected 15 fields in line 17, saw 25
