
[Bug]: invalid api token #3328

Open

adithep opened this issue Oct 7, 2024 · 3 comments
Labels
area: backend Related to backend functionality or under the /backend directory bug Something isn't working

Comments


adithep commented Oct 7, 2024

What happened?

When trying to chat with a newly created brain, I got:

UnauthorizedError: status_code: 401, body: {'message': 'invalid api token'}

This bug only happens when chatting with a brain that has a document attached.

Chatting directly with the model works fine, so it is likely not a problem with the OpenAI API key.

Relevant log output

backend-api  | OperationalError: server closed the connection unexpectedly
backend-api  |         This probably means the server terminated abnormally
backend-api  |         before or while processing the request.
backend-api  |
backend-api  |
backend-api  | 2024-10-07T07:45:39.920715Z [error    ] [sqlalchemy.pool.impl.AsyncAdaptedQueuePool] msg=The garbage collector is trying to clean up non-checked-in connection <AdaptedConnection <asyncpg.connection.Connection object at 0xffff07afd3f0>>, which will be terminated.  Please ensure that SQLAlchemy pooled connections are returned to the pool explicitly, either by calling ``close()`` or by using appropriate context managers to manage their lifecycle.
backend-api  | /usr/local/lib/python3.11/site-packages/rich/text.py:1138: SAWarning: The garbage collector is trying to clean up non-checked-in connection <AdaptedConnection <asyncpg.connection.Connection object at 0xffff07afd3f0>>, which will be terminated.  Please ensure that SQLAlchemy pooled connections are returned to the pool explicitly, either by calling ``close()`` or by using appropriate context managers to manage their lifecycle.
backend-api  |   _line_appends = [line._spans.append for line in new_lines._lines]
backend-api  | 2024-10-07T07:45:39.921032Z [error    ] [sqlalchemy.pool.impl.QueuePool] msg=Exception during reset or similar
backend-api  | ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
backend-api  | │ /usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py:986 in _finalize_fairy           │
backend-api  | │                                                                                                  │
backend-api  | │    983 │   │   │   │   )                                                                         │
backend-api  | │    984 │   │   │   assert fairy.dbapi_connection is dbapi_connection                             │
backend-api  | │    985 │   │   │                                                                                 │
backend-api  | │ ❱  986 │   │   │   fairy._reset(                                                                 │
backend-api  | │    987 │   │   │   │   pool,                                                                     │
backend-api  | │    988 │   │   │   │   transaction_was_reset=transaction_was_reset,                              │
backend-api  | │    989 │   │   │   │   terminate_only=detach,                                                    │
backend-api  | │                                                                                                  │
backend-api  | │ /usr/local/lib/python3.11/site-packages/sqlalchemy/pool/base.py:1432 in _reset                   │
backend-api  | │                                                                                                  │
backend-api  | │   1429 │   │   │   │   │   │   "Connection %s rollback-on-return",                               │
backend-api  | │   1430 │   │   │   │   │   │   self.dbapi_connection,                                            │
backend-api  | │   1431 │   │   │   │   │   )                                                                     │
backend-api  | │ ❱ 1432 │   │   │   │   pool._dialect.do_rollback(self)                                           │
backend-api  | │   1433 │   │   elif pool._reset_on_return is reset_commit:                                       │
backend-api  | │   1434 │   │   │   if self._echo:                                                                │
backend-api  | │   1435 │   │   │   │   pool.logger.debug(                                                        │
backend-api  | │                                                                                                  │
backend-api  | │ /usr/local/lib/python3.11/site-packages/sqlalchemy/engine/default.py:699 in do_rollback          │
backend-api  | │                                                                                                  │
backend-api  | │    696 │   │   pass                                                                              │
backend-api  | │    697 │                                                                                         │
backend-api  | │    698 │   def do_rollback(self, dbapi_connection):                                              │
backend-api  | │ ❱  699 │   │   dbapi_connection.rollback()                                                       │
backend-api  | │    700 │                                                                                         │
backend-api  | │    701 │   def do_commit(self, dbapi_connection):                                                │
backend-api  | │    702 │   │   dbapi_connection.commit()                                                         │
backend-api  | ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
backend-api  | OperationalError: server closed the connection unexpectedly
backend-api  |         This probably means the server terminated abnormally
backend-api  |         before or while processing the request.
backend-api  |
backend-api  |
backend-api  | 2024-10-07T07:45:39.992070Z [error    ] [sqlalchemy.pool.impl.AsyncAdaptedQueuePool] msg=The garbage collector is trying to clean up non-checked-in connection <AdaptedConnection <asyncpg.connection.Connection object at 0xffff0533a020>>, which will be terminated.  Please ensure that SQLAlchemy pooled connections are returned to the pool explicitly, either by calling ``close()`` or by using appropriate context managers to manage their lifecycle.
backend-api  | /usr/local/lib/python3.11/site-packages/rich/text.py:1138: SAWarning: The garbage collector is trying to clean up non-checked-in connection <AdaptedConnection <asyncpg.connection.Connection object at 0xffff0533a020>>, which will be terminated.  Please ensure that SQLAlchemy pooled connections are returned to the pool explicitly, either by calling ``close()`` or by using appropriate context managers to manage their lifecycle.
backend-api  |   _line_appends = [line._spans.append for line in new_lines._lines]
backend-api  | 2024-10-07T07:45:39.992308Z [error    ] [sqlalchemy.pool.impl.AsyncAdaptedQueuePool] msg=The garbage collector is trying to clean up non-checked-in connection <AdaptedConnection <asyncpg.connection.Connection object at 0xfffefdb38f40>>, which will be terminated.  Please ensure that SQLAlchemy pooled connections are returned to the pool explicitly, either by calling ``close()`` or by using appropriate context managers to manage their lifecycle.
backend-api  | /usr/local/lib/python3.11/site-packages/rich/text.py:1138: SAWarning: The garbage collector is trying to clean up non-checked-in connection <AdaptedConnection <asyncpg.connection.Connection object at 0xfffefdb38f40>>, which will be terminated.  Please ensure that SQLAlchemy pooled connections are returned to the pool explicitly, either by calling ``close()`` or by using appropriate context managers to manage their lifecycle.
backend-api  |   _line_appends = [line._spans.append for line in new_lines._lines]
backend-api  | 2024-10-07T07:45:39.285815Z [error    ] [asyncio] msg=Task exception was never retrieved
backend-api  | future: <Task finished name='Task-2226' coro=<<async_generator_athrow without __name__>()> exception=UnauthorizedError({'message': 'invalid api token'})>
backend-api  | ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
backend-api  | │ /usr/local/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py:661 in              │
backend-api  | │ _astream_log_implementation                                                                      │
backend-api  | │                                                                                                  │
backend-api  | │   658 │   │   # Yield each chunk from the output stream                                          │
backend-api  | │   659 │   │   if diff:                                                                           │
backend-api  | │   660 │   │   │   async for log in stream:                                                       │
backend-api  | │ ❱ 661 │   │   │   │   yield log                                                                  │
backend-api  | │   662 │   │   else:                                                                              │
backend-api  | │   663 │   │   │   state = RunLog(state=None)  # type: ignore[arg-type]                           │
backend-api  | │   664 │   │   │   async for log in stream:                                                       │
backend-api  | ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
backend-api  | GeneratorExit
backend-api  |
backend-api  | During handling of the above exception, another exception occurred:
backend-api  |
backend-api  | ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
backend-api  | │ /usr/local/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py:670 in              │
backend-api  | │ _astream_log_implementation                                                                      │
backend-api  | │                                                                                                  │
backend-api  | │   667 │   finally:                                                                               │
backend-api  | │   668 │   │   # Wait for the runnable to finish, if not cancelled (eg. by break)                 │
backend-api  | │   669 │   │   try:                                                                               │
backend-api  | │ ❱ 670 │   │   │   await task                                                                     │
backend-api  | │   671 │   │   except asyncio.CancelledError:                                                     │
backend-api  | │   672 │   │   │   pass                                                                           │
backend-api  | │   673                                                                                            │
backend-api  | │                                                                                                  │
backend-api  | │ /usr/local/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py:624 in              │
backend-api  | │ consume_astream                                                                                  │
backend-api  | │                                                                                                  │
backend-api  | │   621 │   │   │   prev_final_output: Optional[Output] = None                                     │
backend-api  | │   622 │   │   │   final_output: Optional[Output] = None                                          │
backend-api  | │   623 │   │   │                                                                                  │
backend-api  | │ ❱ 624 │   │   │   async for chunk in runnable.astream(input, config, **kwargs):                  │
backend-api  | │   625 │   │   │   │   prev_final_output = final_output                                           │
backend-api  | │   626 │   │   │   │   if final_output is None:                                                   │
backend-api  | │   627 │   │   │   │   │   final_output = chunk                                                   │
backend-api  | │                                                                                                  │
backend-api  | │ /usr/local/lib/python3.11/site-packages/langgraph/pregel/__init__.py:1411 in astream             │
backend-api  | │                                                                                                  │
backend-api  | │   1408 │   │   │   │   │   interrupt_after=interrupt_after,                                      │
backend-api  | │   1409 │   │   │   │   │   manager=run_manager,                                                  │
backend-api  | │   1410 │   │   │   │   ):                                                                        │
backend-api  | │ ❱ 1411 │   │   │   │   │   async for _ in runner.atick(                                          │
backend-api  | │   1412 │   │   │   │   │   │   loop.tasks.values(),                                              │
backend-api  | │   1413 │   │   │   │   │   │   timeout=self.step_timeout,                                        │
backend-api  | │   1414 │   │   │   │   │   │   retry_policy=self.retry_policy,                                   │
backend-api  | │                                                                                                  │
backend-api  | │ /usr/local/lib/python3.11/site-packages/langgraph/pregel/runner.py:145 in atick                  │
backend-api  | │                                                                                                  │
backend-api  | │   142 │   │   │   # give control back to the caller                                              │
backend-api  | │   143 │   │   │   yield                                                                          │
backend-api  | │   144 │   │   # panic on failure or timeout                                                      │
backend-api  | │ ❱ 145 │   │   _panic_or_proceed(all_futures, asyncio.TimeoutError)                               │
backend-api  | │   146                                                                                            │
backend-api  | │   147                                                                                            │
backend-api  | │   148 def _should_stop_others(                                                                   │
backend-api  | │                                                                                                  │
backend-api  | │ /usr/local/lib/python3.11/site-packages/langgraph/pregel/runner.py:190 in _panic_or_proceed      │
backend-api  | │                                                                                                  │
backend-api  | │   187 │   │   │   while inflight:                                                                │
backend-api  | │   188 │   │   │   │   inflight.pop().cancel()                                                    │
backend-api  | │   189 │   │   │   # raise the exception                                                          │
backend-api  | │ ❱ 190 │   │   │   raise exc                                                                      │
backend-api  | │   191 │   if inflight:                                                                           │
backend-api  | │   192 │   │   # if we got here means we timed out                                                │
backend-api  | │   193 │   │   while inflight:                                                                    │
backend-api  | │                                                                                                  │
backend-api  | │ /usr/local/lib/python3.11/site-packages/langgraph/pregel/retry.py:76 in arun_with_retry          │
backend-api  | │                                                                                                  │
backend-api  | │    73 │   │   │   task.writes.clear()                                                            │
backend-api  | │    74 │   │   │   # run the task                                                                 │
backend-api  | │    75 │   │   │   if stream:                                                                     │
backend-api  | │ ❱  76 │   │   │   │   async for _ in task.proc.astream(task.input, task.config):                 │
backend-api  | │    77 │   │   │   │   │   pass                                                                   │
backend-api  | │    78 │   │   │   else:                                                                          │
backend-api  | │    79 │   │   │   │   await task.proc.ainvoke(task.input, task.config)                           │
backend-api  | │                                                                                                  │
backend-api  | │ /usr/local/lib/python3.11/site-packages/langgraph/utils/runnable.py:504 in astream               │
backend-api  | │                                                                                                  │
backend-api  | │   501 │   │   │   │   │   )                                                                      │
backend-api  | │   502 │   │   │   │   output: Output = None                                                      │
backend-api  | │   503 │   │   │   │   add_supported = False                                                      │
backend-api  | │ ❱ 504 │   │   │   │   async for chunk in aiterator:                                              │
backend-api  | │   505 │   │   │   │   │   yield chunk                                                            │
backend-api  | │   506 │   │   │   │   │   # collect final output                                                 │
backend-api  | │   507 │   │   │   │   │   if add_supported:                                                      │
backend-api  | │                                                                                                  │
backend-api  | │ /usr/local/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py:258 in              │
backend-api  | │ tap_output_aiter                                                                                 │
backend-api  | │                                                                                                  │
backend-api  | │   255 │   │   Yields:                                                                            │
backend-api  | │   256 │   │   │   T: The output value.                                                           │
backend-api  | │   257 │   │   """                                                                                │
backend-api  | │ ❱ 258 │   │   async for chunk in output:                                                         │
backend-api  | │   259 │   │   │   # root run is handled in .astream_log()                                        │
backend-api  | │   260 │   │   │   if run_id != self.root_id:                                                     │
backend-api  | │   261 │   │   │   │   # if we can't find the run silently ignore                                 │
backend-api  | │                                                                                                  │
backend-api  | │ /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:1314 in atransform      │
backend-api  | │                                                                                                  │
backend-api  | │   1311 │   │   final: Input                                                                      │
backend-api  | │   1312 │   │   got_first_val = False                                                             │
backend-api  | │   1313 │   │                                                                                     │
backend-api  | │ ❱ 1314 │   │   async for ichunk in input:                                                        │
backend-api  | │   1315 │   │   │   # The default implementation of transform is to buffer input and              │
backend-api  | │   1316 │   │   │   # then call stream.                                                           │
backend-api  | │   1317 │   │   │   # It'll attempt to gather all input into a single chunk using                 │
backend-api  | │                                                                                                  │
backend-api  | │ /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:875 in astream          │
backend-api  | │                                                                                                  │
backend-api  | │    872 │   │   Yields:                                                                           │
backend-api  | │ run_in_executor                                                                                  │
backend-api  | │                                                                                                  │
backend-api  | │   587 │                                                                                          │
backend-api  | │   588 │   if executor_or_config is None or isinstance(executor_or_config, dict):                 │
backend-api  | │   589 │   │   # Use default executor with context copied from current context                    │
backend-api  | │ ❱ 590 │   │   return await asyncio.get_running_loop().run_in_executor(                           │
backend-api  | │   591 │   │   │   None,                                                                          │
backend-api  | │   592 │   │   │   cast(Callable[..., T], partial(copy_context().run, wrapper)),                  │
backend-api  | │   593 │   │   )                                                                                  │
backend-api  | │                                                                                                  │
backend-api  | │ /usr/local/lib/python3.11/concurrent/futures/thread.py:58 in run                                 │
backend-api  | │                                                                                                  │
backend-api  | │    55 │   │   │   return                                                                         │
backend-api  | │    56 │   │                                                                                      │
backend-api  | │    57 │   │   try:                                                                               │
backend-api  | │ ❱  58 │   │   │   result = self.fn(*self.args, **self.kwargs)                                    │
backend-api  | │    59 │   │   except BaseException as exc:                                                       │
backend-api  | │    60 │   │   │   self.future.set_exception(exc)                                                 │
backend-api  | │    61 │   │   │   # Break a reference cycle with the exception 'exc'                             │
backend-api  | │                                                                                                  │
backend-api  | │ /usr/local/lib/python3.11/site-packages/langchain_core/runnables/config.py:581 in wrapper        │
backend-api  | │                                                                                                  │
backend-api  | │   578 │                                                                                          │
backend-api  | │   579 │   def wrapper() -> T:                                                                    │
backend-api  | │   580 │   │   try:                                                                               │
backend-api  | │ ❱ 581 │   │   │   return func(*args, **kwargs)                                                   │
backend-api  | │   582 │   │   except StopIteration as exc:                                                       │
backend-api  | │   583 │   │   │   # StopIteration can't be set on an asyncio.Future                              │
backend-api  | │   584 │   │   │   # it raises a TypeError and leaves the Future pending forever                  │
backend-api  | │                                                                                                  │
backend-api  | │ /app/core/quivr_core/quivr_rag_langgraph.py:199 in retrieve                                      │
backend-api  | │                                                                                                  │
backend-api  | │   196 │   │   │   dict: The retrieved chunks                                                     │
backend-api  | │   197 │   │   """                                                                                │
backend-api  | │   198 │   │   question = state["messages"][-1].content                                           │
backend-api  | │ ❱ 199 │   │   docs = self.compression_retriever.invoke(question)                                 │
backend-api  | │   200 │   │   return {"docs": docs}                                                              │
backend-api  | │   201 │                                                                                          │
backend-api  | │   202 │   def generate_rag(self, state):                                                         │
backend-api  | │                                                                                                  │
backend-api  | │ /usr/local/lib/python3.11/site-packages/langchain_core/retrievers.py:252 in invoke               │
backend-api  | │                                                                                                  │
backend-api  | │   249 │   │   │   │   result = self._get_relevant_documents(input, **_kwargs)                    │
backend-api  | │   250 │   │   except Exception as e:                                                             │
backend-api  | │   251 │   │   │   run_manager.on_retriever_error(e)                                              │
backend-api  | │ ❱ 252 │   │   │   raise e                                                                        │
backend-api  | │   253 │   │   else:                                                                              │
backend-api  | │   254 │   │   │   run_manager.on_retriever_end(                                                  │
backend-api  | │   255 │   │   │   │   result,                                                                    │
backend-api  | │                                                                                                  │
backend-api  | │ /usr/local/lib/python3.11/site-packages/langchain_core/retrievers.py:245 in invoke               │
backend-api  | │                                                                                                  │
backend-api  | │   242 │   │   try:                                                                               │
backend-api  | │   243 │   │   │   _kwargs = kwargs if self._expects_other_args else {}                           │
backend-api  | │   244 │   │   │   if self._new_arg_supported:                                                    │
backend-api  | │ ❱ 245 │   │   │   │   result = self._get_relevant_documents(                                     │
backend-api  | │   246 │   │   │   │   │   input, run_manager=run_manager, **_kwargs                              │
backend-api  | │   247 │   │   │   │   )                                                                          │
backend-api  | │   248 │   │   │   else:                                                                          │
backend-api  | │                                                                                                  │
backend-api  | │ /usr/local/lib/python3.11/site-packages/langchain/retrievers/contextual_compression.py:46 in     │
backend-api  | │ _get_relevant_documents                                                                          │
backend-api  | │                                                                                                  │
backend-api  | │   43 │   │   │   query, config={"callbacks": run_manager.get_child()}, **kwargs                  │
backend-api  | │   44 │   │   )                                                                                   │
backend-api  | │   45 │   │   if docs:                                                                            │
backend-api  | │ ❱ 46 │   │   │   compressed_docs = self.base_compressor.compress_documents(                      │
backend-api  | │   47 │   │   │   │   docs, query, callbacks=run_manager.get_child()                              │
backend-api  | │   48 │   │   │   )                                                                               │
backend-api  | │   49 │   │   │   return list(compressed_docs)                                                    │
backend-api  | │                                                                                                  │
backend-api  | │ /usr/local/lib/python3.11/site-packages/langchain_cohere/rerank.py:121 in compress_documents     │
backend-api  | │                                                                                                  │
backend-api  | │   118 │   │   │   A sequence of compressed documents.                                            │
backend-api  | │   119 │   │   """                                                                                │
backend-api  | │   120 │   │   compressed = []                                                                    │
backend-api  | │ ❱ 121 │   │   for res in self.rerank(documents, query):                                          │
backend-api  | │   122 │   │   │   doc = documents[res["index"]]                                                  │
backend-api  | │   123 │   │   │   doc_copy = Document(doc.page_content, metadata=deepcopy(doc.metadata))         │
backend-api  | │   124 │   │   │   doc_copy.metadata["relevance_score"] = res["relevance_score"]                  │
backend-api  | │                                                                                                  │
backend-api  | │ /usr/local/lib/python3.11/site-packages/langchain_cohere/rerank.py:88 in rerank                  │
backend-api  | │                                                                                                  │
backend-api  | │    85 │   │   ]                                                                                  │
backend-api  | │    86 │   │   model = model or self.model                                                        │
backend-api  | │    87 │   │   top_n = top_n if (top_n is None or top_n > 0) else self.top_n                      │
backend-api  | │ ❱  88 │   │   results = self.client.rerank(                                                      │
backend-api  | │    89 │   │   │   query=query,                                                                   │
backend-api  | │    90 │   │   │   documents=docs,                                                                │
backend-api  | │    91 │   │   │   model=model,                                                                   │
backend-api  | │                                                                                                  │
backend-api  | │ /usr/local/lib/python3.11/site-packages/cohere/base_client.py:1599 in rerank                     │
backend-api  | │                                                                                                  │
backend-api  | │   1596 │   │   │   │   │   typing.cast(typing.Any, construct_type(type_=typing.Any, object_=_re  │
backend-api  | │   1597 │   │   │   │   )                                                                         │
backend-api  | │   1598 │   │   │   if _response.status_code == 401:                                              │
backend-api  | │ ❱ 1599 │   │   │   │   raise UnauthorizedError(                                                  │
backend-api  | │   1600 │   │   │   │   │   typing.cast(typing.Any, construct_type(type_=typing.Any, object_=_re  │
backend-api  | │   1601 │   │   │   │   )                                                                         │
backend-api  | │   1602 │   │   │   if _response.status_code == 403:                                              │
backend-api  | ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
backend-api  | UnauthorizedError: status_code: 401, body: {'message': 'invalid api token'}

Twitter / LinkedIn details

No response

@adithep adithep added the bug Something isn't working label Oct 7, 2024

linear bot commented Oct 7, 2024


dosubot bot commented Oct 7, 2024

Hey there, @adithep! I'm here to help you with any bugs, questions, or contributions. Let's tackle this issue together!

The "UnauthorizedError: status_code: 401, body: {'message': 'invalid api token'}" occurs when trying to chat with a brain with a document because the authorization process involves validating the user's access to the specific brain using the validate_brain_authorization function. This function checks if the user has the required roles (Viewer, Editor, Owner) for the brain. If the API token is invalid or the user does not have the necessary permissions, the authorization fails, resulting in a 401 error.

In contrast, when chatting directly with the model, this specific authorization step related to the brain is bypassed, allowing the interaction to proceed without triggering the error [1].
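The check described above amounts to something like the following illustrative sketch; the name `validate_brain_authorization` comes from the comment, but the signature and body are hypothetical, not Quivr's actual implementation:

```python
# Roles named in the explanation above; everything else is illustrative.
REQUIRED_ROLES = {"Viewer", "Editor", "Owner"}

class AuthorizationError(Exception):
    """Raised when a user may not access a brain (maps to an HTTP 401)."""

def validate_brain_authorization(user_roles: set[str]) -> None:
    # Fail unless the user holds at least one required role for the brain.
    if not user_roles & REQUIRED_ROLES:
        raise AuthorizationError("401: user lacks access to this brain")
```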


@dosubot dosubot bot added the area: backend Related to backend functionality or under the /backend directory label Oct 7, 2024

adithep commented Oct 7, 2024

I am using the admin user that created the brain, so I do not think that could be the problem.
