
example randomly fail in gradio - weather service. #350

Open
B1tMaster opened this issue Dec 30, 2024 · 2 comments
@B1tMaster
  1. I get a random error when I run the example. I started the weather example, but it seems to be running the insurance agent anyway. This could be the first bug.

  2. Random error: it succeeds about 60% of the time and fails the other 40% with these log lines:
    weather_forecast_service-1 | INFO: 127.0.0.1:38600 - "GET /healthz HTTP/1.1" 200 OK
    weather_forecast_service-1 | INFO: 172.18.0.1:60232 - "POST /policy/qa HTTP/1.1" 404 Not Found
    chatbot_ui-1 | 2024-12-30 09:36:50,957 - INFO - HTTP Request: POST http://host.docker.internal:10000/v1/chat/completions "HTTP/1.1 404 Not Found"
    chatbot_ui-1 | 2024-12-30 09:36:50,957 - INFO - Error calling gateway API: upstream application error host=arch_internal, path=/function_calling, status=404, body={"detail":"Not Found"}
    chatbot_ui-1 | Traceback (most recent call last):
    chatbot_ui-1 | File "/app/run_stream.py", line 61, in chat
    chatbot_ui-1 | response = client.chat.completions.create(
    chatbot_ui-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/openai/_utils/_utils.py", line 274, in wrapper
    chatbot_ui-1 | return func(*args, **kwargs)
    chatbot_ui-1 | ^^^^^^^^^^^^^^^^^^^^^
    chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 742, in create
    chatbot_ui-1 | return self._post(
    chatbot_ui-1 | ^^^^^^^^^^^
    chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1270, in post
    chatbot_ui-1 | return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
    chatbot_ui-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 947, in request
    chatbot_ui-1 | return self._request(
    chatbot_ui-1 | ^^^^^^^^^^^^^^
    chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1051, in _request
    chatbot_ui-1 | raise self._make_status_error_from_response(err.response) from None
    chatbot_ui-1 | openai.NotFoundError: upstream application error host=arch_internal, path=/function_calling, status=404, body={"detail":"Not Found"}
    chatbot_ui-1 |
    chatbot_ui-1 | During handling of the above exception, another exception occurred:
    chatbot_ui-1 |
    chatbot_ui-1 | Traceback (most recent call last):
    chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/gradio/queueing.py", line 624, in process_events
    chatbot_ui-1 | response = await route_utils.call_process_api(
    chatbot_ui-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/gradio/route_utils.py", line 323, in call_process_api
    chatbot_ui-1 | output = await app.get_blocks().process_api(
    chatbot_ui-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/gradio/blocks.py", line 2018, in process_api
    chatbot_ui-1 | result = await self.call_function(
    chatbot_ui-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^
    chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/gradio/blocks.py", line 1579, in call_function
    chatbot_ui-1 | prediction = await utils.async_iteration(iterator)
    chatbot_ui-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/gradio/utils.py", line 691, in async_iteration
    chatbot_ui-1 | return await anext(iterator)
    chatbot_ui-1 | ^^^^^^^^^^^^^^^^^^^^^
    chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/gradio/utils.py", line 685, in anext
    chatbot_ui-1 | return await anyio.to_thread.run_sync(
    chatbot_ui-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/anyio/to_thread.py", line 56, in run_sync
    chatbot_ui-1 | return await get_async_backend().run_sync_in_worker_thread(
    chatbot_ui-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2505, in run_sync_in_worker_thread
    chatbot_ui-1 | return await future
    chatbot_ui-1 | ^^^^^^^^^^^^
    chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 1005, in run
    chatbot_ui-1 | result = context.run(func, *args)
    chatbot_ui-1 | ^^^^^^^^^^^^^^^^^^^^^^^^
    chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/gradio/utils.py", line 668, in run_sync_iterator_async
    chatbot_ui-1 | return next(iterator)
    chatbot_ui-1 | ^^^^^^^^^^^^^^
    chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/gradio/utils.py", line 829, in gen_wrapper
    chatbot_ui-1 | response = next(iterator)
    chatbot_ui-1 | ^^^^^^^^^^^^^^
    chatbot_ui-1 | File "/app/run_stream.py", line 72, in chat
    chatbot_ui-1 | raise gr.Error("Error calling gateway API: {}".format(e))
    chatbot_ui-1 | gradio.exceptions.Error: 'Error calling gateway API: upstream application error host=arch_internal, path=/function_calling, status=404, body={"detail":"Not Found"}'
    weather_forecast_service-1 | INFO: 127.0.0.1:38614 - "GET /healthz HTTP/1.1" 200 OK
    weather_forecast_service-1 | INFO: 127.0.0.1:53028 - "GET /healthz HTTP/1.1" 200 OK
    weather_forecast_service-1 | INFO: 127.0.0.1:53032 - "GET /healthz HTTP/1.1" 200 OK
    weather_forecast_service-1 | INFO: 127.0.0.1:48428 - "GET /healthz HTTP/1.1" 200 OK
    weather_forecast_service-1 | INFO: 127.0.0.1:48430 - "GET /healthz HTTP/1.1" 200 OK
    weather_forecast_service-1 | INFO: 127.0.0.1:60944 - "GET /healthz HTTP/1.1" 200 OK
    weather_forecast_service-1 | INFO: 127.0.0.1:60952 - "GET /healthz HTTP/1.1" 200 OK
    weather_forecast_service-1 | INFO: 127.0.0.1:58234 - "GET /healthz HTTP/1.1" 200 OK

Here is what I see in the Gradio GUI:

(Screenshot: 2024-12-30 at 5:36:55 PM)
@B1tMaster (Author)

Here is proof that I indeed started the weather demo, but the agent was talking about insurance:

chatbot_ui-1 | File "/app/run_stream.py", line 72, in chat
chatbot_ui-1 | raise gr.Error("Error calling gateway API: {}".format(e))
chatbot_ui-1 | gradio.exceptions.Error: 'Error calling gateway API: upstream application error host=arch_internal, path=/function_calling, status=404, body={"detail":"Not Found"}'
weather_forecast_service-1 | INFO: 127.0.0.1:46774 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:56582 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:56588 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:48718 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:48728 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:47858 - "GET /healthz HTTP/1.1" 200 OK
chatbot_ui-1 | 2024-12-30 09:36:21,738 - INFO - HTTP Request: POST http://host.docker.internal:10000/v1/chat/completions "HTTP/1.1 200 OK"
weather_forecast_service-1 | INFO: 127.0.0.1:47860 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:46536 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:46546 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:37538 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:37554 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:38600 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 172.18.0.1:60232 - "POST /policy/qa HTTP/1.1" 404 Not Found
chatbot_ui-1 | 2024-12-30 09:36:50,957 - INFO - HTTP Request: POST http://host.docker.internal:10000/v1/chat/completions "HTTP/1.1 404 Not Found"
chatbot_ui-1 | 2024-12-30 09:36:50,957 - INFO - Error calling gateway API: upstream application error host=arch_internal, path=/function_calling, status=404, body={"detail":"Not Found"}
chatbot_ui-1 | Traceback (most recent call last):
chatbot_ui-1 | File "/app/run_stream.py", line 61, in chat
chatbot_ui-1 | response = client.chat.completions.create(
chatbot_ui-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/openai/_utils/_utils.py", line 274, in wrapper
chatbot_ui-1 | return func(*args, **kwargs)
chatbot_ui-1 | ^^^^^^^^^^^^^^^^^^^^^
chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 742, in create
chatbot_ui-1 | return self._post(
chatbot_ui-1 | ^^^^^^^^^^^
chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1270, in post
chatbot_ui-1 | return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
chatbot_ui-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 947, in request
chatbot_ui-1 | return self._request(
chatbot_ui-1 | ^^^^^^^^^^^^^^
chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/openai/_base_client.py", line 1051, in _request
chatbot_ui-1 | raise self._make_status_error_from_response(err.response) from None
chatbot_ui-1 | openai.NotFoundError: upstream application error host=arch_internal, path=/function_calling, status=404, body={"detail":"Not Found"}
chatbot_ui-1 |
chatbot_ui-1 | During handling of the above exception, another exception occurred:
chatbot_ui-1 |
chatbot_ui-1 | Traceback (most recent call last):
chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/gradio/queueing.py", line 624, in process_events
chatbot_ui-1 | response = await route_utils.call_process_api(
chatbot_ui-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/gradio/route_utils.py", line 323, in call_process_api
chatbot_ui-1 | output = await app.get_blocks().process_api(
chatbot_ui-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/gradio/blocks.py", line 2018, in process_api
chatbot_ui-1 | result = await self.call_function(
chatbot_ui-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^
chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/gradio/blocks.py", line 1579, in call_function
chatbot_ui-1 | prediction = await utils.async_iteration(iterator)
chatbot_ui-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/gradio/utils.py", line 691, in async_iteration
chatbot_ui-1 | return await anext(iterator)
chatbot_ui-1 | ^^^^^^^^^^^^^^^^^^^^^
chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/gradio/utils.py", line 685, in anext
chatbot_ui-1 | return await anyio.to_thread.run_sync(
chatbot_ui-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/anyio/to_thread.py", line 56, in run_sync
chatbot_ui-1 | return await get_async_backend().run_sync_in_worker_thread(
chatbot_ui-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2505, in run_sync_in_worker_thread
chatbot_ui-1 | return await future
chatbot_ui-1 | ^^^^^^^^^^^^
chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 1005, in run
chatbot_ui-1 | result = context.run(func, *args)
chatbot_ui-1 | ^^^^^^^^^^^^^^^^^^^^^^^^
chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/gradio/utils.py", line 668, in run_sync_iterator_async
chatbot_ui-1 | return next(iterator)
chatbot_ui-1 | ^^^^^^^^^^^^^^
chatbot_ui-1 | File "/usr/local/lib/python3.12/site-packages/gradio/utils.py", line 829, in gen_wrapper
chatbot_ui-1 | response = next(iterator)
chatbot_ui-1 | ^^^^^^^^^^^^^^
chatbot_ui-1 | File "/app/run_stream.py", line 72, in chat
chatbot_ui-1 | raise gr.Error("Error calling gateway API: {}".format(e))
chatbot_ui-1 | gradio.exceptions.Error: 'Error calling gateway API: upstream application error host=arch_internal, path=/function_calling, status=404, body={"detail":"Not Found"}'
weather_forecast_service-1 | INFO: 127.0.0.1:38614 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:53028 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:53032 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:48428 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:48430 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:60944 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:60952 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:58234 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:58238 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:41166 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:41180 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:44300 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:44316 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:49762 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:49770 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:58790 - "GET /healthz HTTP/1.1" 200 OK
grafana-1 | logger=cleanup t=2024-12-30T09:38:10.154643919Z level=info msg="Completed cleanup jobs" duration=44.456292ms
grafana-1 | logger=plugins.update.checker t=2024-12-30T09:38:10.585679128Z level=info msg="Update check succeeded" duration=242.276ms

weather_forecast_service-1 | INFO: 127.0.0.1:58806 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:56422 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:56428 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:41828 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:41838 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:40194 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:40200 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:33746 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:33750 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:50898 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:50904 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:42174 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:42178 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:48536 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:48552 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:54196 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:54210 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:40314 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:40330 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:45714 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:45724 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:57612 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:57628 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:59226 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:59242 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:54070 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:54076 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:51148 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:51164 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:52984 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:52988 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:36620 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:36634 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:48672 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:48680 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:44364 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:39688 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:39700 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:42066 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:42082 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:40482 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:40494 - "GET /healthz HTTP/1.1" 200 OK

weather_forecast_service-1 | INFO: 127.0.0.1:45436 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:45446 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:53974 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:53988 - "GET /healthz HTTP/1.1" 200 OK
weather_forecast_service-1 | INFO: 127.0.0.1:57224 - "GET /healthz HTTP/1.1" 200 OK
jaeger-1 | {"level":"info","ts":1735551726.389733,"caller":"flags/service.go:128","msg":"Shutting down"}
jaeger-1 | {"level":"info","ts":1735551726.3897798,"caller":"healthcheck/handler.go:118","msg":"Health Check state change","status":"unavailable"}
jaeger-1 | {"level":"info","ts":1735551726.3897905,"caller":"app/agent.go:82","msg":"shutting down agent's HTTP server","addr":"[::]:5778"}
grafana-1 | logger=server t=2024-12-30T09:42:06.390581959Z level=info msg="Shutdown started" reason="System signal: terminated"
jaeger-1 | {"level":"info","ts":1735551726.3898425,"caller":"app/agent.go:65","msg":"agent's http server exiting"}
grafana-1 | logger=tracing t=2024-12-30T09:42:06.390666459Z level=info msg="Closing tracing"
jaeger-1 | {"level":"info","ts":1735551726.3899891,"caller":"[email protected]/clientconn.go:544","msg":"[core] [Channel #4]Channel Connectivity change to SHUTDOWN"}
prometheus-1 | time=2024-12-30T09:42:06.391Z level=WARN source=main.go:1023 msg="Received an OS signal, exiting gracefully..." signal=terminated
grafana-1 | logger=ticker t=2024-12-30T09:42:06.390756543Z level=info msg=stopped last_tick=2024-12-30T09:42:00Z
jaeger-1 | {"level":"info","ts":1735551726.3900037,"caller":"[email protected]/resolver_wrapper.go:100","msg":"[core] [Channel #4]Closing the name resolver"}
prometheus-1 | time=2024-12-30T09:42:06.391Z level=INFO source=main.go:1051 msg="Stopping scrape discovery manager..."
grafana-1 | logger=grafana-apiserver t=2024-12-30T09:42:06.390867501Z level=info msg="StorageObjectCountTracker pruner is exiting"
jaeger-1 | {"level":"info","ts":1735551726.3900118,"caller":"[email protected]/balancer_wrapper.go:145","msg":"[core] [Channel #4]ccBalancerWrapper: closing"}
prometheus-1 | time=2024-12-30T09:42:06.391Z level=INFO source=main.go:1065 msg="Stopping notify discovery manager..."
jaeger-1 | {"level":"info","ts":1735551726.3900228,"caller":"grpc/builder.go:120","msg":"Agent collector connection state change","dialTarget":"localhost:14250","status":"SHUTDOWN"}
prometheus-1 | time=2024-12-30T09:42:06.391Z level=INFO source=manager.go:182 msg="Stopping rule manager..." component="rule manager"
jaeger-1 | {"level":"info","ts":1735551726.3900316,"caller":"[email protected]/clientconn.go:1148","msg":"[core] [Channel #4]Channel deleted"}
prometheus-1 | time=2024-12-30T09:42:06.391Z level=INFO source=manager.go:192 msg="Rule manager stopped" component="rule manager"
jaeger-1 | {"level":"info","ts":1735551726.3900526,"caller":"[email protected]/server.go:816","msg":"[core] [Server #2 ListenSocket #5]ListenSocket deleted"}
prometheus-1 | time=2024-12-30T09:42:06.391Z level=INFO source=main.go:1102 msg="Stopping scrape manager..."
jaeger-1 | {"level":"info","ts":1735551726.3900685,"caller":"healthcheck/handler.go:118","msg":"Health Check state change","status":"unavailable"}
prometheus-1 | time=2024-12-30T09:42:06.391Z level=INFO source=main.go:1061 msg="Notify discovery manager stopped"
jaeger-1 | {"level":"info","ts":1735551726.390079,"caller":"handler/otlp_receiver.go:70","msg":"OTLP receiver status change","status":"StatusStopping"}
prometheus-1 | time=2024-12-30T09:42:06.391Z level=INFO source=main.go:1047 msg="Scrape discovery manager stopped"
jaeger-1 | {"level":"info","ts":1735551726.390098,"caller":"[email protected]/server.go:816","msg":"[core] [Server #3 ListenSocket #6]ListenSocket deleted"}
prometheus-1 | time=2024-12-30T09:42:06.393Z level=INFO source=main.go:1094 msg="Scrape manager stopped"
prometheus-1 | time=2024-12-30T09:42:06.395Z level=INFO source=notifier.go:702 msg="Stopping notification manager..." component=notifier
prometheus-1 | time=2024-12-30T09:42:06.395Z level=INFO source=notifier.go:409 msg="Draining any remaining notifications..." component=notifier
prometheus-1 | time=2024-12-30T09:42:06.395Z level=INFO source=notifier.go:415 msg="Remaining notifications drained" component=notifier
prometheus-1 | time=2024-12-30T09:42:06.395Z level=INFO source=notifier.go:345 msg="Notification manager stopped" component=notifier
prometheus-1 | time=2024-12-30T09:42:06.395Z level=INFO source=main.go:1372 msg="Notifier manager stopped"
prometheus-1 | time=2024-12-30T09:42:06.395Z level=INFO source=main.go:1384 msg="See you next time!"
weather_forecast_service-1 | INFO: Shutting down
weather_forecast_service-1 | INFO: Waiting for application shutdown.
weather_forecast_service-1 | INFO: Application shutdown complete.
weather_forecast_service-1 | INFO: Finished server process [1]
jaeger-1 | {"level":"info","ts":1735551726.5379422,"caller":"transport/controlbuf.go:580","msg":"[transport] [server-transport 0x4000152000] loopyWriter exiting with error: second GOAWAY written and no active streams left to process"}
jaeger-1 | {"level":"info","ts":1735551726.5379713,"caller":"transport/http2_server.go:666","msg":"[transport] [server-transport 0x4000152000] Closing: EOF"}
jaeger-1 | {"level":"info","ts":1735551726.5382097,"caller":"handler/otlp_receiver.go:70","msg":"OTLP receiver status change","status":"StatusStopped"}
jaeger-1 | {"level":"info","ts":1735551726.5382867,"caller":"app/server.go:352","msg":"Closing HTTP server"}
jaeger-1 | {"level":"info","ts":1735551726.53832,"caller":"app/server.go:302","msg":"HTTP server stopped","port":16686,"addr":":16686"}
jaeger-1 | {"level":"info","ts":1735551726.538338,"caller":"app/server.go:357","msg":"Stopping gRPC server"}
jaeger-1 | {"level":"info","ts":1735551726.5383554,"caller":"[email protected]/server.go:816","msg":"[core] [Server #7 ListenSocket #8]ListenSocket deleted"}
jaeger-1 | {"level":"info","ts":1735551726.5383637,"caller":"app/server.go:317","msg":"GRPC server stopped","port":16685,"addr":":16685"}
jaeger-1 | {"level":"info","ts":1735551726.5383675,"caller":"app/server.go:366","msg":"Server stopped"}
jaeger-1 | {"level":"info","ts":1735551726.538404,"caller":"[email protected]/clientconn.go:544","msg":"[core] [Channel #1]Channel Connectivity change to SHUTDOWN"}
jaeger-1 | {"level":"info","ts":1735551726.5385106,"caller":"[email protected]/resolver_wrapper.go:100","msg":"[core] [Channel #1]Closing the name resolver"}
jaeger-1 | {"level":"info","ts":1735551726.538563,"caller":"[email protected]/balancer_wrapper.go:145","msg":"[core] [Channel #1]ccBalancerWrapper: closing"}
jaeger-1 | {"level":"info","ts":1735551726.5385783,"caller":"[email protected]/clientconn.go:1148","msg":"[core] [Channel #1]Channel deleted"}
jaeger-1 | {"level":"info","ts":1735551726.5386767,"caller":"flags/service.go:136","msg":"Shutdown complete"}

@salmanap (Contributor) commented Dec 30, 2024

@B1tMaster thanks for reporting this issue. From what you shared, it does look like you are running the weather demo app, and there are a couple of things happening here.

  1. First, we aren't correctly handling errors from arch_function when the intent can't be matched. We'll keep this issue open to investigate more deeply and make sure we handle these scenarios gracefully. @adilhafeez please have a look. @B1tMaster we'll probably need the docker logs for archgw to trace the request flow; you can attach them here, or if you feel more comfortable, share them on our Discord channel.

  2. Second, what seems to be happening is that your insurance-related prompts (to the weather forecast demo) are landing on the default prompt target (https://docs.archgw.com/concepts/prompt_target.html#default-targets), which means archgw couldn't match the prompt to any specific prompt_target and so forwards the user prompt (and history) to the default target. The default prompt target is essentially a "catch all" that gives developers a choice in how to handle prompts outside the bounds of the configured targets. In this scenario, the JSON returned from the default target and the last user prompt are sent to an LLM to summarize, which is why you see the weather agent responding about insurance. You can change this behavior in several ways, including setting auto_llm_dispatch_on_response to false and updating the system prompt to only handle weather-related prompts.
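The default-target flow described above can be sketched roughly as follows. This is a stdlib-only illustration of the behavior, not archgw's actual code; PromptTarget, route_prompt, and summarize_with_llm are hypothetical names (only the auto_llm_dispatch_on_response setting is real):

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical stand-ins for archgw's routing machinery; names are illustrative.
@dataclass
class PromptTarget:
    name: str
    matches: Callable[[str], bool]   # intent matcher for this target
    handle: Callable[[str], dict]    # endpoint call returning JSON

def summarize_with_llm(response_json: dict, prompt: str) -> str:
    # Stand-in for sending the target's JSON plus the last user prompt
    # to an LLM for summarization.
    return f"summary({response_json!r}) for {prompt!r}"

def route_prompt(prompt: str, targets: list, default_target: PromptTarget,
                 auto_llm_dispatch_on_response: bool = True):
    """Route to the first matching prompt_target, else the default target."""
    for target in targets:
        if target.matches(prompt):
            return target.handle(prompt)
    # No intent matched: the prompt falls through to the "catch all" target.
    response_json = default_target.handle(prompt)
    if auto_llm_dispatch_on_response:
        # This LLM summarization step is why a weather demo can end up
        # "talking about insurance" when given an unrelated prompt.
        return summarize_with_llm(response_json, prompt)
    return response_json
```

With auto_llm_dispatch_on_response set to false, the default target's raw JSON would come back instead of an LLM summary, which is one of the mitigations mentioned above.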


To make sure that the demo sticks to weather and that unrelated prompts are handled gracefully, I've opened a separate issue:

@adilhafeez adilhafeez added this to the 0.1.8 milestone Jan 8, 2025
Labels: None yet
Projects: None yet
Development: No branches or pull requests
3 participants