We are currently using the latest version (i.e. docker image tag d0e2505), with the deployment configured using the manifests defined in this repo, and we noticed that the operator crashed without recovering automatically. The last logs emitted by the operator were the following:
2022-11-01 05:42:07,399 WARN [io.fab.kub.cli.dsl.int.WatcherWebSocketListener] (OkHttp https://172.20.0.1/...) Exec Failure java.io.EOFException null
2022-11-01 05:42:07,699 WARN [io.fab.kub.cli.dsl.int.WatcherWebSocketListener] (OkHttp https://172.20.0.1/...) Exec Failure java.io.EOFException null
2022-11-01 05:42:08,402 WARN [io.fab.kub.cli.dsl.int.WatcherWebSocketListener] (OkHttp https://172.20.0.1/...) Exec Failure java.net.ConnectException Failed to connect to /172.20.0.1:443
2022-11-01 05:42:08,700 WARN [io.fab.kub.cli.dsl.int.WatcherWebSocketListener] (OkHttp https://172.20.0.1/...) Exec Failure java.net.ConnectException Failed to connect to /172.20.0.1:443
2022-11-01 05:42:10,404 WARN [io.fab.kub.cli.dsl.int.WatcherWebSocketListener] (OkHttp https://172.20.0.1/...) Exec Failure java.net.ConnectException Failed to connect to /172.20.0.1:443
2022-11-01 05:42:10,702 WARN [io.fab.kub.cli.dsl.int.WatcherWebSocketListener] (OkHttp https://172.20.0.1/...) Exec Failure java.net.ConnectException Failed to connect to /172.20.0.1:443
Exception in thread "OkHttp Dispatcher" java.lang.NullPointerException
at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:684)
at io.fabric8.kubernetes.client.dsl.internal.WatcherWebSocketListener.onFailure(WatcherWebSocketListener.java:69)
at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:570)
at okhttp3.internal.ws.RealWebSocket$1.onResponse(RealWebSocket.java:199)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:174)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
We are wondering if there is a way to make the operator more resilient so that it recovers without manual intervention; it had to be restarted manually in order to recover and re-sync. The pod was not restarted automatically even though health checks are configured.
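As a hedged sketch of one possible mitigation (not taken from this repo's manifests): since the existing health checks did not detect the dead watcher, the liveness probe would need to target a health endpoint that actually turns unhealthy when the watch connection is lost, so the kubelet restarts the pod automatically. The path and port below are assumptions — `/q/health/live` is the conventional Quarkus/SmallRye liveness path, which the abbreviated logger names in the logs suggest, but it only helps if the operator wires watcher state into that check:

```
# Hypothetical liveness probe for the operator Deployment.
# path/port are assumptions and must match a health endpoint that
# reports the watcher's connection state, not just process liveness.
livenessProbe:
  httpGet:
    path: /q/health/live
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 3
```

If the health endpoint only reflects process liveness, a probe like this would still pass while the watcher is broken, which matches the behavior described above.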