An application server running in Docker with an active Docker health check sometimes gets restarted (due to memory leaks in the JVM). The restart is a regular application server shutdown, not a plain kill -9. In that case Ebean is shut down and often throws a RejectedExecutionException from commits that may still be executing on the server.
I am wondering if that is a race condition that might need to be fixed. I imagine the code flow looks like this:
commit()
db.shutdown()
notifyCommit(), which submits to an already terminated scheduler, causing the exception
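The flow above can be reproduced with a minimal sketch using plain JDK executors (not Ebean's actual classes): submitting a task to a ScheduledThreadPoolExecutor after shutdown() has been called triggers the default AbortPolicy and throws exactly this exception.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ScheduledExecutorService;

public class ShutdownRace {
    public static void main(String[] args) {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);

        pool.shutdown(); // simulates db.shutdown() terminating the background pool

        try {
            // simulates notifyCommit() submitting to the already terminated scheduler
            pool.submit(() -> System.out.println("notify L2 cache"));
        } catch (RejectedExecutionException e) {
            System.out.println("rejected: " + e.getClass().getSimpleName());
        }
    }
}
```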
Maybe the exception should be handled in Ebean if it is expected to be thrown during shutdown.
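One possible way to handle it, sketched here as a hypothetical wrapper (this is not Ebean's actual code): catch the RejectedExecutionException and downgrade it when the pool is known to be shutting down, since the rejection is expected in that case.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class SafeSubmit {
    // Hypothetical helper: swallow the rejection only when it is the
    // expected consequence of a shutdown already being in progress.
    static void safeSubmit(ExecutorService pool, Runnable task) {
        try {
            pool.submit(task);
        } catch (RejectedExecutionException e) {
            if (pool.isShutdown()) {
                // expected during shutdown; a debug log would suffice here
                System.out.println("ignored rejection during shutdown");
            } else {
                throw e; // a real problem, rethrow
            }
        }
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        pool.shutdown();
        safeSubmit(pool, () -> {});
    }
}
```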
ERROR i.e.s.transaction.TransactionManager NotifyOfCommit failed. L2 Cache potentially not notified.
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@1a6dea8d[Not completed, task = java.util.concurrent.Executors$RunnableAdapter@370f053c[Wrapped task = io.ebeaninternal.server.executor.DefaultBackgroundExecutor$$Lambda$708/0x0000000801193800@62db3d40]] rejected from io.ebeaninternal.server.executor.DaemonScheduleThreadPool@7b14312[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 8564]
	at java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2057)
	at java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:827)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:340)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:562)
	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor.submit(ScheduledThreadPoolExecutor.java:715)
	at io.ebeaninternal.server.executor.DefaultBackgroundExecutor.submit(DefaultBackgroundExecutor.java:75)
	at io.ebeaninternal.server.executor.DefaultBackgroundExecutor.execute(DefaultBackgroundExecutor.java:80)
	at io.ebeaninternal.server.transaction.TransactionManager.notifyOfCommit(TransactionManager.java:452)
	at io.ebeaninternal.server.transaction.JdbcTransaction.notifyCommit(JdbcTransaction.java:934)
	at io.ebeaninternal.server.transaction.JdbcTransaction.postCommit(JdbcTransaction.java:1005)
	at io.ebeaninternal.server.transaction.JdbcTransaction.flushCommitAndNotify(JdbcTransaction.java:999)
	at io.ebeaninternal.server.transaction.JdbcTransaction.commit(JdbcTransaction.java:1058)
	at io.ebeaninternal.api.ScopeTrans.commitTransaction(ScopeTrans.java:140)
	at io.ebeaninternal.api.ScopedTransaction.commit(ScopedTransaction.java:110)
Hmmm interesting. Ebean's database.shutdown() ought to wait for the executor service to shut down properly, waiting for submitted tasks to complete with a max wait time. This stack trace looks more like a commit() occurring after shutdown has been initiated.
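The orderly-shutdown behavior described above can be sketched with standard JDK executor semantics (this is an illustration of the pattern, not Ebean's internal implementation): shutdown() stops accepting new tasks while awaitTermination() bounds the wait for in-flight work.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class OrderlyShutdown {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> System.out.println("in-flight task runs to completion"));

        pool.shutdown(); // stop accepting new tasks, but finish queued ones
        // Wait for submitted work to complete, with a max wait time.
        boolean done = pool.awaitTermination(30, TimeUnit.SECONDS);
        System.out.println("terminated: " + done);
    }
}
```

Note that this pattern does not help if a commit() is first submitted after shutdown() has already been called, which is what the stack trace suggests.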
Just for information, we get the same error randomly (rarely) when sending a SIGTERM to our application in Docker.
Ebean was throwing this error while our requests were still in flight and the server was waiting for them to finish before shutting down.
Note that our application uses the Play Ebean integration in its latest available version.
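For a SIGTERM scenario like this, the ordering of the shutdown steps is the crux: if the web server drains in-flight requests (and their commits) before Ebean shuts down, no commit can reach an already terminated scheduler. A hypothetical sketch of that ordering as a JVM shutdown hook (the step names are illustrative, not Play's or Ebean's actual API):

```java
public class SigtermOrder {
    public static void main(String[] args) {
        // Hypothetical ordering for a SIGTERM handler: drain traffic
        // BEFORE shutting Ebean down, so commits never race the
        // termination of the background executor.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            System.out.println("1. stop accepting new HTTP requests");
            System.out.println("2. wait for in-flight requests (and their commits)");
            System.out.println("3. shut down Ebean last");
        }));
        System.out.println("running; SIGTERM or normal exit triggers the hook");
    }
}
```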