diff --git a/docs/docs/en/guide/task/flink.md b/docs/docs/en/guide/task/flink.md
index b57606d4d16e..b30af03efccb 100644
--- a/docs/docs/en/guide/task/flink.md
+++ b/docs/docs/en/guide/task/flink.md
@@ -37,7 +37,7 @@ Flink task type, used to execute Flink programs. For Flink nodes:
 | Parallelism | Used to set the degree of parallelism for executing Flink tasks. |
 | Yarn queue | Used to set the yarn queue, use `default` queue by default. |
 | Main program parameters | Set the input parameters for the Flink program and support the substitution of custom parameter variables. |
-| Optional parameters | Support `--jar`, `--files`,` --archives`, `--conf` format. |
+| Optional parameters | Set the Flink command options, such as `-D`, `-C`, `-yt`. |
 | Custom parameter | It is a local user-defined parameter for Flink, and will replace the content with `${variable}` in the script. |
 
 ## Task Example
diff --git a/docs/docs/en/guide/task/spark.md b/docs/docs/en/guide/task/spark.md
index 88e6c619435e..3e0f83b253cf 100644
--- a/docs/docs/en/guide/task/spark.md
+++ b/docs/docs/en/guide/task/spark.md
@@ -35,7 +35,7 @@ Spark task type for executing Spark application. When executing the Spark task,
 | Executor memory size | Set the size of Executor memories, which can be set according to the actual production environment. |
 | Yarn queue | Set the yarn queue, use `default` queue by default. |
 | Main program parameters | Set the input parameters of the Spark program and support the substitution of custom parameter variables. |
-| Optional parameters | Support `--jars`, `--files`,` --archives`, `--conf` format. |
+| Optional parameters | Set the Spark command options, such as `--jars`, `--files`, `--archives`, `--conf`. |
 | Resource | Appoint resource files in the `Resource` if parameters refer to them. |
 | Custom parameter | It is a local user-defined parameter for Spark, and will replace the content with `${variable}` in the script. |
 | Predecessor task | Selecting a predecessor task for the current task, will set the selected predecessor task as upstream of the current task. |
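
Note on the two hunks above: the flags named in the new wording are standard Flink CLI options (`-D` for dynamic properties, `-C`/`--classpath`, `-yt`/`--yarnship`) and `spark-submit` options (`--jars`, `--files`, `--archives`, `--conf`). Purely as a hedged illustration of what a user might type into the Optional parameters field, the paths and config keys below are made-up examples and are not part of this patch:

```
# Flink task, "Optional parameters" field (hypothetical values)
-Dtaskmanager.numberOfTaskSlots=4 -C file:///opt/libs/my-udf.jar -yt /opt/shipfiles

# Spark task, "Optional parameters" field (hypothetical values)
--jars hdfs:///libs/extra.jar --files hdfs:///conf/app.properties --conf spark.executor.memoryOverhead=1g
```
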
diff --git a/docs/docs/zh/guide/task/flink.md b/docs/docs/zh/guide/task/flink.md
index b15078e1916a..478e07fae874 100644
--- a/docs/docs/zh/guide/task/flink.md
+++ b/docs/docs/zh/guide/task/flink.md
@@ -37,7 +37,7 @@ Flink 任务类型,用于执行 Flink 程序。对于 Flink 节点:
 | 并行度 | 用于设置执行 Flink 任务的并行度 |
 | Yarn 队列 | 用于设置 Yarn 队列,默认使用 default 队列 |
 | 主程序参数 | 设置 Flink 程序的输入参数,支持自定义参数变量的替换 |
-| 选项参数 | 支持 `--jar`、`--files`、`--archives`、`--conf` 格式 |
+| 选项参数 | 设置 Flink 命令的选项参数,例如 `-D`、`-C`、`-yt` |
 | 自定义参数 | 是 Flink 局部的用户自定义参数,会替换脚本中以 ${变量} 的内容 |
 
 ## 任务样例
diff --git a/docs/docs/zh/guide/task/spark.md b/docs/docs/zh/guide/task/spark.md
index 641b6dcbf942..a392f5582694 100644
--- a/docs/docs/zh/guide/task/spark.md
+++ b/docs/docs/zh/guide/task/spark.md
@@ -34,7 +34,7 @@ Spark 任务类型用于执行 Spark 应用。对于 Spark 节点,worker 支
 - Executor 内存数:用于设置 Executor 内存数,可根据实际生产环境设置对应的内存数。
 - Yarn 队列:用于设置 Yarn 队列,默认使用 default 队列。
 - 主程序参数:设置 Spark 程序的输入参数,支持自定义参数变量的替换。
-- 选项参数:支持 `--jars`、`--files`、`--archives`、`--conf` 格式。
+- 选项参数:设置 Spark 命令的选项参数,例如 `--jars`、`--files`、`--archives`、`--conf`。
 - 资源:如果其他参数中引用了资源文件,需要在资源中选择指定。
 - 自定义参数:是 Spark 局部的用户自定义参数,会替换脚本中以 ${变量} 的内容。
 
diff --git a/dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java b/dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
index 012cc8149a6f..b558d6cf3745 100644
--- a/dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
+++ b/dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
@@ -490,6 +490,16 @@ public void releaseTaskGroup(TaskInstance taskInstance) {
         } else {
             ProcessInstance processInstance =
                     processService.findProcessInstanceById(nextTaskInstance.getProcessInstanceId());
+            if (processInstance == null) {
+                log.error("WorkflowInstance is null, cannot wakeup, processInstanceId: {}",
+                        nextTaskInstance.getProcessInstanceId());
+                return;
+            }
+            if (processInstance.getHost() == null || Constants.NULL.equals(processInstance.getHost())) {
+                log.warn("The next WorkflowInstance: {} host is null, no need to wakeup, maybe it is in failover",
+                        processInstance);
+                return;
+            }
             ILogicTaskInstanceOperator taskInstanceOperator = SingletonJdkDynamicRpcClientProxyFactory
                     .getProxyClient(processInstance.getHost(), ILogicTaskInstanceOperator.class);
             taskInstanceOperator.wakeupTaskInstance(
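
Note on the Java hunk above: the new guard skips the RPC wakeup when the next workflow instance is missing or its host is unset. Below is a minimal, self-contained sketch of that check, assuming `Constants.NULL` is the literal string `"null"` used as a placeholder host; `WakeupGuard` and `canWakeup` are hypothetical names invented for illustration and are not part of the DolphinScheduler codebase:

```java
// Standalone sketch of the host guard added in WorkflowExecuteRunnable;
// this class is illustrative only, not actual DolphinScheduler code.
public final class WakeupGuard {

    // Assumed to mirror Constants.NULL: the placeholder stored when an
    // instance has no live master host (e.g. while it is failing over).
    private static final String NULL_HOST = "null";

    static boolean canWakeup(String host) {
        // Reject both a missing reference and the "null" placeholder before
        // attempting the RPC wakeup call against that host.
        return host != null && !NULL_HOST.equals(host);
    }

    public static void main(String[] args) {
        System.out.println(canWakeup(null));               // false -> skip wakeup
        System.out.println(canWakeup("null"));             // false -> instance may be in failover
        System.out.println(canWakeup("192.168.1.5:5678")); // true  -> safe to call the operator
    }
}
```
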