Some backend types may not be able to handle saving an unbounded number of actions to storage in a single go. In such cases, it's necessary to break a large batch of actions into smaller chunks to remain within certain limits of the store. For example, Azure Cosmos DB doesn't allow saving more than 100 documents at a time in a single batch transaction, and that transaction must not exceed 2 MB in size (see the Cosmos DB transactional batch limits). This is an issue in the Dapr Workflow project, as noted here: dapr/dapr#6544.
Even for stores which support saving unbounded numbers of records in a single transaction, it may be desirable to break those transactions into smaller chunks. One reason could be that large transactions could occupy too many database resources. Another is that large transactions could take a long time, increase the chance of failures, and cause work to need to be redone more often. In degenerate cases, this could cause workflows to get stuck, continuously consume huge amounts of resources, and continuously schedule the same work over and over.
Rather than making each backend implementation do its own chunking, the durabletask-go engine should support this directly. Depending on configuration, the orchestration engine can submit multiple calls to the backend, one for each logical chunk. The configuration for this, for example, could include MaxNewHistoryEventCount and MaxNewHistoryEventBytes settings. When the payload of an orchestration result is close to exceeding either of these limits, Backend.CompleteOrchestrationWorkItem is called to save the current chunk. The engine then continues building the next payload until a final call to Backend.CompleteOrchestrationWorkItem is made with the final set of updates.
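A minimal sketch of what that engine-side chunking loop could look like is below. The ChunkingOptions struct, the simplified HistoryEvent type, the reduced Backend interface (including the isFinal flag), and the flushInChunks/fakeBackend names are all hypothetical illustrations for this proposal, not existing durabletask-go APIs; only the MaxNewHistoryEventCount, MaxNewHistoryEventBytes, and CompleteOrchestrationWorkItem names come from the description above, and the real backend interface takes different arguments.

```go
package main

import (
	"context"
	"fmt"
)

// ChunkingOptions mirrors the configuration knobs proposed above.
type ChunkingOptions struct {
	MaxNewHistoryEventCount int // max number of new history events per backend call
	MaxNewHistoryEventBytes int // max total payload bytes per backend call
}

// HistoryEvent is a stand-in for the engine's history event type.
type HistoryEvent struct {
	Payload []byte
}

// Backend is a simplified stand-in for the real backend interface; only the
// call relevant to this proposal is shown, with an added isFinal flag so the
// backend can tell intermediate chunks from the final set of updates.
type Backend interface {
	CompleteOrchestrationWorkItem(ctx context.Context, newEvents []*HistoryEvent, isFinal bool) error
}

// flushInChunks saves newEvents in multiple backend calls, closing the current
// chunk whenever adding the next event would exceed either configured limit.
func flushInChunks(ctx context.Context, be Backend, opts ChunkingOptions, newEvents []*HistoryEvent) error {
	var chunk []*HistoryEvent
	var chunkBytes int

	for i, e := range newEvents {
		overCount := opts.MaxNewHistoryEventCount > 0 && len(chunk)+1 > opts.MaxNewHistoryEventCount
		overBytes := opts.MaxNewHistoryEventBytes > 0 && chunkBytes+len(e.Payload) > opts.MaxNewHistoryEventBytes

		if len(chunk) > 0 && (overCount || overBytes) {
			// Save the current chunk as an intermediate (non-final) update.
			if err := be.CompleteOrchestrationWorkItem(ctx, chunk, false); err != nil {
				return err
			}
			chunk, chunkBytes = nil, 0
		}

		chunk = append(chunk, e)
		chunkBytes += len(e.Payload)

		if i == len(newEvents)-1 {
			// The last call carries the final set of updates.
			return be.CompleteOrchestrationWorkItem(ctx, chunk, true)
		}
	}
	return nil
}

// fakeBackend just counts calls to show how a large batch gets split.
type fakeBackend struct{ calls int }

func (f *fakeBackend) CompleteOrchestrationWorkItem(_ context.Context, events []*HistoryEvent, isFinal bool) error {
	f.calls++
	fmt.Printf("call %d: %d events, final=%v\n", f.calls, len(events), isFinal)
	return nil
}

func main() {
	// 250 events of 1 KB each with Cosmos-DB-like limits (100 docs, 2 MB)
	// would be saved in three calls: 100 + 100 + 50 (final).
	events := make([]*HistoryEvent, 250)
	for i := range events {
		events[i] = &HistoryEvent{Payload: make([]byte, 1024)}
	}
	opts := ChunkingOptions{MaxNewHistoryEventCount: 100, MaxNewHistoryEventBytes: 2 << 20}
	_ = flushInChunks(context.Background(), &fakeBackend{}, opts, events)
}
```

Keeping this loop in the engine means each backend only has to describe its limits (or accept whatever continuation signal the real interface ends up exposing in place of the isFinal flag assumed here), rather than re-implementing the splitting logic itself.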