Releases: infiniticio/infinitic
v0.16.1
v0.16.0
🚀 New Features
- Batch processing of Tasks: Enables efficient handling of operations that benefit from bulk processing, such as sending emails, updating databases, or calling external APIs. Tasks with the same `batchKey` in their metadata are processed together in a single batch.
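As an illustration of the concept only (the types and helper below are hypothetical, not Infinitic's API), messages carrying the same `batchKey` metadata entry can be grouped into a single batch before processing:

```java
import java.util.*;
import java.util.stream.*;

// Hypothetical sketch (not Infinitic's API): grouping task messages that share
// the same "batchKey" metadata entry so they can be processed as one batch.
public class BatchSketch {
    record TaskMessage(String taskName, Map<String, String> meta) {}

    static Map<String, List<TaskMessage>> groupByBatchKey(List<TaskMessage> messages) {
        // messages without a batchKey fall into the "" group
        return messages.stream()
                .collect(Collectors.groupingBy(m -> m.meta().getOrDefault("batchKey", "")));
    }

    public static void main(String[] args) {
        List<TaskMessage> msgs = List.of(
                new TaskMessage("sendEmail", Map.of("batchKey", "emails")),
                new TaskMessage("sendEmail", Map.of("batchKey", "emails")),
                new TaskMessage("updateDb", Map.of()));
        System.out.println(groupByBatchKey(msgs).get("emails").size()); // prints 2
    }
}
```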
- In-Memory implementation for testing: The internal message exchange is now fully abstracted, paving the way for potential future support of alternative message brokers. This version introduces an in-memory implementation, ideal for testing purposes, allowing tests to run without the need for a Pulsar cluster.
- Enhanced Event Listener: Infinitic introduces a more powerful way to monitor internal events. This feature can be used to trigger external actions or send events to analytics databases for dashboard creation. The event listener now automatically detects existing Services and Workflows and listens to their events. All events are now processed in batches for improved efficiency.
- Improved Development Logging: To address the challenges of debugging distributed systems, Infinitic now offers a streamlined way to view CloudEvents during development. Simply set the log level to DEBUG for these classes:
  - `io.infinitic.cloudEvents.WorkflowStateEngine.$workflowName`
  - `io.infinitic.cloudEvents.WorkflowExecutor.$workflowName`
  - `io.infinitic.cloudEvents.ServiceExecutor.$serviceName`

  This enhancement provides greater visibility into the system's internal workings, making it easier to identify and resolve issues.
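For example, with Logback (the workflow name `MyWorkflow` and service name `MyService` are placeholders), these loggers can be enabled in `logback.xml`:

```xml
<!-- Enable DEBUG-level CloudEvents logging for a given workflow and service -->
<logger name="io.infinitic.cloudEvents.WorkflowStateEngine.MyWorkflow" level="DEBUG"/>
<logger name="io.infinitic.cloudEvents.WorkflowExecutor.MyWorkflow" level="DEBUG"/>
<logger name="io.infinitic.cloudEvents.ServiceExecutor.MyService" level="DEBUG"/>
```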
🚨 Breaking Changes
- Worker configuration: A new documentation page explains how to set up workers. Please refer to it for details on the following changes:
- State Engines and Tag Engines settings must now be explicitly defined. Implicit settings are no longer supported.
- Config builder setters have been standardized to use `set` + the variable name (e.g., `setMaxPoolSize` instead of `maxPoolSize`).
- Time-related settings now use consistent suffixes: "Seconds" replaces "InSeconds", "Minutes" replaces "InMinutes", etc.
- Static methods for Clients and Workers configuration now use a "Yaml" prefix: `fromYamlResource`, `fromYamlString`, `fromYamlFile`.
- Storage:
- Explicit storage configuration is now required. Default values for local development have been removed to prevent confusion in production environments.
- `user` parameter renamed to `username` in `MySQLConfig`, `PostgresConfig`, and `RedisConfig`.
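A minimal sketch of an explicit storage configuration after this rename (key names other than `username` are assumptions based on typical MySQL settings; consult the configuration documentation):

```yaml
storage:
  mysql:
    host: localhost
    port: 3306
    username: root      # formerly 'user'
    password: "****"
    database: infinitic
```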
- Pulsar:
  - Pulsar configuration moved under the `transport` keyword.
  - Pulsar client config now under the `client` keyword, with expanded settings.
  - Policies field names refactored for consistency; `delayedTTLInSeconds` renamed to `timerTTLSeconds` in the Pulsar Policies configuration.
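A hedged sketch of the new layout (the exact nested keys are assumptions; consult the configuration documentation for the authoritative structure):

```yaml
transport:
  pulsar:
    client:                 # expanded Pulsar client settings live here
      serviceUrl: pulsar://localhost:6650
    tenant: infinitic
    namespace: dev
    policies:
      timerTTLSeconds: 86400   # formerly delayedTTLInSeconds
```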
- Dashboard:
  - All settings must now be under the `dashboard` keyword.
- `ExponentialBackoffRetryPolicy` class renamed to `WithExponentialBackoffRetry`.
- CloudEvents updates:
- CloudEvents listeners are no longer created under services and workflows.
- Format changes for improved clarity and consistency.
- Sources now clearly differentiate between executor and stateEngine.
- Workflow version defaults to 0 when undefined (previously null).
- "start" command renamed to "dispatch".
- "ended" event renamed to "completed".
🔬 Improvements
- More reliable client deletion when topic is closing
- Improved implementation of consumers to ensure all messages are processed, even in case of errors or shutdown
- Lib updates:
- Kotlin: 2.0.0 -> 2.0.10
- CloudEvents: 3.0.0 -> 4.0.1
- Jackson: 2.17.1 -> 2.17.2
- Uuid: 5.0.0 -> 5.1.0
- Kotest: 5.9.0 -> 5.9.1
- kotlinx-serialization-json: 1.6.3 -> 1.7.1
- TestContainers: 1.19.8 -> 1.20.1
- Mockk: 1.13.11 -> 1.13.12
- Pulsar: 3.0.4 -> 3.0.7
- Slf4j: 2.0.13 -> 2.0.16
- Logging: 6.0.9 -> 7.0.0
- Compress: 1.26.1 -> 1.27.1
🪲 Bug Fixes
- In workflows, if a property is present in the workflow history but disappeared from the workflow class, a warning is now emitted. Previously an error was thrown.
- When using `dispatchAsync`, multiple successive calls would all use the arguments of the last call
- Fixed the use of schemas for Postgres
v0.15.0
Please terminate all running workers before upgrading to version 0.15.0. This is crucial because versions earlier than 0.15.0 are unable to deserialize messages produced by the new version 0.15.0.
🚀 New Features
- Add support for JsonView
- Workers can now be created from a YAML string, through the `fromConfigYaml` static method
- All configuration objects (`PulsarConfig`, `MySQLConfig`, `RedisConfig`, `PostgresConfig`, `CaffeineConfig`...) can now be manually created through builders
- `serviceDefault`, `workflowDefault`, and `defaultStorage` can now be manually registered in Workers
🚨 Breaking Changes
- Serialization:
  - Serialization and deserialization of arguments are now conducted according to the types defined in the interfaces. This contrasts with the prior implementation, which performed these operations based on the actual types of the objects involved.
  - The revamped approach broadens applicability and resolves the concerns cited in issue #80. Nonetheless, in situations involving polymorphism, the responsibility now lies with the user to provide deserializers with adequate information.
- Worker configuration file:
  - The configuration parameters `brokerServiceUrl` and `webServiceUrl` must now be explicitly specified in `PulsarConfig`. Previously, the system would default to the settings of a local Pulsar cluster when these values were not provided. Despite its convenience in local development, this implicit default behavior could lead to complications when deploying the project to a production environment.
  - In the configurations for `workflows` and `workflowDefault`, the property `workflowEngine` has been renamed to `stateEngine`. We believe this new terminology more accurately reflects the function of this property.
  - The `cache` configuration now needs to be nested within the `storage` configuration. This update aligns logically with the intended usage, as the cache is used exclusively for storage-related functions.
  - The execution policy for tasks has been revised: by default, failed tasks are no longer retried automatically. To enable retries, users must explicitly configure a retry policy. We believe this change will help alleviate confusion for new users, who may be perplexed by tasks appearing to fail after 10 minutes due to the previously implicit retry mechanism.
  - All config files have been renamed with a `Config` suffix.
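A sketch of a 0.15.0 worker configuration reflecting these changes (the nested keys below `storage` and `stateEngine` are assumptions; only the renamed keys themselves are taken from the notes above):

```yaml
pulsar:
  brokerServiceUrl: pulsar://localhost:6650   # now mandatory
  webServiceUrl: http://localhost:8080        # now mandatory
  tenant: infinitic
  namespace: dev
storage:
  redis:
    host: localhost
    port: 6379
  cache:               # cache is now nested under storage
    caffeine:
      maximumSize: 10000
workflowDefault:
  stateEngine:         # formerly workflowEngine
    concurrency: 10
```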
- Worker manual registration of Services and Workflows:
  - The `registerService` method is now `registerServiceExecutor`, for consistency.
  - `registerWorkflowExecutor` now takes a factory as a parameter instead of a class name.
- Workflow behavior: the use of `Deferred` objects in workflow properties or arguments is now prohibited. Should a `Deferred` object appear in these contexts, an explicit exception is thrown. This measure was put in place due to issues identified in certain edge cases of the previous implementation. More importantly, allowing the transmission of `Deferred` objects to other workflows seems wrong.
🔬 Improvements
- Add compression info to logging at worker start
- Add log message for existing workflow with same customId tag
- Improve performance by adding a cache to the Method and Class introspection that occurs for each task and workflow execution
- Add more tests to ensure backward compatibility
🪲 Bug Fixes
v0.14.1
🚀 New Features
- Users can now update the object mapper for fine-grained control of serialization.
- Fix #233 - Introduced `keySetTable` and `keyValueTable` settings for storage configuration, allowing custom names for storage tables.
🚨 Breaking Changes
- Renamed `Task.set` to `Task.setContext`.
- `workflowId`, `workflowName`, `methodId`, `methodName`, `tags`, and `meta` are now static properties of the `Workflow` class.
🔬 Improvements
- Enhanced the `and` operator for `Deferred`.
🪲 Bug Fixes
- Fix #239
v0.14.0
🚀 New Features
- Added PostgreSQL support for storage (tested on PG 16).
- Expanded storage options for MySQL with the following `HikariConfig` properties: `minimumIdle`, `idleTimeout`, `connectionTimeout`, and `maxLifetime`.
🚨 Breaking Changes
- In storage configuration, the `maxPoolSize` option has been replaced by `maximumPoolSize` to ensure consistency with `HikariConfig` properties.
🔬 Improvements
- To improve the detection of configuration issues, a warning is now emitted the first time a message is produced on a topic without a consumer.
- Resources are now properly closed when worker initialization fails.
- MySQL tests are now done on version 8.3.
- Updated the versions of the following dependencies:
  - Avro (1.10.0 to 1.10.1)
v0.13.3
🔬 Improvements
- Can use immutable metadata when dispatching tasks
- Use Kotlin 2.0.0
- Bump library versions:
  - `kotlinx-coroutines` from 1.8.0 to 1.8.1
  - `fasterxml.jackson` from 2.17.0 to 2.17.1
  - `kotest` from 5.8.1 to 5.9.0
  - `testcontainers` from 1.19.7 to 1.19.8
  - `mockk` from 1.13.10 to 1.13.11
  - `avro4k` from 1.10.0 to 1.10.1
  - `slf4j` from 2.0.12 to 2.0.13
  - `kotlin-logging-jvm` from 6.0.3 to 6.0.9
- Target Java version 17
Full Changelog: v0.13.2...v0.13.3
v0.13.2
v0.13.1
🚨 Breaking changes
- The new default setting for cache is no cache
- Workflow and Service names are now escaped in topic's name - it's a breaking change only in the unlikely situation where you have special characters in those names
🔬 Improvements
- Pulsar version is now 3.0.4 (from 2.11.2)
- Workflow Tasks are processed on a key-shared subscription. This allows new workflow versions to be deployed continuously.
- Improve test coverage for tags
- Improve test coverage for infinitic-transport-pulsar module
- Client's topics are now deleted when clients are interrupted.
- Client's topics are not recreated by producers if already deleted
- Bump version of dependencies:
- CloudEvents (2.5.0 to 3.0.0)
- Jackson (2.15.3 to 2.17.0)
- java-uuid-generator (4.3.0 to 5.0.0)
- Kotest (5.8.0 to 5.8.1)
- TestContainers (1.19.5 to 1.19.7)
- Mockk (1.13.8 to 1.13.10).
- commons-compress (1.25.0 to 1.26.1)
🪲 Bug fixes
- Fix backward compatibility with 0.12.3 (in 0.13.0, some messages were wrongly discarded, leading to stuck workflows)
- Fix a bug introduced in 0.13.0 that led to the possible creation of multiple workflow instances with the same customId tag
- "none" cache setting now correctly means no cache; previously it was the default cache
Full Changelog: v0.13.0...v0.13.1
v0.13.0
DO NOT UPGRADE
This version contains a backward-compatibility issue that was fixed in version 0.13.1.
We recommend upgrading directly to version 0.13.1 from 0.12.3 or below.
🚀 New features
- CloudEvents (beta): Infinitic now exposes its events in CloudEvents JSON format. This allows users to build their own dashboards and logs, or even add hooks to specific events. Examples of exposed events are:
  - for methods of workflows: `startMethod`, `cancelMethod`, `methodCanceled`, `methodCompleted`, `methodFailed`, `methodTimedOut`
  - for tasks within workflows: `taskDispatched`, `taskCompleted`, `taskFailed`, `taskCanceled`, `taskTimedOut`
  - for workflows within workflows: `remoteMethodDispatched`, `remoteMethodCompleted`, `remoteMethodFailed`, `remoteMethodCanceled`, `remoteMethodTimedOut`
  - for the workflow executor itself (also called WorkflowTask): `executorDispatched`, `executorCompleted`, `executorFailed`

  Each event is accompanied by relevant data, such as the error details for a `taskFailed` event or the arguments for a `remoteMethodDispatched` event. This feature is currently in beta and may be refined based on user feedback.
- Delegated Tasks: In certain cases, a task cannot be processed directly by a worker; instead, the task invokes another system for processing, typically through an HTTP call. If the external system can process the task synchronously and return the output (or report a failure), the process works smoothly. However, if the external system cannot provide a synchronous response, the situation becomes ambiguous, leaving Infinitic without a clear indication of whether the task has completed or failed, nor the ability to retrieve the result. Starting with version 0.13.0, Infinitic introduces a "delegated task" feature. Enabled through an annotation on the task, it informs Infinitic that the method's completion does not signify the task's completion and that it should await asynchronous notification of the task's outcome. To support this functionality, a new `completeDelegatedTask` method has been added to the `InfiniticClient`.
- `InfiniticWorker` now offers new methods that allow programmatic registration of services and workflows, bypassing the need for configuration files. While initially used for internal testing, this feature can also be beneficial in scenarios where using configuration files is impractical.
🚨 Breaking changes
- The `context` property of the `Task` singleton, which was accessible during task execution, has been removed due to its redundancy with other properties.
- In workers:
  - the method `registerService` has been replaced by 2 methods: `registerServiceExecutor` and `registerServiceTagEngine`.
  - the method `registerWorkflow` has been replaced by 3 methods: `registerWorkflowExecutor`, `registerWorkflowTagEngine`, and `registerWorkflowStateEngine`.
- The following libraries are no longer exposed by Infinitic. If you were using them, you must now add them to your project's dependencies:
  - `org.jetbrains.kotlinx:kotlinx-serialization-json`
  - `com.jayway.jsonpath:json-path`
  - `com.sksamuel.hoplite:hoplite-core`
🔬 Improvements
- Infinitic has been updated to use UUID version 7. These are sortable UUIDs that include a timestamp, which is expected to enhance performance when used as primary keys in databases.
- Idempotency: In scenarios where hardware or network issues occur, the same task may be processed multiple times. Ultimately, it falls upon the user to ensure tasks are designed to be idempotent as required. Starting from version 0.13.0, the `taskId` can be reliably used as an idempotency key, because Infinitic will generate the same value for `taskId` even if the task creation process is executed repeatedly.
- Performance Improvement: Prior to version 0.13.0, initiating a workflow involved sending a message to the workflow engine, which would create an entry to store its state in the database and then send another message to commence the workflow execution, in order to identify and dispatch the first task. This task's information would be relayed back to the engine for dispatch. The drawback of this approach was evident during surges in workflow initiation (for example, 1 million starts), where Infinitic had to sequentially store 1 million state entries before beginning to process the first task, which could significantly delay the start of task processing in practical scenarios. Since version 0.13.0, the execution process has been optimized: the first task is processed immediately upon dispatch by all available workers, substantially reducing the "time to first execution."
- Worker Graceful Shutdown: Infinitic is designed to ensure no messages are lost and that workflow executions continue under any circumstances. However, prior to version 0.13.0, shutting down a worker could result in a significant number of duplicated messages or actions, because the worker could close while still sending multiple messages. Since version 0.13.0, workers attempt to complete any ongoing executions before shutting down, with a default grace period of 30 seconds. This duration can be adjusted using the new `shutdownGracePeriodInSeconds` setting in the worker configuration.
- Worker Quicker Start: Upon startup, a worker verifies the existence of the tenant, namespace, and necessary topics for the services and workflows it utilizes, creating them if necessary. Previously, this setup was performed sequentially; now it is executed in parallel, significantly reducing startup time, especially when a worker is responsible for a large number of tasks or workflows.
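The idempotency guarantee described above can be sketched as follows (the dedup store below is hypothetical, not Infinitic code; in practice it could be a unique database constraint on `taskId`):

```java
import java.util.*;

// Hypothetical sketch: using a stable taskId as an idempotency key,
// so that a task delivered twice performs its side effect only once.
public class IdempotencySketch {
    final Set<String> processed = new HashSet<>(); // stand-in for a unique DB key on taskId
    int sideEffects = 0;

    void process(String taskId) {
        // Set.add returns false if the key was already present: duplicate delivery
        if (!processed.add(taskId)) return;
        sideEffects++; // the real side effect (e.g., sending an email) runs once
    }

    public static void main(String[] args) {
        IdempotencySketch s = new IdempotencySketch();
        s.process("task-123");
        s.process("task-123"); // duplicate delivery caused by a retry
        System.out.println(s.sideEffects); // prints 1
    }
}
```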
🪲 Bug fixes
- Fix false warning about topics being partitioned
- Fixed the behavior of `getSecondsBeforeRetry`, which defines the task retry strategy. When the value is less than or equal to 0, retries now occur immediately. Previously, no retry would be attempted in this scenario.
- If the `methodId` is not specified when using the `CompleteTimers` client method, all timers of the workflow are now completed. Previously, only the timers on the main method were completed in the absence of a specified `methodId`.
v0.12.3
🚀 New features in the worker's configuration file
- A new configuration option `maxPoolSize` has been introduced to the MySQL storage configuration. This option allows you to specify the maximum number of connections in the connection pool.
- The `tagEngine` setting can now be configured under `serviceDefault`.
- The `tagEngine` and `workflowEngine` settings can now be configured under `workflowDefault`.
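A sketch of these options in a 0.12.3 worker configuration file (the surrounding keys and the `concurrency` values are assumptions for illustration):

```yaml
storage:
  mysql:
    maxPoolSize: 10     # new in 0.12.3: max connections in the pool
serviceDefault:
  tagEngine:
    concurrency: 5      # tagEngine now configurable here
workflowDefault:
  tagEngine:
    concurrency: 5      # and here
  workflowEngine:
    concurrency: 5
```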
🚨 Breaking changes
- The entries `service` and `workflow` in the worker configuration, which were used to establish default values for services and workflows, have been renamed to `serviceDefault` and `workflowDefault`, respectively.
🔬 Improvements
- Failure to check the tenant/namespace no longer triggers an error.