
feat!: implement grpc reconnect for inprocess mode #1150

Merged
merged 7 commits into open-feature:main from feat/grpc-reconnect-in-process on Jan 14, 2025

Conversation

@warber warber commented Jan 10, 2025

This PR

This PR leverages the newly introduced GrpcConnector for (re)connection handling, including in INPROCESS mode. As a result, reconnection behaviour is now fully managed by the built-in gRPC library capabilities, and custom connection-handling code (such as the backoff package) has been removed entirely. Additionally, minor enhancements have been made to the state-monitor code (ChannelMonitor), and test coverage has been improved.

With this PR, the Java flagd provider now conforms to the reconnect event handling specification in both RPC and in-process modes.
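For context, here is a minimal sketch of the kind of channel-state watching the built-in gRPC Java API offers (`ManagedChannel#getState` and `ManagedChannel#notifyWhenStateChanged`). The class name, callback shape, and port are illustrative assumptions, not the PR's actual ChannelMonitor:

```java
import io.grpc.ConnectivityState;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

import java.util.function.Consumer;

// Illustrative only: watches a ManagedChannel and invokes a callback on every state change.
// The real ChannelMonitor in this PR may be structured differently.
public class ChannelStateWatcher {

    private final ManagedChannel channel;
    private final Consumer<ConnectivityState> onStateChange;

    public ChannelStateWatcher(ManagedChannel channel, Consumer<ConnectivityState> onStateChange) {
        this.channel = channel;
        this.onStateChange = onStateChange;
    }

    /** Starts watching; notifyWhenStateChanged fires only once, so it is re-registered each time. */
    public void watch() {
        ConnectivityState current = channel.getState(true); // true -> connect if currently IDLE
        channel.notifyWhenStateChanged(current, () -> {
            onStateChange.accept(channel.getState(false));
            watch();
        });
    }

    public static void main(String[] args) throws InterruptedException {
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 8015) // hypothetical flagd sync endpoint
                .usePlaintext()
                .build();
        new ChannelStateWatcher(channel, state ->
                System.out.println("channel state changed to " + state)).watch();
        Thread.sleep(60_000); // keep the demo alive long enough to observe transitions
    }
}
```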

@warber warber force-pushed the feat/grpc-reconnect-in-process branch 4 times, most recently from 37a13ec to 3f774f4, on January 10, 2025 12:46
warber commented Jan 10, 2025

So this implementation "almost" works: it conforms to the specification by emitting STALE and ERROR events on connection loss. However, there's one issue: when flagd disconnects, a STALE event is sent immediately (triggered by the common gRPC connector), which is correct, but an ERROR event is also emitted (triggered by this line). This can be fixed by simply removing that line. Given that the responsibility for triggering connectivity events now lies with the common GrpcConnector component, does it still make sense to maintain state inside the FlagStore? I'm actually wondering whether it can be simplified to just hand back flag data whenever it receives some from the stream, and that's more or less it. wdyt @toddbaert 🤔
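As a rough illustration of the behaviour being discussed (not the PR's code): one way to map channel states to provider events is to emit STALE immediately on connection loss and escalate to ERROR only after a grace period. The class name, event enum, and gracePeriodSeconds parameter below are assumptions:

```java
import io.grpc.ConnectivityState;

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Illustrative only: STALE is emitted as soon as the channel reports a failure, and ERROR
// only if it has not recovered within a grace period. Not the provider's actual implementation.
public class ConnectionEventMapper {

    enum ProviderEvent { READY, STALE, ERROR }

    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final long gracePeriodSeconds;
    private final Consumer<ProviderEvent> emitter;
    private ScheduledFuture<?> pendingError;

    ConnectionEventMapper(long gracePeriodSeconds, Consumer<ProviderEvent> emitter) {
        this.gracePeriodSeconds = gracePeriodSeconds;
        this.emitter = emitter;
    }

    synchronized void onStateChange(ConnectivityState state) {
        switch (state) {
            case READY:
                cancelPendingError();
                emitter.accept(ProviderEvent.READY);
                break;
            case TRANSIENT_FAILURE:
            case SHUTDOWN:
                cancelPendingError();
                emitter.accept(ProviderEvent.STALE);
                // escalate to ERROR only if the channel does not recover in time
                pendingError = scheduler.schedule(
                        () -> emitter.accept(ProviderEvent.ERROR),
                        gracePeriodSeconds, TimeUnit.SECONDS);
                break;
            default:
                // IDLE / CONNECTING: no provider event
                break;
        }
    }

    private synchronized void cancelPendingError() {
        if (pendingError != null) {
            pendingError.cancel(false);
            pendingError = null;
        }
    }
}
```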

@warber warber changed the title from "feat: implement grpc reconnect for inprocess mode (WIP)" to "feat!: implement grpc reconnect for inprocess mode (WIP)" on Jan 13, 2025
@warber warber force-pushed the feat/grpc-reconnect-in-process branch from 931f6b4 to 6b9b10d on January 13, 2025 11:43
@warber warber marked this pull request as ready for review January 13, 2025 12:15
@warber warber requested a review from a team as a code owner January 13, 2025 12:15
@warber warber changed the title from "feat!: implement grpc reconnect for inprocess mode (WIP)" to "feat!: implement grpc reconnect for inprocess mode" on Jan 13, 2025
toddbaert commented Jan 13, 2025

> Given that the responsibility for triggering connectivity events now lies with the common GrpcConnector component, does it still make sense to maintain state inside the FlagStore? I'm actually wondering whether it can be simplified to just hand back flag data whenever it receives some from the stream, and that's more or less it.

As long as the storage state is up to date when the GrpcConnector emits its events, this makes sense to me. We just want to make sure that, overall, the provider is ready to evaluate flags the instant a READY event is fired.
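A minimal sketch of that ordering constraint, with placeholder names (the provider's real classes differ): the store is updated from the sync stream before READY is emitted, so an evaluation issued right after the event already sees current data.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;

// Illustrative only: the flag configuration is stored before READY is emitted.
// All names here are placeholders, not the provider's actual classes.
public class ReadyOrderingSketch {

    enum ProviderEvent { READY }

    private final AtomicReference<String> flagConfiguration = new AtomicReference<>("{}");
    private final Consumer<ProviderEvent> eventEmitter;

    ReadyOrderingSketch(Consumer<ProviderEvent> eventEmitter) {
        this.eventEmitter = eventEmitter;
    }

    void onSyncPayload(String payload) {
        flagConfiguration.set(payload);            // 1. make data available for evaluation
        eventEmitter.accept(ProviderEvent.READY);  // 2. only then signal readiness
    }

    String currentConfiguration() {
        return flagConfiguration.get();
    }
}
```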

@toddbaert toddbaert left a comment

This all makes sense to me; what a great reduction in LoC overall 🙏

I would merge this as is, but I think your point here makes sense; it can be done without issue and will result in better separation of concerns. If you'd rather do it in a subsequent PR, we can just merge this as is.

warber commented Jan 14, 2025

> If you'd rather do it in a subsequent PR, we can just merge this as is.

@toddbaert yeah, I intentionally left it out of this PR to keep it focused.
I think this, as well as the potential race between initialization and the initialized flag, can be addressed separately.

@toddbaert toddbaert merged commit d2410c7 into open-feature:main Jan 14, 2025
4 checks passed