Feature: QMF HTTP Bridge
The basic idea is that there is a dedicated ‘gateway’: an HTTP service on one side that talks to one or more QMF consoles on the other. This is a refactor of something we already had working, so we are not starting from scratch. The bulk of the work will be in making it generic and usable by any console/HTTP client (there is a rough sketch after the list below). This allows any service to:
- Send events via QMF without having to worry about converting those events into HTTP requests aimed at the API of a web app; the HTTP request/PUT/whatever is done by the gateway.
- Receive requests from a web app via a standard HTTP request, which is already easy to make from the web app. The request is processed by a thin wrapper that calls the appropriate method on the agent via the embedded console.
- Not have to worry about supporting two APIs, one for QMF, one for HTTP.
- Not have to receive HTTP requests and process them; everything can be done in the native QMF API.
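A minimal sketch of what the HTTP-facing half of the bridge could look like, using only the Python standard library. Everything here is illustrative: the `ConsoleWrapper` stub stands in for the embedded QMF console, and the JSON-body-plus-path convention is an assumption, not the existing imagefactory-console behaviour.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class ConsoleWrapper:
    """Placeholder for the embedded QMF console; the real bridge would
    locate the target agent and invoke the schema method here."""

    def call(self, method, args):
        return {"status": "queued", "method": method, "args": args}


class BridgeHandler(BaseHTTPRequestHandler):
    """Thin HTTP wrapper: POST /<method> with a JSON body becomes a
    console call, so the web app never has to speak QMF."""

    console = ConsoleWrapper()

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        args = json.loads(self.rfile.read(length) or b"{}")
        method = self.path.lstrip("/")  # e.g. "launchdeployable"
        result = self.console.call(method, args)
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), BridgeHandler).serve_forever()
```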
This effectively allows two-way communication between any web app and any QMF-exposed service, while keeping the gateway configurable to be clustered/proxied/whatever is needed by the sysadmin. There are two variations of how this could be deployed (we may not even have to decide this right away; a config sketch follows the two options):
1. Set up this gateway with conductor (i.e. on the same box or network). This would have a level of HTTP auth (cert, krb, whatever), and a level of auth/config needed for QMF (similar auth options). The possible added complication here would be configuring the qpid domain so the console is able to find an agent and register for events (say, if the agent is in some sandboxed network location). Again, this may be a minor config issue, as I know you can do all kinds of fancy things with QMF for these kinds of scenarios.
2. Set up the gateway inside the pacemaker cloud (a different box/network from conductor). All the auth bits should be the same here, but with different config: conductor would need to be reachable via HTTP in some way, so I can envision environments where this might not work.
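For illustration only, here is a hypothetical set of configuration knobs showing that the two variations differ mainly in where the broker lives and whether the caller's callback URLs are reachable; none of these option names exist yet.

```python
# Hypothetical bridge configuration; every key below is made up to show
# the shape of the decision, not a real option name.
BRIDGE_CONFIG = {
    # Variation 1: bridge co-located with conductor.
    "qpid_broker": "amqps://localhost:5671",   # broker on the same box/network
    "qpid_sasl_mechanism": "GSSAPI",           # cert, krb, whatever the site uses
    "http_bind": "127.0.0.1:8080",
    "http_auth": "client-cert",
}

# Variation 2 would instead point "qpid_broker" at a broker inside the
# pacemaker cloud; the auth bits stay the same, but callback URLs such as
# https://conductor/... must then be reachable from that network, which is
# the case noted above that might not always hold.
```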
-
The ‘dial back’ URL of the caller could be sent as part of the request to the Bridge. So, say conductor calls ‘/launchdeployable’, it can pass in ‘https://conductor/instances/12345/events’ as the place where it wants to receive events for this specific thing. A base URL for conductor could be in a config file, but it may be preferable to specify it in this way. This has the added benefit of allowing different apps to register their own callbacks.
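A sketch of how the dial-back could work on both sides, assuming a JSON request body and a `callback_url` parameter name (both made up for this example):

```python
import json
import urllib.request


def launch_deployable(bridge_url, deployable_id, callback_url):
    """What a caller like conductor might send to the bridge: the request
    itself plus the URL where it wants events for this deployable."""
    payload = json.dumps({
        "deployable_id": deployable_id,
        "callback_url": callback_url,  # e.g. https://conductor/instances/12345/events
    }).encode()
    req = urllib.request.Request(
        bridge_url + "/launchdeployable",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req).read()


def deliver_event(callback_url, event):
    """What the bridge might do when a matching QMF event arrives."""
    req = urllib.request.Request(
        callback_url,
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Passing the callback with each request keeps the bridge from needing to know anything about its callers up front, which is what makes the multi-app case work.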
-
The caller should know what agent they need to communicate with (factory, pacemaker, audrey, etc.). This could also be passed as part of the request, allowing the bridge to dynamically spin up new consoles as needed. So a client might pass, as part of each request, ‘vendor:com.redhat;product:pacemaker-cloud’ or similar.
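A sketch of how the bridge might parse that spec and keep one console per distinct agent; the ‘vendor:...;product:...’ format is taken from the example above, and the cache is purely illustrative.

```python
_consoles = {}


def parse_agent_spec(spec):
    """'vendor:com.redhat;product:pacemaker-cloud' ->
    {'vendor': 'com.redhat', 'product': 'pacemaker-cloud'}"""
    return dict(part.split(":", 1) for part in spec.split(";") if part)


def console_for(spec):
    """Return an existing console for this agent spec, or spin up a new
    one on demand (object() is a stand-in for a real QMF console)."""
    key = tuple(sorted(parse_agent_spec(spec).items()))
    if key not in _consoles:
        _consoles[key] = object()
    return _consoles[key]
```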
-
Our current console Handler setup is designed to use the same callback host for all handled events. This would need to change to support the above case, where each client passes in the URL where it wishes to be notified.
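A sketch of the per-client routing the handler would need, assuming each incoming event can be correlated back to the deployable it concerns (the field names are hypothetical):

```python
import json
import urllib.request

# callback URL registered per deployable, instead of one global callback host
_callbacks = {}


def register_callback(deployable_id, callback_url):
    _callbacks[deployable_id] = callback_url


def handle_event(event):
    """Invoked by the embedded console for each QMF event it receives."""
    url = _callbacks.get(event.get("deployable_id"))
    if url is None:
        return  # nobody asked to be notified about this deployable
    req = urllib.request.Request(
        url,
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```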
Owner: Jason Guiditta
- Targeted release: Aeolus 0.4.0
- Last update: 2011-08-05
- Percentage of completion: 0%
TBD
This will be broken into two phases:
* Make the imagefactory-console a generic console that will work with any agent that exposes a schema (#1980).
* Bring back aeolus-connector and perform similar surgery to make it usable with the generic console and dynamically expose console methods via HTTP (#1975).
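As a rough illustration of the second phase, dynamically exposing console methods over HTTP could amount to generating one route per method name found in the agent's schema. The helper below is hypothetical and assumes a console object with a generic `call` method, as in the earlier sketches.

```python
def build_routes(agent_name, schema_method_names, console):
    """Map /<agent>/<method> paths to handlers that forward their JSON
    body to the corresponding console method."""
    routes = {}
    for name in schema_method_names:
        path = "/%s/%s" % (agent_name, name)
        routes[path] = lambda args, _name=name: console.call(_name, args)
    return routes


# e.g. build_routes("imagefactory", ["build_image", "push_image"], console)
# would yield handlers for /imagefactory/build_image and /imagefactory/push_image.
```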