Externalize BOM ingestion pipeline #633
Decoupled the centralized tracking of processing status(es) into #664. Deprioritizing this issue.
Project Koala / Transparency Exchange API will not arrive anytime soon, so we need to evaluate alternatives. I don't think we should introduce a generic blob storage for this yet. Instead, we might want to consider storing uploaded BOMs in a new table, as BOMs can be arbitrarily large. While Postgres compresses large values, we still need to send all that data over the wire twice (once for storage, once for retrieval). The default compression is also not particularly good. We already bring in …
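A minimal sketch of that idea, assuming a hypothetical `bom_upload` table with a `BYTEA` content column. Compressing application-side (here with the JDK's built-in GZIP; a stronger codec could be swapped in) shrinks what crosses the wire in both directions:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.UUID;
import java.util.zip.GZIPOutputStream;

public final class BomUploadStore {

    // Hypothetical schema:
    //   CREATE TABLE bom_upload (token UUID PRIMARY KEY, content BYTEA NOT NULL);
    private static final String INSERT_SQL = "INSERT INTO bom_upload (token, content) VALUES (?, ?)";

    // Compress the BOM application-side, rather than relying on
    // Postgres' default compression of large values.
    static byte[] compress(final byte[] bomBytes) throws IOException {
        final var baos = new ByteArrayOutputStream();
        try (var gzipOut = new GZIPOutputStream(baos)) {
            gzipOut.write(bomBytes);
        }
        return baos.toByteArray();
    }

    static void store(final Connection connection, final UUID token, final byte[] bomBytes) throws Exception {
        try (PreparedStatement ps = connection.prepareStatement(INSERT_SQL)) {
            ps.setObject(1, token);
            ps.setBytes(2, compress(bomBytes));
            ps.executeUpdate();
        }
    }
}
```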
Relates to #633 Signed-off-by: nscuro <[email protected]>
At the moment, processing of uploaded BOMs happens entirely in-memory. `BomUploadProcessingTask`s are enqueued to the internal task queue (see https://github.com/DependencyTrack/hyades/blob/main/WTF.md#why) and processed by the `EventService` thread pool. The current design has some downsides:
The proposed enhancement involves storing uploaded BOMs in a Koala-compatible system (e.g. the CycloneDX BOM Repository Server) and publishing "BOM uploaded" events to Kafka. Consumers (the API server or specialized workers) consume from the Kafka topic and perform the actual ingestion into the database.
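As a rough sketch of the publishing side (the topic name, event shape, and serialization below are assumptions, not settled design), the event would carry only a reference to the stored BOM, not the document itself:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public final class BomUploadedEventPublisher {

    // Hypothetical topic name; the real name would follow the project's naming scheme.
    private static final String TOPIC = "dtrack.event.bom-uploaded";

    public static void main(String[] args) throws Exception {
        final var props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (var producer = new KafkaProducer<String, String>(props)) {
            // Placeholder values; consumers would resolve the key against the
            // BOM repository and fetch the document before ingesting it.
            final String projectUuid = "<project-uuid>";
            final String payload = "{\"project\":\"" + projectUuid + "\",\"bomKey\":\"<storage-key>\"}";
            producer.send(new ProducerRecord<>(TOPIC, projectUuid, payload)).get();
        }
    }
}
```

Keying the record by project UUID would also preserve per-project ordering across partitions.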
We need to look into proper AuthN / AuthZ for the Koala service. The BOM repo server does not have those built-in.
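One possible stopgap (an assumption on my part, not a decided approach) is to front the repo server with a reverse proxy that validates a bearer token, with clients attaching it on every request:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public final class BomRepoClient {

    public static void main(String[] args) throws Exception {
        final HttpClient client = HttpClient.newHttpClient();

        // Hypothetical endpoint and token variable; AuthN/AuthZ is enforced by the
        // proxy in front of the repo server, since the server itself has none built in.
        final HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://bom-repo.internal/v1/bom"))
                .header("Authorization", "Bearer " + System.getenv("BOM_REPO_TOKEN"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{ /* CycloneDX BOM */ }"))
                .build();

        final HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Upload status: " + response.statusCode());
    }
}
```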
Focusing on the client side a little more, existing workflows should continue to work: