diff panics while diffing two files #74
@SimoneLazzaris I will try to repro as soon as I can, which may not be until next week. For sanity, can you run using trace (--trace or -t) or even debug (--debug) and capture the full stdout/stderr here?
@SimoneLazzaris Just wanted to let you know I was able to try this out on my local machine and saw indications of an infinite loop (I did not wait for a seg. violation and killed it), given that the files are relatively small in size. Having validated both files and run other commands against their contents, my fear is that the error lies in one of the imported libraries, which may take some time to pinpoint (and even more time to perhaps fix upstream, if possible).
@SimoneLazzaris Comparing these 2 "Trivy" SBOM files using an online, general-purpose text diff tool, it finds 2694 removals and 8296 additions. I am sure that the underlying diff comparator is losing its mind and running out of memory looking for matches between JSON objects in the two files (which it does using deep hashes). The caveat on the "diff" command is that the files must be relatively similar... despite being binary image scans from Trivy of what appear to be minor point revisions of the same image (semantic image versions), these files are completely dissimilar in one primary respect:
The generalized "diff" libs used are not specific to any schema (SPDX, CycloneDX, or any other JSON) and have no means to "normalize" the contents prior to comparison. In fact, normalization of JSON is only possible with custom knowledge of what makes each array entry (esp. for anonymous types) unique (i.e., a unique key, or a set of fields that creates a unique key) to hash by reliably (relative to the JSON object). In addition, within each component there are several Trivy custom properties that are identical across many components, which makes comparison (similarity weighting) near impossible. For example:
where the similarity "score" would be high between these 2 components (with no knowledge of the one field that actually serves as a key in this case). This complexity is the reason why "merge" functions are not simple in any tool (even GitHub commits of similar files) and require human (not yet AI) analysis to resolve "merge conflicts". If a great deal of custom hashing code were written (which means a unique hash function per object in the JSON schema), then normalization becomes more realistic. However, any depth of nested objects increases the time necessary for deep comparisons (as well as adding lots of hashing memory overhead).
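For illustration only, here is a minimal Go sketch (not code from this project or from the diff library) of what deriving such a per-entry key for a CycloneDX component could look like; the field precedence (bom-ref, then purl, then name@version) is my assumption, not anything canonical:

```go
// Hypothetical illustration: derive a stable key for a CycloneDX component so
// that array entries can be hashed/normalized reliably before diffing.
// The precedence bom-ref > purl > name@version is assumed, not canonical.
type Component struct {
	BOMRef  string `json:"bom-ref,omitempty"`
	Purl    string `json:"purl,omitempty"`
	Name    string `json:"name"`
	Version string `json:"version,omitempty"`
}

func componentKey(c Component) string {
	switch {
	case c.BOMRef != "":
		return c.BOMRef
	case c.Purl != "":
		return c.Purl
	default:
		return c.Name + "@" + c.Version
	}
}
```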
All hope is not lost... as I said, I planned on adding a "sort" function with knowledge of the CycloneDX data schema structure (at least for top-level objects like ...).
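As a rough sketch only (reusing the hypothetical componentKey helper above; this is not the planned implementation), such a schema-aware sort could amount to ordering the top-level components array by a stable key before the text diff runs:

```go
import "sort"

// Hypothetical: order the components array by a stable key so both documents
// present the same entries in the same positions before being diffed as text.
func normalizeComponents(components []Component) {
	sort.Slice(components, func(i, j int) bool {
		return componentKey(components[i]) < componentKey(components[j])
	})
}
```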
@mrutkows thanks for your effort. I know that mine was not a textbook example; I just find it bad for the software to panic and wanted to report that.
The problem lies in the
as well as another function
Short of a rewrite (as the lengths being passed in the patches are clearly not being calculated properly) and perhaps the introduction of a "safe" slice reallocation routine, this will not be fixed anytime soon.
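For context, a "safe" reallocation helper along those lines might look roughly like the following; this is purely illustrative (the name ensureLen is made up), and it would only paper over the incorrect length calculations rather than fix them:

```go
// Hypothetical "safe" reallocation: grow buf so that at least n elements are
// addressable before indexing, instead of panicking on an out-of-range index.
func ensureLen(buf []rune, n int) []rune {
	if n <= len(buf) {
		return buf
	}
	grown := make([]rune, n)
	copy(grown, buf)
	return grown
}
```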
I can "catch" the panics; but, that really does not solve the underlying library I relied upon from working properly to produce a diff :( I truly believe that normalizing the data (both files) will result in coherent/useful diff results (even using the faulty library), but it is a very complicated task in-and-of itself. |
@SimoneLazzaris I managed to add "guard rails" to avoid the panics within the upstream file that was the source of (more than one) panic, where the code was accessing string slices past their current size/memory allocations... The result is a large "patch" file that really has many large blocks of meaningless deltas. The source code file I patched locally to avoid the panics is from the upstream library (i.e., "patch.go"); specifically, I added "if" tests before indexing into string slices in 2 places. But again, even if I pushed this upstream, the library has not been touched in something like 6 years... and it absolutely masks other "bad" logic that is leading to the bad character counting used to index into the slices that cause the panics.
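Roughly, such an "if" guard amounts to a bounds check before taking the substring; the snippet below is an illustrative paraphrase (the function and variable names are placeholders), not the exact upstream patch:

```go
// Illustrative guard (placeholder names): only slice when the computed bounds
// actually fit within the string, otherwise return "" instead of panicking
// with "slice bounds out of range".
func guardedSlice(text string, start, length int) string {
	if start < 0 || start+length > len(text) {
		// bounds were miscalculated upstream; avoid the panic and carry on
		return ""
	}
	return text[start : start+length]
}
```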
@SimoneLazzaris check out the cleaner output now that such a panic is caught:
In addition, the exit code is now
Hi,
I'm trying to compare two SBoMs generated with two different versions of trivy.
sbom-utils thinks hard for a bit and then panics with:
panic: runtime error: slice bounds out of range [2004:1743]
Here are the files:
nats-box-49.sbom.json
nats-box-50.sbom.json
And this is the command line I've used: