Additional information on Evaluate #533
From a user looking at lula evaluate output in CI, it would be super helpful to either expand the default information or provide a debug/verbose mode. For reference, my current process to understand a failing evaluation is to manually trace each failing control ID back through the assessment results to the observations (and their UUIDs) behind it.
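A minimal sketch of what that manual tracing can look like, assuming the assessment results follow the standard OSCAL layout (findings that reference observations by UUID); the file path and control ID below are hypothetical, not taken from the thread:

```python
# Hypothetical sketch: trace a failed control in an OSCAL assessment-results
# file back to the observations and relevant evidence behind it.
# Field names follow the OSCAL assessment-results model; the path and
# control ID are made up for illustration.
import yaml

ASSESSMENT_RESULTS = "assessment-results.yaml"  # hypothetical path
FAILED_CONTROL_ID = "ac-14"                     # hypothetical control ID

with open(ASSESSMENT_RESULTS) as f:
    doc = yaml.safe_load(f)

for result in doc["assessment-results"]["results"]:
    # Index observations by UUID so findings can be resolved to their evidence.
    observations = {o["uuid"]: o for o in result.get("observations", [])}

    for finding in result.get("findings", []):
        if finding["target"]["target-id"] != FAILED_CONTROL_ID:
            continue
        print(FAILED_CONTROL_ID, "->", finding["target"]["status"]["state"])
        for rel in finding.get("related-observations", []):
            obs = observations[rel["observation-uuid"]]
            for evidence in obs.get("relevant-evidence", []):
                print("  -", evidence.get("description", "").strip())
```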
@mjnagel - thanks for the feedback! Yeah, agreed, this process is not great and is like trying to untangle a spaghetti of UUIDs. The draft PR I'm working on has the output listing all the different observations, and all the new or missing ones, for each failed control. From a user perspective, if you have thoughts on how to make this more informative, cleaner, whatever, lmk! Just trying to get the information in there as a first cut at solving this traceability problem.
Yeah, I think that's a super helpful start. Two things that I would find useful: making it clearer which observations actually changed, and surfacing the underlying validation itself.
This looks great though and is exactly what I was hoping to see!
Yep, this is supposed to just show the ones that changed; the formatting is probably confusing, but the first in the list is the old version (where it was passing, plus the "evidence") and the second is its pair that is now failing. I feel like I could rearrange this to make it more obvious that's what's happening. On the validation part, are you thinking you'd like to see the rego and/or the input data? We have some other issues open to add a "human friendly" readme and to compose that into the OSCAL, which we would still need to implement, but we could possibly add the raw rego in the meantime...
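As a rough sketch of the pairing described above, assuming (this is not stated in the thread) that observations from the old and new results can be matched by their description text and that the satisfied/not-satisfied outcome shows up in the relevant-evidence description:

```python
# Hypothetical sketch: given observations from an old (passing) result and a
# new (failing) result, surface only the newly failing ones. Matching
# observations by description text and reading the outcome out of
# relevant-evidence are assumptions, not Lula's actual implementation.

def evidence(obs):
    return " ".join(e.get("description", "") for e in obs.get("relevant-evidence", []))

def newly_failing(old_observations, new_observations):
    old_by_desc = {o["description"]: o for o in old_observations}
    for obs in new_observations:
        prev = old_by_desc.get(obs["description"])
        if "not-satisfied" not in evidence(obs):
            continue  # still passing, nothing new to report
        if prev is not None and "not-satisfied" in evidence(prev):
            continue  # was already failing before
        yield obs

# Tiny illustrative data, not real Lula output.
old_obs = [{"description": "istio sidecar check",
            "relevant-evidence": [{"description": "Result: satisfied"}]}]
new_obs = [{"description": "istio sidecar check",
            "relevant-evidence": [{"description": "Result: not-satisfied"}]}]

for obs in newly_failing(old_obs, new_obs):
    print(obs["description"], "->", evidence(obs))
```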
Gotcha, maybe my examples are limited, but I'm thinking most of the time I'd only care about the new evidence section, not necessarily seeing the "passing" version?
I think the rego could be useful, but it should probably be behind a verbose mode flag or something like that, since it could be a lot to log out by default. It's mostly in the line of thinking: "well, this failed, I see the validation error message, but was that because the check needs an update?" A good example related to your screenshotted validation: if we switched from our current sidecar implementation to a "native sidecar" implementation, I'd expect that to fail the validation. The message would just say "Istio sidecar proxy not found...", but in reality the rego check is really the "problem"/thing that needs to be reviewed.
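To make the native-sidecar example concrete: the actual check would be rego, but a Python analogue of the same logic (entirely hypothetical, not the project's real validation) shows why the check itself, rather than the workload, becomes the thing to review:

```python
# Hypothetical Python analogue of a "does this pod have an istio-proxy
# sidecar?" check that only scans spec.containers. A workload using
# Kubernetes native sidecars declares istio-proxy under initContainers
# (with restartPolicy: Always), so this check reports a failure even
# though the proxy is running -- the validation, not the cluster, is
# what needs the review.

def istio_sidecar_present(pod):
    names = [c["name"] for c in pod["spec"].get("containers", [])]
    return "istio-proxy" in names  # misses native sidecars in initContainers

native_sidecar_pod = {
    "spec": {
        "containers": [{"name": "app"}],
        "initContainers": [{"name": "istio-proxy", "restartPolicy": "Always"}],
    }
}

print(istio_sidecar_present(native_sidecar_pod))  # False -> evaluation would flag this
```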
Is your feature request related to a problem? Please describe.
When evaluate runs and fails, only the control IDs are output. More information would be helpful for debugging what actually failed, since control satisfaction is often a function of many observations.
Describe the solution you'd like
Additional information when lula evaluate fails

Additional context
Thinking a decent solution would be:
- Output the relevant-evidence.description values (i.e., the result: not-satisfied/satisfied)

This is just my idea for evaluate, but perhaps this rolls into a larger function to interrogate the assessment results more granularly and specific to individual controls.