
feat(evaluate): add observation details #540

Merged · 14 commits · Jul 26, 2024
22 changes: 22 additions & 0 deletions docs/cli-commands/assessments/evaluate.md
@@ -2,6 +2,28 @@

Evaluate verifies the compliance of a component/system against an established threshold, determining whether it is more or less compliant than a previous assessment.

## Usage

To evaluate two results (threshold and latest) in a single OSCAL file:
```bash
lula evaluate -f assessment-results.yaml
```

To evaluate the latest results in two assessment results files:
```bash
lula evaluate -f assessment-results-threshold.yaml -f assessment-results-new.yaml
```

To print a summary of the observation results:
```bash
lula evaluate -f assessment-results.yaml --summary
```

## Options

- `-f, --file`: The path to the file(s) to be evaluated.
- `-s, --summary`: [Optional] Prints a summary of the evaluation.

## Expected Process

### No Existing Data
2 changes: 1 addition & 1 deletion docs/oscal/assessment-results.md
@@ -24,7 +24,7 @@ Based on the structure outlined, the results of the observations impact the find
Lula evaluations default to conservatively reporting a `not-satisfied` observation. The only `satisfied` observations occur when a domain provides resources and the policy evaluates those resources such that the policy passes. If a Lula Validation [cannot be evaluated](#not-satisfied-conditions), it returns a `not-satisfied` result by default.

### Not-satisfied conditions
The following conditions enumerate when the Lula Validation will result in a `not-satified` evaluation. These cases exclude the case where the Lula validation policy has been evaluated and returned a failure.
The following conditions enumerate when the Lula Validation will result in a `not-satisfied` evaluation. These cases exclude the case where the Lula validation policy has been evaluated and returned a failure.
- Malformed Lula validation -> bad validation structure
- Missing resources -> No resources are found as input to the policy
- Missing reference -> If a remote or local reference is invalid
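The conservative default described above amounts to a simple rule: report `satisfied` only when resources were found, the policy actually ran, and it passed; every other case falls through to `not-satisfied`. A minimal standalone sketch of that rule (the `validationState` helper is illustrative, not part of Lula):

```go
package main

import "fmt"

// validationState mirrors the conservative default: unless resources exist
// and the policy was evaluated and passed, report not-satisfied.
func validationState(resourcesFound, policyEvaluated, policyPassed bool) string {
	if resourcesFound && policyEvaluated && policyPassed {
		return "satisfied"
	}
	return "not-satisfied"
}

func main() {
	fmt.Println(validationState(true, true, true))    // satisfied
	fmt.Println(validationState(false, false, false)) // missing resources -> not-satisfied
	fmt.Println(validationState(true, false, false))  // could not be evaluated -> not-satisfied
}
```

Malformed validations, missing resources, and bad references all collapse into the same `not-satisfied` bucket, which is why the enumeration above matters for debugging.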
82 changes: 68 additions & 14 deletions src/cmd/evaluate/evaluate.go
@@ -2,11 +2,13 @@ package evaluate

import (
"fmt"
"strings"

"github.com/defenseunicorns/go-oscal/src/pkg/files"
oscalTypes_1_1_2 "github.com/defenseunicorns/go-oscal/src/types/oscal-1-1-2"
"github.com/defenseunicorns/lula/src/pkg/common"
"github.com/defenseunicorns/lula/src/pkg/common/oscal"
"github.com/defenseunicorns/lula/src/pkg/common/result"
"github.com/defenseunicorns/lula/src/pkg/message"
"github.com/spf13/cobra"
)
Expand All @@ -20,7 +22,8 @@ To evaluate two results (threshold and latest) in a single OSCAL file:
`

type flags struct {
files []string
files []string
summary bool
}

var opts = &flags{}
@@ -39,18 +42,19 @@ var evaluateCmd = &cobra.Command{
message.Fatal(err, err.Error())
}

EvaluateAssessments(assessmentMap)
EvaluateAssessments(assessmentMap, opts.summary)
},
}

func EvaluateCommand() *cobra.Command {

evaluateCmd.Flags().StringArrayVarP(&opts.files, "file", "f", []string{}, "Path to the file to be evaluated")
evaluateCmd.Flags().BoolVarP(&opts.summary, "summary", "s", false, "Print a summary of the evaluation")
// insert flag options here
return evaluateCmd
}

func EvaluateAssessments(assessmentMap map[string]*oscalTypes_1_1_2.AssessmentResults) {
func EvaluateAssessments(assessmentMap map[string]*oscalTypes_1_1_2.AssessmentResults, summary bool) {
// Identify the threshold & latest for comparison
resultMap, err := oscal.IdentifyResults(assessmentMap)
if err != nil {
@@ -69,22 +73,41 @@ func EvaluateAssessments(assessmentMap map[string]*oscalTypes_1_1_2.AssessmentRe
}

if resultMap["threshold"] != nil && resultMap["latest"] != nil {
var findingsWithoutObservations []string
// Compare the assessment results
spinner := message.NewProgressSpinner("Evaluating Assessment Results %s against %s", resultMap["threshold"].UUID, resultMap["latest"].UUID)
defer spinner.Stop()

message.Debugf("threshold UUID: %s / latest UUID: %s", resultMap["threshold"].UUID, resultMap["latest"].UUID)

status, findings, err := oscal.EvaluateResults(resultMap["threshold"], resultMap["latest"])
status, resultComparison, err := oscal.EvaluateResults(resultMap["threshold"], resultMap["latest"])
if err != nil {
message.Fatal(err, err.Error())
}

// Print summary
if summary {
message.Info("Summary of All Observations:")
findingsWithoutObservations = result.Collapse(resultComparison).PrintObservationComparisonTable(false, true, false)
if len(findingsWithoutObservations) > 0 {
message.Warnf("%d Finding(s) Without Observations", len(findingsWithoutObservations))
message.Info(strings.Join(findingsWithoutObservations, ", "))
}
}

// Check 'status' - Result if evaluation is passing or failing
// Fails if anything went from satisfied -> not-satisfied OR if any old findings are removed (doesn't matter whether they were satisfied or not)
if status {
if len(findings["new-passing-findings"]) > 0 {
// Print new-passing-findings
newSatisfied := resultComparison["new-satisfied"]
nowSatisfied := resultComparison["now-satisfied"]
if len(newSatisfied) > 0 || len(nowSatisfied) > 0 {
message.Info("New passing finding Target-Ids:")
for _, finding := range findings["new-passing-findings"] {
message.Infof("%s", finding.Target.TargetId)
for id := range newSatisfied {
message.Infof("%s", id)
}
for id := range nowSatisfied {
message.Infof("%s", id)
}

message.Infof("New threshold identified - threshold will be updated to result %s", resultMap["latest"].UUID)
@@ -97,19 +120,50 @@ func EvaluateAssessments(assessmentMap map[string]*oscalTypes_1_1_2.AssessmentRe
oscal.UpdateProps("threshold", "https://docs.lula.dev/ns", "true", resultMap["threshold"].Props)
}

if len(findings["new-failing-findings"]) > 0 {
// Print new-not-satisfied
newFailing := resultComparison["new-not-satisfied"]
if len(newFailing) > 0 {
message.Info("New failing finding Target-Ids:")
for _, finding := range findings["new-failing-findings"] {
message.Infof("%s", finding.Target.TargetId)
for id := range newFailing {
message.Infof("%s", id)
}
}
message.Info("Evaluation Passed Successfully")

message.Info("Evaluation Passed Successfully")
} else {
message.Warn("Evaluation Failed against the following findings:")
for _, finding := range findings["no-longer-satisfied"] {
message.Warnf("%s", finding.Target.TargetId)
// Print no-longer-satisfied
message.Warn("Evaluation Failed against the following:")

// Alternative printing in a single table
failedFindings := map[string]result.ResultComparisonMap{
"no-longer-satisfied": resultComparison["no-longer-satisfied"],
"removed-satisfied": resultComparison["removed-satisfied"],
"removed-not-satisfied": resultComparison["removed-not-satisfied"],
}
findingsWithoutObservations = result.Collapse(failedFindings).PrintObservationComparisonTable(true, false, true)
// handle controls that failed but didn't have observations
if len(findingsWithoutObservations) > 0 {
message.Warnf("%d Failed Finding(s) Without Observations", len(findingsWithoutObservations))
message.Info(strings.Join(findingsWithoutObservations, ", "))
}

// Print by individual table
// noLongerSatisfied := resultComparison["no-longer-satisfied"]
// for id, rc := range noLongerSatisfied {
// message.Infof("%s", id)
// rc.PrintResultComparisonTable(true)
// }
// removedSatisfied := resultComparison["removed-satisfied"]
// for id, rc := range removedSatisfied {
// message.Infof("%s", id)
// rc.PrintResultComparisonTable(true)
// }
// removedNotSatisfied := resultComparison["removed-not-satisfied"]
// for id, rc := range removedNotSatisfied {
// message.Infof("%s", id)
// rc.PrintResultComparisonTable(true)
// }

message.Fatalf(fmt.Errorf("failed to meet established threshold"), "failed to meet established threshold")

// retain result as threshold
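The pass/fail decision in the rewritten flow boils down to one rule: certain comparison categories (`no-longer-satisfied`, `removed-satisfied`, `removed-not-satisfied`) fail the threshold check whenever they are non-empty, while the others never do. A standalone sketch of that rule (the `category` type and `evaluateStatus` helper are illustrative, not the Lula API):

```go
package main

import "fmt"

// category mirrors the table in EvaluateResults: each named bucket of
// result comparisons records whether findings in it keep the evaluation passing.
type category struct {
	name   string
	passes bool // false => any finding in this bucket fails the threshold check
}

// evaluateStatus returns false if any failing category is non-empty,
// mirroring the `len(results) > 0 && !c.status` check in the diff.
func evaluateStatus(counts map[string]int, categories []category) bool {
	for _, c := range categories {
		if counts[c.name] > 0 && !c.passes {
			return false
		}
	}
	return true
}

func main() {
	categories := []category{
		{"new-satisfied", true},
		{"new-not-satisfied", true},
		{"no-longer-satisfied", false},
		{"now-satisfied", true},
		{"removed-satisfied", false},
		{"removed-not-satisfied", false},
	}

	// A finding regressed from satisfied to not-satisfied: evaluation fails.
	fmt.Println(evaluateStatus(map[string]int{"no-longer-satisfied": 1}, categories))
	// Only improvements and brand-new findings: evaluation still passes.
	fmt.Println(evaluateStatus(map[string]int{"now-satisfied": 2, "new-not-satisfied": 1}, categories))
}
```

Note that `new-not-satisfied` does not fail the evaluation: new failing findings are reported but only regressions and removals break the threshold.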
120 changes: 69 additions & 51 deletions src/pkg/common/oscal/assessment-results.go
@@ -8,6 +8,7 @@ import (
"github.com/defenseunicorns/go-oscal/src/pkg/uuid"
oscalTypes_1_1_2 "github.com/defenseunicorns/go-oscal/src/types/oscal-1-1-2"
"github.com/defenseunicorns/lula/src/config"
"github.com/defenseunicorns/lula/src/pkg/common/result"
"gopkg.in/yaml.v3"
)

@@ -115,14 +116,6 @@ func MergeAssessmentResults(original *oscalTypes_1_1_2.AssessmentResults, latest
return original, nil
}

func GenerateFindingsMap(findings []oscalTypes_1_1_2.Finding) map[string]oscalTypes_1_1_2.Finding {
findingsMap := make(map[string]oscalTypes_1_1_2.Finding)
for _, finding := range findings {
findingsMap[finding.Target.TargetId] = finding
}
return findingsMap
}

// IdentifyResults produces a map containing the threshold result and a result used for comparison
func IdentifyResults(assessmentMap map[string]*oscalTypes_1_1_2.AssessmentResults) (map[string]*oscalTypes_1_1_2.Result, error) {
resultMap := make(map[string]*oscalTypes_1_1_2.Result)
@@ -177,58 +170,83 @@
}
}

func EvaluateResults(thresholdResult *oscalTypes_1_1_2.Result, newResult *oscalTypes_1_1_2.Result) (bool, map[string][]oscalTypes_1_1_2.Finding, error) {
func EvaluateResults(thresholdResult *oscalTypes_1_1_2.Result, newResult *oscalTypes_1_1_2.Result) (bool, map[string]result.ResultComparisonMap, error) {
var status bool = true

if thresholdResult.Findings == nil || newResult.Findings == nil {
return false, nil, fmt.Errorf("results must contain findings to evaluate")
}

// Store unique findings for review here
findings := make(map[string][]oscalTypes_1_1_2.Finding, 0)
result := true

findingMapThreshold := GenerateFindingsMap(*thresholdResult.Findings)
findingMapNew := GenerateFindingsMap(*newResult.Findings)

// For a given oldResult - we need to prove that the newResult implements all of the oldResult findings/controls
// We are explicitly iterating through the findings in order to collect a delta to display

for targetId, finding := range findingMapThreshold {
if _, ok := findingMapNew[targetId]; !ok {
// If the new result does not contain the finding of the old result
// set result to fail, add finding to the findings map and continue
result = false
findings[targetId] = append(findings["no-longer-satisfied"], finding)
} else {
// If the finding is present in each map - we need to check if the state has changed from "not-satisfied" to "satisfied"
if finding.Target.Status.State == "satisfied" {
// Was previously satisfied - compare state
if findingMapNew[targetId].Target.Status.State == "not-satisfied" {
// If the new finding is now not-satisfied - set result to false and add to findings
result = false
findings["no-longer-satisfied"] = append(findings["no-longer-satisfied"], finding)
}
} else {
// was previously not-satisfied but now is satisfied
if findingMapNew[targetId].Target.Status.State == "satisfied" {
// If the new finding is now satisfied - add to new-passing-findings
findings["new-passing-findings"] = append(findings["new-passing-findings"], finding)
}
}
delete(findingMapNew, targetId)
}
// Compare threshold result to new result and vice versa
comparedToThreshold := result.NewResultComparisonMap(*newResult, *thresholdResult)

// Group by categories
categories := []struct {
name string
stateChange result.StateChange
satisfied bool
status bool
}{
{
name: "new-satisfied",
stateChange: result.NEW,
satisfied: true,
status: true,
},
{
name: "new-not-satisfied",
stateChange: result.NEW,
satisfied: false,
status: true,
},
{
name: "no-longer-satisfied",
stateChange: result.SATISFIED_TO_NOT_SATISFIED,
satisfied: false,
status: false,
},
{
name: "now-satisfied",
stateChange: result.NOT_SATISFIED_TO_SATISFIED,
satisfied: true,
status: true,
},
{
name: "unchanged-not-satisfied",
stateChange: result.UNCHANGED,
satisfied: false,
status: true,
},
{
name: "unchanged-satisfied",
stateChange: result.UNCHANGED,
satisfied: true,
status: true,
},
{
name: "removed-not-satisfied",
stateChange: result.REMOVED,
satisfied: false,
status: false,
},
{
name: "removed-satisfied",
stateChange: result.REMOVED,
satisfied: true,
status: false,
},
}

// All remaining findings in the new map are new findings
for _, finding := range findingMapNew {
if finding.Target.Status.State == "satisfied" {
findings["new-passing-findings"] = append(findings["new-passing-findings"], finding)
} else {
findings["new-failing-findings"] = append(findings["new-failing-findings"], finding)
categorizedResultComparisons := make(map[string]result.ResultComparisonMap)
for _, c := range categories {
results := result.GetResultComparisonMap(comparedToThreshold, c.stateChange, c.satisfied)
categorizedResultComparisons[c.name] = results
if len(results) > 0 && !c.status {
status = false
}

}

return result, findings, nil
return status, categorizedResultComparisons, nil
}

// findAndSortResults takes a map of results and returns a list of thresholds and a sorted list of results in order of time
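The `StateChange` buckets consumed above come from comparing each finding's state in the threshold result against the latest result: present in only one side means new or removed, present in both means unchanged or a satisfied/not-satisfied flip. A standalone sketch of that classification, reusing the bucket names from the diff (`classify` and the plain string states are illustrative; the real logic lives in `result.NewResultComparisonMap`):

```go
package main

import "fmt"

// StateChange labels how a finding's status moved between the threshold
// and latest results; names follow the constants referenced in the diff.
type StateChange string

const (
	NEW                        StateChange = "new"
	REMOVED                    StateChange = "removed"
	UNCHANGED                  StateChange = "unchanged"
	SATISFIED_TO_NOT_SATISFIED StateChange = "satisfied->not-satisfied"
	NOT_SATISFIED_TO_SATISFIED StateChange = "not-satisfied->satisfied"
)

// classify buckets each target id by how its state ("satisfied" or
// "not-satisfied") changed between the threshold and latest results.
func classify(threshold, latest map[string]string) map[string]StateChange {
	out := make(map[string]StateChange)
	for id, oldState := range threshold {
		newState, ok := latest[id]
		switch {
		case !ok:
			out[id] = REMOVED
		case oldState == newState:
			out[id] = UNCHANGED
		case oldState == "satisfied":
			out[id] = SATISFIED_TO_NOT_SATISFIED
		default:
			out[id] = NOT_SATISFIED_TO_SATISFIED
		}
	}
	// Target ids only present in the latest result are new findings.
	for id := range latest {
		if _, ok := threshold[id]; !ok {
			out[id] = NEW
		}
	}
	return out
}

func main() {
	threshold := map[string]string{"ID-1": "satisfied", "ID-2": "not-satisfied"}
	latest := map[string]string{"ID-1": "not-satisfied", "ID-3": "satisfied"}
	for id, sc := range classify(threshold, latest) {
		fmt.Println(id, sc)
	}
}
```

With this decomposition, `EvaluateResults` reduces to filtering the classified map by state change and satisfaction, which is what the table-driven category list in the diff expresses.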
4 changes: 2 additions & 2 deletions src/pkg/common/oscal/assessment-results_test.go
@@ -279,7 +279,7 @@ func TestIdentifyResults(t *testing.T) {
t.Fatalf("Expected results to be evaluated as failing")
}

if len(findings["new-passing-findings"]) == 0 {
if len(findings["now-satisfied"]) == 0 {
t.Fatalf("Expected new passing findings to be found")
}
})
@@ -443,7 +443,7 @@ func TestEvaluateResultsNewFindings(t *testing.T) {
t.Fatal("error - evaluation failed")
}

if len(findings["new-passing-findings"]) != 1 {
if len(findings["new-satisfied"]) != 1 {
t.Fatal("error - expected 1 new finding, got ", len(findings["new-satisfied"]))
}
