Exposing OSCAL data with OpenTelemetry #2039
Does anyone know if OSCAL has any plan to integrate with OpenTelemetry? Thanks!
@gyliu513 - At this time, NIST does not have a plan to support OTel, but if the community is interested in researching this topic, we can support it. OpenTelemetry provides a common framework for collecting telemetry data and exporting it to an Observability back end of your choice. It uses a set of standardized, vendor-agnostic APIs, SDKs, and tools for ingesting, transforming, and transporting data. Since the telemetry data consists of the logs, metrics, and traces collected from a distributed system, I am assuming you are proposing OTel for assessment results collection and insertion into the OSCAL Assessment Plans and/or POA&Ms?
@iMichaela Yes, this is what I was hoping we could integrate. Any suggestions for this? Thanks
@iMichaela do you know if there are any tools that can be used to collect OSCAL data automatically? Thanks
This came up in a FedRAMP implementers meeting and sounds interesting (at least personally to me; I am one of those OTEL people who runs Prometheus and Grafana in personal lab environments from time to time). Do you have an idea of what kind of security information you would want to see and how it relates to security controls for a notional system?
As someone who reviews a lot of tools and integrations, I have not seen any yet, but that is why I asked the previous question.
My assumption - per the communication above - was that the intention is to collect the logs, metrics, and traces/evidence required. It should match the information planned to be collected for control assessments, to satisfy the regulatory framework requirements. @aj-stein-gsa - if you recall the ATARC pilot, I envision the need for providing inputs to guide the outputs. Personally, I need to do more reading, but I am also very interested in researching it. It would be a great OSCAL research topic. I am going to raise it with the CNCF OSCAL WGs as well.
Thanks @aj-stein-gsa and @iMichaela for the discussion here, really helpful. Let me share a use case. Suppose I have a VM, and I am using the otel collector to collect some metrics for this VM, like VM name, CPU, memory, etc. I also want to get some OSCAL assessment results for this VM, and then correlate those data to show the customer an overview of this VM entity. If we can provide a solution for using otel to collect data for OSCAL as well, then we can probably define a unified data collector layer and data correlation layer to handle this request. An example of a VM's otel metrics data and OSCAL data is below; hope this helps.

An example of an OSCAL System Security Plan:

```json
{
  "system-security-plan": {
    "metadata": {
      "title": "Virtual Machine System Security Plan",
      "last-modified": "2024-09-04T00:00:00Z",
      "version": "1.0",
      "oscal-version": "1.0.0"
    },
    "system-characteristics": {
      "system-name": "Example Virtual Machine", // VM name here
      "system-description": "This is a virtual machine running critical applications.",
      "system-information": {
        "system-type": "Virtual Machine",
        "system-host": "VMware ESXi",
        "operating-system": "Ubuntu 22.04 LTS"
      }
    },
    "control-implementation": {
      "implemented-controls": [
        {
          "control-id": "AC-2",
          "description": "Implement access control for the VM.",
          "responsible-roles": ["VM Administrator"]
        },
        {
          "control-id": "SI-7",
          "description": "Ensure the integrity of VM's software and updates.",
          "responsible-roles": ["Security Officer"]
        }
      ]
    }
  }
}
```

And then I get assessment results for my VM as below, also in OSCAL:

```json
{
  "assessment-results": {
    "metadata": {
      "title": "Virtual Machine Assessment Results",
      "last-modified": "2024-09-04T00:00:00Z",
      "version": "1.0",
      "oscal-version": "1.0.0"
    },
    "results": [
      {
        "control-id": "AC-2",
        "status": "satisfied",
        "findings": "User access control measures are in place and effective."
      },
      {
        "control-id": "SI-7",
        "status": "partially satisfied",
        "findings": "Software integrity checks are in place, but one outdated package was found."
      }
    ]
  }
}
```

And get an OSCAL authorization decision (AD) as follows:

```json
{
  "authorization-decision": {
    "metadata": {
      "title": "Virtual Machine Authorization Decision",
      "last-modified": "2024-09-04T00:00:00Z",
      "version": "1.0",
      "oscal-version": "1.0.0"
    },
    "authorization-result": {
      "decision": "authorized with conditions",
      "description": "The VM is authorized for use, but the outdated package must be updated within 30 days.",
      "justification": "No critical vulnerabilities were identified, but some remediation is required."
    }
  }
}
```

Here is the data of the VM that I get from otel:

```json
{
  "resourceMetrics": [
    {
      "resource": {
        "attributes": [
          {"key": "vm.name", "value": "Example Virtual Machine"}, // VM Name here
          {"key": "host.name", "value": "vm-host-01"},
          {"key": "os.type", "value": "linux"}
        ]
      },
      "scopeMetrics": [
        {
          "metrics": [
            {
              "name": "vm.cpu.usage",
              "description": "CPU usage of the VM",
              "unit": "percentage",
              "dataPoints": [
                {"timestamp": 1693804800, "value": 55.3}
              ]
            }
          ]
        }
      ]
    }
  ]
}
```

After correlation, the VM data will be as follows:

```json
{
  "vm.name": "Example Virtual Machine",
  "oscal-controls": {
    "AC-2": {
      "status": "satisfied",
      "description": "User access control measures are correctly implemented."
    },
    "SI-7": {
      "status": "partially satisfied",
      "description": "Software integrity checks are in place, but one outdated package was found."
    }
  },
  "otel-metrics": {
    "cpu.usage": "55.3%",
    "memory.usage": "2GB",
    "network.throughput": "150Mbps"
  },
  "otel-traces": [
    {
      "trace-id": "1234567890abcdef",
      "span-id": "abcdef1234567890",
      "operation": "vm-login",
      "status": "ok",
      "start-time": "2024-09-04T10:00:00Z",
      "end-time": "2024-09-04T10:00:05Z"
    }
  ],
  "otel-logs": [
    {
      "timestamp": "2024-09-04T10:00:00Z",
      "log-level": "info",
      "message": "User admin logged into VM."
    }
  ]
}
```
@iMichaela do you have any meeting notes or GitHub links for the CNCF OSCAL WGs? Thanks
Neat, so you essentially want custom metrics to consume with an OTEL collector, with perhaps a custom receiver?
@aj-stein-gsa Yes, but maybe not only a receiver but also a processor, as there may be some semantic conventions required in the processor. We probably need a
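As a rough illustration of what such a semantic-convention step in a processor might do, the sketch below flattens an OSCAL-style finding into convention-style attribute keys. Note the `oscal.*` attribute names are invented for illustration; no such OpenTelemetry semantic convention exists today.

```python
# Hypothetical sketch of a semantic-convention mapping step: flatten an
# OSCAL-style finding into OTel-like attribute keys before export.
# The oscal.* key names are invented for illustration only.

OSCAL_TO_ATTR = {
    "control-id": "oscal.control.id",
    "status": "oscal.control.status",
    "findings": "oscal.control.findings",
}

def to_attributes(finding: dict) -> dict:
    """Rename OSCAL finding fields to convention-style attribute keys."""
    return {OSCAL_TO_ATTR[k]: v
            for k, v in finding.items()
            if k in OSCAL_TO_ATTR}
```

In a real collector, this logic would live inside a custom processor so that every backend sees the same attribute names regardless of which tool produced the OSCAL data.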
@jflowers leads the cncf/tag-security OSCAL Norms project.
I have a few comments on the data sample you provided, which might be fundamental to the problem. I'll put aside for the moment the incorrect OSCAL structure; I am only looking at the data you are trying to convey. In the example, you are providing otel metrics, but similar metrics might already be defined by different authorities (FedRAMP, CSA STAR, etc.) and might need to be mapped or used as inputs. I hope this is all doable, hence the reason for calling on CNCF experts.
Thanks @iMichaela, what I provided is just an example to clarify my use case; there may be some errors, but please ignore them. :)
This is a good point. Yes, we can get the same data from different sources; I think that is why we need semantic conventions and data correlation to mitigate those issues.
And most likely, enforced control metrics will need native support in OSCAL or the use of a registry of extensions; otherwise tools might not know how to extract the information and use it (pass it as input, use it for the final AD, etc.) automatically. Just for keeping records together, here is the CSA/cloud-audit-metrics project (their own JSON schema to align with the
What is the proposed change to OSCAL? I am not clearly seeing the need to change OSCAL. This seems like an effort to develop a tool that will make use of assessment result data in OSCAL formats. Do these efforts belong in this repository, or should a separate repository be started to accomplish this? OSCAL is a set of data structures. OpenTelemetry is a set of tools for measuring the performance and behavior of software. The way I am understanding the discussion, it seems the way forward is to work telemetry outputs into the software applications that consume/create OSCAL-structured data, not to modify the OSCAL structures themselves.
@ogijaoh Thanks for the comments, I totally agree with you. I can see you are working on https://github.com/defenseunicorns/lula, and it can generate machine-readable OSCAL artifacts. It seems this could be used as a source for generating OSCAL data, and we would need to build an oscalreceiver to get that data. Comments? Thanks
@ogijaoh - You are absolutely right. To my understanding, the proposal, as clarified after some discussion (see above), is focusing on the ability to feed OSCAL assessment plan information into open telemetry, and the outputs of otel tools into OSCAL assessment results, allowing the software applications that consume/create OSCAL-structured data to consume the information.
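The second direction described here (otel tool outputs flowing into OSCAL-shaped assessment results) could be sketched as a small translation step. This is a toy illustration: the function names and the result shape are assumptions based on the simplified JSON earlier in this thread, not the actual OSCAL assessment-results model.

```python
# Toy sketch: wrap tool check outcomes in OSCAL-shaped assessment results.
# The shape mirrors the simplified JSON used earlier in this thread, not
# the real OSCAL assessment-results model; helper names are invented.

def check_to_result(control_id: str, passed: bool, detail: str) -> dict:
    """Translate one tool check outcome into a result entry."""
    return {
        "control-id": control_id,
        "status": "satisfied" if passed else "not satisfied",
        "findings": detail,
    }

def build_assessment_results(title: str, checks: list) -> dict:
    """Assemble a minimal assessment-results document from (id, ok, detail) tuples."""
    return {
        "assessment-results": {
            "metadata": {"title": title},
            "results": [check_to_result(cid, ok, detail)
                        for cid, ok, detail in checks],
        }
    }
```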
@gyliu513, I don't work for Lula, but I have worked with it. Using Lula as a use case, what content would you want created/accessed for OTEL? I have some familiarity with the processing of logs with Promtail, Loki, and Grafana (PLG). I'm not quite sure I understand your question, but I'll take a stab at describing the challenge as I see it to develop something like this. Please let me know if this is off the mark, or if OTEL provides other capabilities than the ones I am referencing in PLG. If Lula were going to give us logs, thought would need to be given to how Lula would generate these logs for the different capabilities Lula has. For instance, Lula's current output when validating system configuration/status against an expected configuration/status has information such as:
To use Loki / Grafana in the way I'm familiar with their use, Lula would have to output quite a few logs for each validation run in order to provide the full breadth of information it provides with its standard results output. Going away from logs for a minute...if the goal is to see a more visual display of what happened with a validation, or to see the latest validation results for a system, would it be possible to just write something that reads the assessment results file produced by Lula and displays the results? I don't know if this would be something done with a custom receiver or just by adding an assessment results file (or a repository of related OSCAL-structured files) as a data source, and then parsing that data source. |
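The "just write something that reads the assessment results file and displays the results" idea above can be sketched with the stdlib alone. This assumes the simplified assessment-results JSON shape used earlier in this thread, not Lula's actual output format.

```python
# Minimal sketch: read an assessment-results JSON file and print a
# per-control summary. Assumes the simplified JSON shape from this
# thread, not Lula's actual output format.
import json

def summarize(path: str) -> list:
    """Return one summary line per control from an assessment-results file."""
    with open(path) as f:
        doc = json.load(f)
    ar = doc["assessment-results"]
    lines = [f"{ar['metadata']['title']} (as of {ar['metadata']['last-modified']})"]
    for r in ar["results"]:
        lines.append(f"  {r['control-id']:<8} {r['status']:<20} {r['findings']}")
    return lines
```

The same parsing logic could back either a custom receiver or a simple Grafana data source that reads a repository of OSCAL-structured files.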
@iMichaela, copy that on allowing these conversations to start in this space. To your first statement, this is different than how I was thinking about the problem. I was considering the use of OpenTelemetry capabilities as a means for understanding what has happened with assessments (see my response to @gyliu513 in this comment: #2039 (comment)). If I understand you correctly, your interpretation of the above conversation is taking information in OTEL and creating OSCAL-structured content for consumption/use elsewhere. So not an attempt to understand the telemetry of tools in the OSCAL ecosystem, but rather translating content from OTEL formats into OSCAL-structured formats. Is that right? |
User Story
As an OSCAL user, I want to expose all of the OSCAL data in OTEL format and see all of the data via otel backends, like Grafana, etc.
Goals
Enable OSCAL to embrace the OTLP protocol and expose its data to different platforms.
Dependencies
No response
Acceptance Criteria
(For reviewers: The wiki has guidance on code review and overall issue review for completeness.)
Revisions
No response