Performance instrumentation for individual and overall constraint processing #314
Comments
@wandmagic I created this issue based on our discussion in standup today about more precise perf counters.
This would be really handy, especially as we scale up SSP data size.
How should this work? We need to get to some form of a spec we can implement.
Agreed. I was looking into instrumentation systems for Java applications.
We could just have a flag for --instrumentation in the CLI (or a flag to turn it off, --skip-instrumentation).
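As a rough illustration of the flag idea above, here is a minimal sketch. The class name `InstrumentationFlag` and the default-on behavior are assumptions for illustration, not part of any existing oscal-cli API; the actual flag handling would live in whatever CLI framework the tool already uses.

```java
// Hypothetical sketch: instrumentation defaults to on, and the
// "--skip-instrumentation" flag disables it. Not an existing oscal-cli class.
public final class InstrumentationFlag {
    private InstrumentationFlag() {
        // static utility; no instances
    }

    /** Returns false only when "--skip-instrumentation" appears in args. */
    public static boolean isEnabled(String[] args) {
        for (String arg : args) {
            if ("--skip-instrumentation".equals(arg)) {
                return false;
            }
        }
        return true;
    }
}
```

The opposite default (off unless `--instrumentation` is passed) would be the same shape with the test inverted; which default is right is part of what the spec needs to decide.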
Before extending SARIF data and reinventing the wheel, one very rough (not so granular) data source we could tap into (but currently do not) is the JUnit/Surefire reports we could store in GitHub or elsewhere, given we already use that plugin with Maven. That said, it only tells us, at the macro level, "I ran this test, which calls this other code across modules in one or more function calls," and nothing more granular, like I said. I have been researching this on and off all morning and found nothing very compelling about time measurement and profiling, but I will have to read up on this area. That said, if we could find a way, outside of m-j and oscal-cli code, to annotate timing runs in SARIF (since we know which code paths are used for tests) and flag function calls that exceed a threshold or deserve investigation, we may be onto something I think no one else is doing (open source or on the proprietary inner-source side, I'll have to ask; no one has ever hinted they do something like that, so we would be trendsetters).
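To make the threshold idea above concrete, here is a minimal sketch of per-constraint wall-clock timing with a reporting threshold. The names (`ConstraintTimer`, `hotspots`, the constraint id string) are hypothetical and not part of metaschema-java or oscal-cli; the `hotspots()` output is the kind of record that could later be attached to SARIF results as notes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Hypothetical sketch, not an existing oscal-cli API: wraps each constraint
// evaluation to record its wall time, then surfaces entries that exceed a
// configured threshold as candidates for reporting (e.g., as SARIF notes).
public final class ConstraintTimer {
    /** One timing sample: which constraint ran, and for how long. */
    public record Timing(String constraintId, long nanos) {}

    private final List<Timing> timings = new ArrayList<>();
    private final long thresholdNanos;

    public ConstraintTimer(long thresholdNanos) {
        this.thresholdNanos = thresholdNanos;
    }

    /** Runs the evaluation, recording its duration even if it throws. */
    public <T> T time(String constraintId, Supplier<T> evaluation) {
        long start = System.nanoTime();
        try {
            return evaluation.get();
        } finally {
            timings.add(new Timing(constraintId, System.nanoTime() - start));
        }
    }

    /** Timings at or above the threshold; candidates for investigation. */
    public List<Timing> hotspots() {
        return timings.stream()
                .filter(t -> t.nanos() >= thresholdNanos)
                .toList();
    }
}
```

Summing all samples would give the overall model and external constraint processing time the user story asks for, while the per-id records locate individual hotspots.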
User Story
As a developer of Metaschema-enabled software, models, and data, I would like performance instrumentation to measure individual constraints and overall model and external constraint processing to determine hotspots, performance bottlenecks, and areas for improvement.
Goals
Dependencies
No response
Acceptance Criteria
Revisions
No response