Add simple API for generating testharnesses inline #4629
This is almost a case class. Any reason to not make it one?
I originally had it that way, but according to @jackkoenig that is a problem for binary compatibility. I don't fully understand the reason for that, I'll admit.
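For illustration, here is a hedged sketch (the class and field names are invented, not the PR's code) of the usual alternative: a plain final class with an explicit factory and `withX` helpers. A case class bakes the synthesized `apply`/`unapply`/`copy` signatures into the public binary interface, so adding a field later breaks existing clients; a plain class can grow new fields and helpers without touching the old signatures.

```scala
// Hypothetical sketch, not the PR's actual code: a plain class with an
// explicit factory instead of a case class.  New fields and new helpers
// can be added later without changing any existing signature.
final class TestHarnessOptions private (
  val resetCycles: Int,
  val timeoutCycles: Int
) {
  // Explicit "copy"-style helpers, added one at a time as needed.
  def withResetCycles(n: Int): TestHarnessOptions =
    new TestHarnessOptions(n, timeoutCycles)
  def withTimeoutCycles(n: Int): TestHarnessOptions =
    new TestHarnessOptions(resetCycles, n)
}

object TestHarnessOptions {
  // Factory with defaults; overloads can be added without breaking clients.
  def apply(): TestHarnessOptions = new TestHarnessOptions(1, 1000)
}
```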
Are clock and reset sufficient? @fabianschuiki has brought up some points about: (1) knowing whether or not the test is done and (2) knowing what the exit status is.
Does this motivate a moderately more heavyweight interface of:
Alternatively, maybe there are two kinds of tests:
Note: if the `stop` is dropped from (1), then (1) becomes purely a subset of (2).
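The inline snippets referenced above are not reproduced here; purely as a hedged illustration (the bundle and port names are assumptions, not the PR's API), the two shapes under discussion might look something like this in Chisel:

```scala
import chisel3._

// (1) Minimal: the test only receives clock and reset and terminates
//     itself from the inside, e.g. via a stop/assertion.
class MinimalTestIO extends Bundle {
  val clock = Input(Clock())
  val reset = Input(Bool())
}

// (2) Heavier: the test additionally reports completion and a result,
//     so a generic driver can poll it from the outside.
class ReportingTestIO extends Bundle {
  val clock    = Input(Clock())
  val reset    = Input(Bool())
  val done     = Output(Bool())
  val exitCode = Output(UInt(8.W))
}
```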
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Yes, great point. This was my main motivation for making it easy for users to override the testharness generator and the result from each test. I imagine each project might have its own contract with a test driver to communicate this information. I agree that `done`/`exitCode` is the obvious way to do that, and more or less required for a sensible unit-testing setup.

Since I do think most projects will settle on some testharness-testdriver contract to communicate done/pass, I think it comes down to whether we prefer to ship an example implementation of this by default, or expect users to implement their own if they want it.

I landed on not adding done/pass because it felt less "presumptuous" of the user's simulation environment. But I'm open to arguments that that comes at the cost of usability.
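As a hedged sketch of what such a project-specific contract could look like (the module and port names are assumptions, nothing this PR defines), a project might wrap a clock/reset-only test in its own harness and expose done/pass for its driver:

```scala
import chisel3._
import chisel3.util.Counter

// A trivial self-checking "test" standing in for a real one.
class ExampleSelfCheckingTest extends Module {
  val io = IO(new Bundle { val ok = Output(Bool()) })
  val (value, _) = Counter(true.B, 16)
  io.ok := value < 16.U // trivially true placeholder check
}

// Project-specific contract: run the wrapped test for a fixed number of
// cycles, then report completion and whether the check held throughout.
class ProjectTestHarness extends Module {
  val io = IO(new Bundle {
    val done = Output(Bool())
    val pass = Output(Bool())
  })

  val test = Module(new ExampleSelfCheckingTest)

  val (_, finished) = Counter(true.B, 100)
  val doneReg   = RegInit(false.B)
  val failedReg = RegInit(false.B)
  when(finished)     { doneReg   := true.B }
  when(!test.io.ok)  { failedReg := true.B }

  io.done := doneReg
  io.pass := doneReg && !failedReg
}
```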
The reason why I'm leaning towards it is that we want to have the ability to do both test-discovery and test-execution for any test defined this way. The part which is missing is that we can do test-discovery with some small changes to FIRRTL to add a `simulate` statement/region similar to how we have a `formal` statement/region (see: https://github.com/chipsalliance/firrtl-spec/blob/main/spec.md#formal-unit-tests). Essentially, these tests just need a marker to indicate that they are tests and not just modules.

However, while `formal` has enough information to also tell how to run the test (because a formal test is both simpler and the formal construct is powerful enough to capture stimulus via `assume`), `simulate` does not have enough information to run the test. Even in the simplest formulation of just having `clock`/`reset`, there are assumptions about how to drive the clock and the reset.

It's kind of like defining an entry point for a program: C/C++'s `int main(int argc, char **argv)` (or the equivalent in other languages) comes to mind, as does the function calling convention.

FWIW, `stop` does include an exit code, so perhaps that is actually sufficient to use with only the `clock`/`reset` formulation. That actually sounds good.
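To make that concrete, here is a hedged Chisel sketch (module name and constants are assumptions) of a test that needs only its implicit clock and reset and terminates itself via `stop()` and assertions:

```scala
import chisel3._
import chisel3.util.Counter

// The test checks a property every cycle and ends the simulation with
// stop() when the run is over, so a driver only has to toggle the clock,
// sequence reset, and watch for the simulation to finish.
class SelfTerminatingTest extends Module {
  val (cycle, done) = Counter(true.B, 32)

  // Example property: a toggling register tracks the cycle parity.
  val parity = RegInit(false.B)
  parity := !parity
  assert(parity === cycle(0), "parity must track the cycle count")

  // Success path: stop() carries exit code 0; an assertion failure is
  // assumed to surface as a nonzero exit in the simulation environment.
  when(done) { stop() }
}
```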
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Makes sense to me. If this is not blocking feedback, I will leave it for now.
I think it is a reasonable expectation that these testharness modules are not the top when we actually compile the simulation; there will be a test-driver layer above that generating clocks and resets, writing waves, etc. So, I think what we are looking for is an ABI between the test-driver and the testharness.

Perhaps testharnesses that can be `simulate`d must implement this ABI. At which point, circt-test could have its own intrinsic test-driver that works well with Verilator and has sane/configurable logic for clock/reset generation.
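One hedged way to write such an ABI down on the Chisel side (the trait and port names are assumptions, not an agreed interface) is a small trait that harnesses mix in, with the driver-side obligations captured as comments:

```scala
import chisel3._

// A harness that mixes this in promises to need nothing but a clock and a
// reset; everything else (clock generation, reset sequencing, waveform
// dumping, turning stop/assertion exit codes into a verdict) is the
// driver's job, e.g. an intrinsic circt-test driver built on Verilator.
trait SimulatableHarness { this: RawModule =>
  // The only ports the driver is required to know about.
  val clock: Clock = IO(Input(Clock()))
  val reset: Bool  = IO(Input(Bool()))
}

// Example harness implementing the ABI: all test logic runs under the
// externally supplied clock/reset and terminates itself via stop().
class ExampleHarness extends RawModule with SimulatableHarness {
  withClockAndReset(clock, reset) {
    val (_, done) = chisel3.util.Counter(true.B, 8)
    when(done) { stop() }
  }
}
```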