Add --tr-testrun-refs option #155
Comments
Hi @mickyJNST
As a workaround, you can create the necessary test run directly and then run pytest with the option that points it at that existing run.
Yeah, tried this but ran into other issues (as described in the "Describe alternatives you've considered" section of the original post).
Your testrail.cfg should look like this:
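A minimal sketch of such a file, assuming the [API] section layout the plugin reads its credentials from; the section name and all values here are illustrative, not taken from the thread:

```ini
[API]
; Connection details for your TestRail instance (placeholders)
url = https://yourorg.testrail.io/
email = ci-user@example.com
password = your-api-key
```

The id of the run created beforehand would then be supplied on the command line rather than in this file.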
Then you can run your tests using the following command:
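Something along these lines, as a sketch; it assumes the plugin's --testrail, --tr-config and --tr-run-id options, and 1234 stands in for the id of the run created beforehand:

```shell
py.test --testrail --tr-config=testrail.cfg --tr-run-id=1234
```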
Thanks for the explanation @voloxastik. Unfortunately your proposed solution doesn't fit with how we have our workflow set up, as we're trying to do everything via the testrail.cfg file and we don't want to have to use a combination of file and pytest command-line options. In any case, we have a very roundabout workaround in place that will do for now, but we would like to see support for the test run refs parameter added.
@mickyJNST
@voloxastik I want to associate the test run with Jira tickets. This is supported by the TestRail API via the refs parameter.
@mickyJNST
Is your feature request related to a problem? Please describe.
TestRail allows for integration with Jira via the refs parameter in test runs. pytest-testrail doesn't currently support setting the refs param (which is just a comma-delimited string).

Describe the solution you'd like
Add a --tr-testrun-refs option to the existing list of pytest-testrail options.
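Hypothetically, the new option could sit alongside the existing --tr-testrun-* options; a sketch of the proposed usage (the refs flag is the feature being requested here, not something the plugin supports today; --tr-testrun-name is assumed from the plugin's existing option set, and the ticket ids are placeholders):

```shell
py.test --testrail --tr-config=testrail.cfg \
    --tr-testrun-name="Nightly regression" \
    --tr-testrun-refs="JIRA-123,JIRA-456"
```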
Describe alternatives you've considered
[Third edit :-)] To work around this limitation, we tried creating the test run (including the refs parameter) in TestRail directly via the TestRail API (before invoking our pytest-based tests), and then writing the run_id to the testrail.cfg file using the test run id we get back from the API. Our testrail.cfg file looks like this:
However, this workaround didn't work. We get the following error at the end of running the pytest suite:
Also, adding the project_id resolves the error, but then results in a new run being created instead of the specific run being used:
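For reference, the create-the-run-first step described above corresponds to TestRail's add_run endpoint. A rough sketch of it using the requests library; the URL, credentials and ids are placeholders, and refs carries the comma-delimited Jira keys:

```python
import requests

BASE = "https://yourorg.testrail.io/index.php?/api/v2"
AUTH = ("ci-user@example.com", "your-api-key")
PROJECT_ID = 2

# Create the run up front, including the refs field, so pytest-testrail can
# publish into it later instead of creating its own run.
resp = requests.post(
    f"{BASE}/add_run/{PROJECT_ID}",
    auth=AUTH,
    json={
        "suite_id": 3,
        "name": "Nightly regression",
        "include_all": False,          # start the run empty
        "refs": "JIRA-123,JIRA-456",   # comma-delimited Jira issue keys
    },
)
resp.raise_for_status()
run_id = resp.json()["id"]  # e.g. written into testrail.cfg for the pytest step
```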
Furthermore, even if the above approach did work, I think we would run into another issue. I think pytest-testrail assumes the test run already has test cases set (with status "Untested") when publishing results. However, using the above approach means we are creating an empty test run (because we are trying to avoid building a mechanism to determine all of the test case ids in our test suite). If this assumption is true, then publishing the results would fail, as the tests wouldn't yet exist in the test run in TestRail. To handle this use case, pytest-testrail could update the test run with the known test cases at the start of the publishing process if the test run is empty (via the update_run endpoint).