-
Hi everyone, I wanted to start a discussion organized around user stories for the work we're doing in jupyter/accessibility/testing/jupyterlab. I've labeled areas that need more definition or attention with NEEDS ATTENTION.
-
Thanks for starting this! I agree with the groups of users you've described so far, and I'm not finding any gaps in the workflows as far as I can tell. I feel least clear on the GitHub Actions section, but given that's the area you labeled NEEDS ATTENTION, I'm guessing you know that. I can only offer some ideas on the output report question. There are a few options for GitHub Actions reporting that I've seen in the wild, and I think the type and amount of output we get from these tests will determine which approach is optimal. I've seen:
Is that helpful? I know it doesn't cover all our use cases. I would like to add two more potential user stories. Feel free to tell me if these are out of your intended scope, though, because they are less development-focused. As a maintainer, I want to:
As a JupyterLab user, I want to:
-
Thanks Isabela for highlighting the issue of permissions. When we generate an accessibility test report against a specific commit (or PR), we need to make sure that the results are maximally available. Based on the options you listed, it seems to me that posting the test report as a comment on the PR is probably the right way to go (I've put a rough sketch of what that might look like at the end of this comment).

I'm glad you added user story (5) for maintainers. I think we're already thinking along those lines. You've probably heard @tonyfast mention nightly runs and cron jobs. If we run the tests regularly and put the results somewhere that maintainers know and care to look, I think that would address this user story.

I also really like user story (6) because it will force us to think about the accessibility of the outputs of the automated tests. In other words, we'll be generating reports based on our automated tests, and we want to eliminate barriers to reading those reports. One thing I should note, though, is that the latest test report will probably never be enough for the user in your story to fully achieve their goals. I imagine the accessibility statement to be the better tool for them. I think there's some connection between the automated tests and the accessibility statement, but we haven't fully worked that out yet. At a minimum, I think we should link to the latest test report from the accessibility statement. What are your thoughts?
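For concreteness, here's a rough, untested sketch of what that reporting step might look like using Octokit. The owner/repo values and the report path are placeholders, not our actual setup:

```ts
// Untested sketch: post a generated accessibility report as a PR comment.
import { readFileSync } from "fs";
import { Octokit } from "@octokit/rest";

async function postReport(prNumber: number): Promise<void> {
  const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

  // Hypothetical path to the report produced by the test run.
  const report = readFileSync("a11y-report.md", "utf8");

  // The GitHub API treats pull requests as issues for commenting purposes.
  await octokit.rest.issues.createComment({
    owner: "jupyter",       // placeholder
    repo: "accessibility",  // placeholder
    issue_number: prNumber,
    body: report,
  });
}
```

The appeal of this route is that anyone who can see the PR can see the results, with no extra permissions needed.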
-
Replying to @gabalafou
I agree. But I think this is the most specific description we will have any time soon, since accessibility statements, as far as I know, are more high-level. For example, they usually give an overview while calling out known failures or areas outside the maintainers' control, but you don't get feedback on any details. While I personally expect people to check an accessibility statement before test reports, I still think we should consider making the reports approachable for more than just the maintainer/contributor community.
That sounds good to me. The draft statement currently references the tests, so they even have a place to link.
-
User stories: jupyter/accessibility/testing/jupyterlab
As a developer, I want to:
As a reviewer, I want to:
As a maintainer, I want to:
- … Jupyter Releaser so that test results can be added to release notes without having to do any manual steps
We have at least three tools that we can use to address these user stories:
Gitpod
Gitpod is intended for developers. As such, it only needs to address user stories (1) and (2).
For (1), something like the following should work:
For (2), a developer would:
- Add a test() block to an existing test file
- Run yarn test
- Debug with console.log, which should work (see the sketch below)
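Here is a minimal sketch of what such a test() block might look like, assuming the suite uses @playwright/test with the axe-core integration (@axe-core/playwright); the URL, test name, and expectation are placeholders rather than our actual tests:

```ts
import { test, expect } from "@playwright/test";
import AxeBuilder from "@axe-core/playwright";

// Hypothetical test; assumes a JupyterLab instance is already being served.
test("launcher has no detectable accessibility violations", async ({ page }) => {
  await page.goto("http://localhost:8888/lab"); // placeholder URL

  // Scan the rendered page with axe-core and collect any violations.
  const results = await new AxeBuilder({ page }).analyze();
  expect(results.violations).toEqual([]);
});
```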
GitHub Actions
GitHub Actions is intended for developers, reviewers, and maintainers. It needs to address user stories (1), (3), and (4).
For (1) and (3), the steps would look something like:
NEEDS ATTENTION: For (4), we need to do more research.
Local Dev
For both (1) and (2), the developer would first need to clone and install our testing utils.
Assuming the developer already has Node.js and Yarn installed:
Then, for (1):
For (2), after installing the dependencies (yarn install and npx playwright install), the steps are the same as for Gitpod:
- Add a test() block to an existing test file
- Run yarn test
However, debugging will be much easier locally because the developer will be able to use the Playwright debugger:
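As a hedged example (the exact invocation will depend on our setup), Playwright's inspector can be opened either by setting the PWDEBUG=1 environment variable when running the tests, or by pausing inside a test:

```ts
import { test } from "@playwright/test";

// Hypothetical test used only to illustrate the debugger entry point.
test("step through an interaction", async ({ page }) => {
  await page.goto("http://localhost:8888/lab"); // placeholder URL

  // Opens the Playwright Inspector so the remaining steps can be
  // stepped through one at a time while inspecting the page.
  await page.pause();
});
```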