Experiment with test design, using testable statements from manual core-aam tests #15

Open. Wants to merge 4 commits into base: acacia-test-testdriver.

Conversation

spectranaut (Collaborator) commented Sep 23, 2024

I'm using core-aam/acacia/testable-statements/ as a temporary directory for these tests. You can see the original "bag of properties" tests in the "property-bag" directory.

These tests use an object that defines multiple assertions: all of the assertions contained in a single table of the core-aam role and property mapping tables. The assertions are almost identical to the assertions in the "manual" core-aam tests (for example, core-aam/blockquote-manual.html). This PR also adds the logic to execute those testable statements on the back end against the API and send back an array with "PASS" or "Fail: <errormsg>" for each assertion.

<div id=test role=blockquote>quote</div>
<script>
AAMUtils.verifyAPI(
  'role=blockquote',  /* test name */
  'test',             /* id of element to test */
  {
    "Atspi" : [
       [
          "property", role", is", block quote" '
    ],
    "AXAPI" : [
       [ property", AXRole", is", AXGroup" ',
       [ property", AXSubrole", is", <nil>" '
    ],
    "IAccessible2" : [
       [ property", role", is", IA2_ROLE_BLOCK_QUOTE" ',
       [ property", msaaRole", is", ROLE_SYSTEM_GROUPING" '
    ],
    "UIA" : [
       [ property", ControlType", is", Group" ',
       [ property", LocalizedControlType", is", blockquote" '
    ]
  }
);
</script>

Here is the result of this test: https://spectranaut.github.io/examples/wpt/role_blockquote_test.html

Right now the test has one subtest for each API, regardless of whether or not that API applies to the platform the test is being run on. The test above was run on Linux, so only the first subtest -- the test of the Linux API Atspi -- is useful. The other subtests aren't actually run (you can't run a mac accessibility API test on Linux), which you can see by expanding them: they report "No assertions run".

The design is that the test is sent to the backend, and the backend returns one of the following (a sketch of how the front end might consume this contract follows the list):

  • undefined if the API of the subtest is not valid for the current operating system.
  • An array with as many results as there are assertions for this particular subtest; each result is either the string "Pass" or a useful error message.
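
Here is a minimal sketch of front-end handling under that contract. The backend call getAccessibilityAPIResult is hypothetical (this PR may name and shape it differently); promise_test and assert_equals are standard testharness.js functions.

function runSubtest(testName, api, elementId, apiAssertions) {
  promise_test(async () => {
    // Hypothetical backend call: resolves to undefined when `api` does not
    // apply to the current operating system, otherwise to an array with one
    // result string per assertion.
    const results = await getAccessibilityAPIResult(api, elementId, apiAssertions);
    if (results === undefined) {
      return; // nothing to assert on this platform ("No assertions run")
    }
    for (let i = 0; i < results.length; i++) {
      // Anything other than "Pass" is the backend's error message.
      assert_equals(results[i], "Pass", `assertion ${i}: ${results[i]}`);
    }
  }, `${testName} (${api})`);
}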

Why this awkward design that includes a few meaningless subtests each time we run the test?

tl;dr: Mostly, to be able to publish to wpt.fyi with no changes to wpt.fyi. wpt.fyi assumes that every test and every subtest is run on every browser/operating system pair, but our tests are operating system specific.

For context, and to understand the alternative possible test designs, you need to know that there is not a 1-1 mapping between APIs and operating systems (see the lookup table sketched after this list):

  • macOS has one API: AX API
  • Linux has one API: Atspi
  • Windows has two APIs: UIA and IAccessible2
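
Written out as a simple lookup table (a sketch; the API key strings match the ones in the verifyAPI example above, and the OS key names are illustrative):

const APIS_BY_OS = {
  "mac":     ["AXAPI"],
  "linux":   ["Atspi"],
  "windows": ["UIA", "IAccessible2"],
};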

So, other options for test design:

  1. Tests run one subtest for each relevant API: In this scenario, we need to know which operating system we are on in the front-end JavaScript (see the sketch after this list). Additionally, this would result in a different number of subtests across operating systems.
  2. Create tests that are operating system-specific: This would involve updating the harness to only run the tests appropriate for that operating system.
  3. Test contains only one subtest: In this case, within that single subtest, all assertions are run for all APIs relevant to the operating system.
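
To give a sense of what (1) asks of the front end, here is a rough sketch of OS detection in JavaScript; navigator.platform is deprecated and unreliable, which is part of what makes this option unattractive:

// Rough OS detection in front-end JavaScript (a sketch, not part of this PR).
const platform = navigator.platform.toLowerCase();
const os = platform.includes("mac") ? "mac"
         : platform.includes("linux") ? "linux"
         : "windows";
// Using the APIS_BY_OS lookup table sketched above, run only the
// subtests for APIs relevant to this OS:
for (const api of APIS_BY_OS[os]) {
  // ... run the subtest for `api` ...
}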

For either (1) or (2), we would have to update wpt.fyi:

  • For (1): For each test, you would only see results for one operating system -- the irrelevant OSes won't have any data in the column.
  • For (2): For each test file, there would be different subtests (including a different number of subtests) depending on the OS.
    • In the current WPT subtest view, the change would be similar to the above -- the subtest view would show "boxes" with "n/a", and in the test view, the count of total subtests would vary across columns.

For (3): Unfortunately, this direction is pretty much a no-go, because Windows has two accessibility APIs, and a browser might implement one or the other (or neither). We need different tests or different subtests for these different APIs.

@spectranaut spectranaut changed the title Experiment with test design Experiment with test design, using testable statements from manual core-aam tests Sep 24, 2024