Move issue severity work into main guidelines document #661

Open · wants to merge 10 commits into base: main
34 changes: 30 additions & 4 deletions guidelines/index.html
@@ -397,7 +397,7 @@ <h4>Conditional tests</h4>
</aside>

</section>
<section>
<section>
<h4>Conventional tests</h4>
<p>Conventional tests evaluate the results within a particular context. The tests are still unconditional or conditional tests, but the context dictates:
</p><ul><li>which unconditional or conditional tests are used, or</li>
@@ -440,12 +440,38 @@ <h4>Procedural tests</h4>

<p class="ednote">The requirements for what would be evaluated for procedural tests are to be determined.</p>
</section>

<section>
<section>
<h3>Technology specific testing</h3>
<p>Each outcome includes <a>methods</a> associated with different technologies. Each method contains <a>tests</a> and <a>techniques</a> for satisfying the outcome. The outcome is written so that testers can test the accessibility of new and emerging technologies that do not have related methods based solely on the outcome.</p>
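The outcome/method/test hierarchy described above can be sketched as a small data structure. This is a hypothetical illustration only; the field names and the example content are not defined by WCAG 3.

```python
# Hypothetical sketch of the outcome -> methods -> tests/techniques hierarchy.
# Field names and example content are illustrative, not part of WCAG 3.
outcome = {
    "name": "Text Alternative Available",
    "methods": [
        {
            "technology": "HTML",
            "tests": ["img elements have a non-empty alt attribute"],
            "techniques": ["Using alt attributes on img elements"],
        },
    ],
}

def methods_for(outcome, technology):
    """Return the methods for a given technology, if any exist yet."""
    return [m for m in outcome["methods"] if m["technology"] == technology]
```

A technology with no matching method (for example, an emerging one) returns an empty list; per the text above, testers would then evaluate against the outcome wording directly.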
</section>
</section>

<section>
<h3>Critical Issues</h3>

<div class="ednote">
<p>This section is exploratory.</p>
<p>Severity ratings could contribute to scoring and prioritization. This is a potential replacement for the way the A/AA/AAA levels represented severity, incorporating a mechanism to evaluate severity as part of testing.</p>
<p>Outstanding questions that need to be addressed include:</p>
<ol>
<li>What to do with non-critical issues?</li>
<li>How best to assign severity, particularly if testers have different ideas on what is critical?</li>
<li>How do we incorporate context/process/task? Is that part of scoping, or issue severity? Both are important to the end result.</li>
<li>If included, how will situations where issue severity depends on context be handled?</li>
<li>Can the matrix inform designation of functional categories? For example, the <a href="https://www.w3.org/WAI/GL/WCAG3/2021/outcomes/text-alternative-available">Text Alternative Available outcome</a>.</li>
<li>How will issue severity fit into levels? For example:<ul>
<li>"Bronze" could be an absence of any critical or high issues;</li>
<li>"Silver" could be an absence of any critical, high, or medium issues.</li>
</ul>
</li>
<li>How to account for cumulative issues becoming critical?</li>
<li>Would another approach be more effective, for example assigning critical issues after testing is complete, based on task or type of task rather than by test?</li>
</ol>
</div>
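The Bronze/Silver idea from the editor's note above can be sketched as a simple check on the severities of open issues. This is a hypothetical sketch of one exploratory option, not a defined WCAG 3 mechanism; the severity names and function are illustrative.

```python
# Hypothetical sketch: derive a provisional level from issue severities,
# following the exploratory bullets in the editor's note.
# "Bronze" = no critical or high issues; "Silver" = no critical, high, or medium issues.
def provisional_level(issue_severities):
    """Return "Silver", "Bronze", or None for a list of severity strings."""
    present = set(issue_severities)
    if present & {"critical", "high"}:
        return None  # neither level is met
    if "medium" in present:
        return "Bronze"
    return "Silver"
```

Under this sketch, a page with only low-severity issues would be provisionally Silver, while a single critical issue would block both levels.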

<p>Tests will include critical issues: each test will be assigned a severity category, and some tests will be flagged as causing a critical issue. Examples of critical issues in tests are at <a href="https://www.w3.org/WAI/GL/WCAG3/2022/methods/functional-images/#tests-button">Text Alternative Available</a> and <a href="https://www.w3.org/WAI/GL/WCAG3/2022/methods/text-equiv/#tests-button">Translates Speech And Non-Speech Audio</a>.</p>
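Assigning a severity category to each test, as described above, could be modeled as a record on the test result; failures of critical-severity tests are then the critical issues. The record shape and names here are hypothetical, for illustration only.

```python
from dataclasses import dataclass

# Hypothetical record: a test outcome carrying its severity category,
# as suggested by the paragraph above. Names are illustrative.
@dataclass
class TestResult:
    test_id: str
    severity: str  # e.g. "critical", "high", "medium", "low"
    passed: bool

def critical_issues(results):
    """Return failed results whose test is flagged as critical severity."""
    return [r for r in results if not r.passed and r.severity == "critical"]
```

Only failures count as issues: a passing test flagged critical contributes nothing to the critical-issue list.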
</section>
</section>

</section>

