What is ‘use case’ testing?

In a nutshell, use case testing is real-world testing. That is, it seeks to simulate, as closely as possible, a "real" user's experience with various facets of a client's product, whether it be their website, mobile app, or device. For core scenarios integral to the use of the product, use case testing should be performed in addition to automated and manual testing.

How is use case testing performed?

Use case testing is performed by users with disabilities using assistive technology, accessibility features, or other strategies that a person with that disability would commonly use.

How are use case tests scored?

Each use case is scored objectively. Level Access uses a one-through-five scoring system to rate individual use cases, along with an overall average score: five indicates no accessibility issues, and one indicates severe problems that pose a barrier to access. While users can comment on efficiency and effectiveness in the use case notes, the intent of the score is to document the presence or absence of barriers and issues that would impact the user's ability to access the service. The use case gets at what the impact to the user is, and how it might affect the user's ability to carry out a given task. Code examples and solutions are not provided in the use case notes; instead, the user documents a description of the challenges, or lack thereof.
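
As a rough illustration only, a use case result and the overall average might be captured in a structure like the one below. The field names and shape are assumptions for the sake of the example, not Level Access's actual schema or tooling:

    // Hypothetical record for a single use case result. Field names are
    // illustrative assumptions, not Level Access's actual schema.
    interface UseCaseResult {
      task: string;               // e.g., "deposit a check using mobile app"
      score: 1 | 2 | 3 | 4 | 5;   // 5 = no accessibility issues, 1 = severe barrier
      notes: string;              // the tester's description of any challenges
    }

    // The overall score is the average of the individual use case scores.
    function overallScore(results: UseCaseResult[]): number {
      const total = results.reduce((sum, r) => sum + r.score, 0);
      return total / results.length;
    }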

How do you select what to test?

Use cases are selected from core tasks that are integral to the use of a site or product. Each of these core scenarios can be broken down into more granular use cases as appropriate and to stay within budget. Error states and alternative paths should also be documented and tested as part of each use case.

Retail / e-Commerce

  • create a new account
  • log into the site or mobile app
  • search for and investigate various core products within their catalog
  • add products to the shopping cart
  • check out
  • view order status and track shipment

Financial Services

  • log into account to view balance
  • set up a new bill and pay it online
  • deposit a check using mobile app
  • complete an application for a personal loan
  • make an appointment to talk to a banker

Telecommunications

  • explore TV, Internet and phone options
  • customize those options
  • pay bills
  • schedule a service appointment

Travel & Hospitality

  • select destinations/accommodations
  • compare prices
  • purchase add-ons
  • pay for bookings
  • redeem loyalty program rewards

Education

  • apply for admission
  • register for courses
  • navigate online textbooks
  • view grades
  • create/complete assessments
  • interact within discussion boards

State & Local Government

  • register to vote
  • make an appointment at the Department of Motor Vehicles
  • pay a parking ticket online
  • request services or accommodations
  • view government meetings/minutes

Now that we have an idea of what different industries may wish to test using the use case methodology, the next question is:

Where does use case testing fit in?

Level Access employs an end-to-end approach to accessibility testing, remediation, and policy. It is beyond the scope of this blog post to outline all of the firm's offerings, but placing use case testing in context within the accessibility audit methodology helps show where it fits.

Automated testing is just that: automated. A few examples of automated testing could include:

  • Spidering: Spidering technology can be used to "walk" through a site's various web pages. As accessibility-related mistakes are found in the code, they are logged for further review.
  • In-page Testing: In this scenario, a script is inserted on the page, and the page is automatically tested on load. Then, as the user interacts with the page, newly revealed content is tested automatically (a minimal sketch of this approach follows this list).
  • Behavior-Driven Development (BDD) and Test-Driven Development (TDD): These can be used to automatically test certain core tasks and pathways through a site, but they do not address all areas of a user's experience. For example, they can verify that certain accessibility information is present when a key is pressed on a control or when a user action occurs, but they cannot describe the experience with a given assistive technology on the page holistically.
  • Providing specific URLs to test: This is more targeted than general "spidering" because only a specific list of web addresses is tested.
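
As a minimal sketch of the in-page approach mentioned above, the snippet below uses the open-source axe-core engine purely for illustration; it is not Level Access's tooling, and the dialog-opened event is a hypothetical application-specific event:

    import axe from 'axe-core';

    // Minimal sketch of in-page testing: run the engine on load, then
    // re-run it whenever the user reveals new content.
    async function auditPage(): Promise<void> {
      const results = await axe.run(document);
      for (const violation of results.violations) {
        // Each violation carries a rule id, a description, and the list
        // of affected nodes, which can be logged for further review.
        console.log(violation.id, violation.description, violation.nodes.length);
      }
    }

    window.addEventListener('load', () => void auditPage());
    // Hypothetical app-specific event signaling newly revealed content:
    document.addEventListener('dialog-opened', () => void auditPage());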

On the surface, the examples above may seem like the best possible ways to test for accessibility: automation keeps costs down, minimizes human error, can be performed extremely quickly and is almost infinitely scalable with the right tools. So, are there any drawbacks? Indeed, there are!

What are the drawbacks to automated testing?

It is quite possible to write code that, on the surface, meets accessibility standards but is totally inaccessible or has very limited usability. Alternative text and form labels are common examples: random numbers and letters can be placed into alternative text attributes, form fields, and so on without being caught by today's automated testing. While machine learning shows progress in this area, some aspects of testing still require a human. To have a truly complete view of the accessibility of your site or application, manual and use case (human) testing must be an integral part of your audit plan.
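
To make the alternative text example concrete, here is a deliberately simplified sketch of the kind of presence check automation performs (illustrative only, not any particular tool's implementation). Because it only verifies that alt text exists and is non-empty, meaningless values pass:

    // Both images technically "have alternative text," so a naive
    // presence check passes them both; only a human can judge whether
    // the text actually describes the image.
    const goodImg = '<img src="jeans.jpg" alt="Dark-wash skinny jeans, front view">';
    const badImg  = '<img src="jeans.jpg" alt="x7Kq93b">'; // present, but meaningless

    // Simplified presence check of the kind automated tools perform.
    function hasAltText(html: string): boolean {
      const match = html.match(/alt="([^"]*)"/);
      return match !== null && match[1].trim().length > 0;
    }

    console.log(hasAltText(goodImg)); // true
    console.log(hasAltText(badImg));  // true -- the barrier goes undetected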

The human side of Level Access’s testing methodology comes in two categories: manual testing and—you guessed it—use case testing.

Manual testing is when a person reviews items that cannot be tested automatically, or potential issues that were detected automatically. Sometimes automated tools flag false positives, and a manual review is needed to determine whether the issue really is an accessibility issue. Manual testing may also involve code inspection, tool-assisted review, or testing using only the keyboard.
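
Some automated engines make this handoff to a human explicit. The sketch below again uses the open-source axe-core engine for illustration: its results include an "incomplete" bucket of checks the engine could not resolve on its own, which is exactly the kind of queue a manual tester works through:

    import axe from 'axe-core';

    // Sketch of the handoff from automation to manual review: axe-core
    // reports checks it could not decide in an "incomplete" bucket,
    // which a human tester then confirms or dismisses.
    async function queueForManualReview(): Promise<void> {
      const results = await axe.run(document);
      for (const item of results.incomplete) {
        console.log(`Needs human review: ${item.id} (${item.nodes.length} node(s))`);
      }
    }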

Another part of manual testing involves going through a site or application using one or more assistive technologies. These include, but aren't limited to, voice dictation, switch controls, screen magnification/high contrast, and screen readers.

What is the difference between manual testing and use case testing?

Manual testing does not necessarily have to be performed by someone with a disability; manual testers will often not be native users of assistive technology. That is, someone manually testing using a screen reader may not have a visual disability, so they will be testing in a very “clinical” way, for lack of a better term. To clarify, this type of testing is not scenario-based. The manual tester will just “walk” through the page or screen (or “module” in Level Access terminology), noting which best practices were violated, where the violation was located and why the violation occurred. In short, manual testing may catch individual component issues but doesn’t provide a full view of the task.

Historically, use case testing has been conducted by a native user of a particular assistive technology, i.e., someone who is deeply familiar with that technology and its application and function. When this technique is used, the human element is put back into the testing and validation process. For instance, a manual tester would note a violation for the lack of alternative text on a photograph of the latest skinny jeans. A use case tester would note that they would not buy the skinny jeans because they did not know exactly what the color palette was. If other images that also lacked alternative text were present (showing an entire outfit, for example), a non-visual user would not receive this information, decreasing the odds of their buying additional accessories.

As you can see, use case testing can provide “color” to black-and-white manual testing, which brings us to our final question:

Is use case testing right for your situation?

Frankly, each client's situation is unique; in fact, clients who engage with Level Access more than once will have different needs each time. However, there are some guidelines you can use to help determine whether you should include use case testing as part of your engagement:

Accessibility journey:

  • How far along the path is your product?
  • Is this the initial release?
  • Has it been tested at all for accessibility and compliance?
  • Have remediations been done?

If you answered "no" to at least two of these questions, then Level Access would most likely recommend including use case testing in the mix. This would give project managers and less technical folks real-world scenarios they could present to upper management, which could potentially influence funding to fix any issues that were discovered. Simply citing technical violations may not have the same impact on those outside of the technical team. On the other hand, if this is an update to an already-established, well-tested product, use case testing may not be necessary; alternatively, targeted use case testing (e.g., testing only new or overhauled features rather than taking a more end-to-end approach) may suffice.

Primary use: Is your site or application used by the public? If so, your exposure to accessibility risk could be high. If your product is used only by a small subset of the population, use case testing may not need to be performed externally; you could do some in-house testing if you employ persons with disabilities, and/or your technical QA team could do its own manual testing using assistive technologies.

Are you a potential legal target? In recent years, litigation has been brought against companies in the retail and financial sectors, as well as state and local governments. If you are a player in any of these industries, use case testing could provide additional support for your case, should you be the subject of legal action. Performing use case testing on the integral functions of your site can reduce the risk that users will run into issues on your product's core paths. It does not reduce the need for the other types of testing discussed in this post, however; Level Access still recommends that automated and manual testing be prioritized and performed, as the presence of automatically detectable violations can also pose risk to your organization.