- QA Engineer Tools
- Types of Tests
- Running QA Tests and Reporting Issues
- Naming/Labeling Issues
- A Word on Re-Testing (Fixed) Issues
- Accessibility Audits
- Prep for the Project Manager
The primary intended audience of this article is any QA Engineer at Cantilever. However, Project Managers may also find it useful, since it provides insight into what QA Engineers do for (and expect from) the rest of their team when performing a QA test.
QA Engineer Tools
The basic set of tools for every QA Engineer at Cantilever includes:
- Refer to the documentation specific to your project, found in the
- As a QA engineer, part of your responsibility is to ensure that this list is accurate and up-to-date.
- Note: The Site Documentation functions as your QA checklist and is your best friend as a tester. It tells you exactly what to look for and what to test.
- Sometimes, this may be comps on Figma, images, or even comparing different environments (e.g. Staging versus Production)
- Cantilever primarily uses Asana. In rare cases, we may use GitLab or other repositories.
As described in
- Deadline for when the initial pass needs to be done
- Estimate of hours budgeted to complete the initial QA pass
- Information on what kind of test to run (which browsers, whether it's a regression test, etc.)
- Information about what you are testing and where you are testing it (which environment, etc.)
Types of Tests
You will always run one of two types of browser tests in a QA pass:
Functional Test: Only a single browser needs to be used. This is generally up to the preference of the tester.
Full Browser Test: Test on all browsers listed under the Browser Support list in the project's Dev Notes (found in the ). If there is any uncertainty about which browsers to test, simply ask the DRI!
Optional Testing Requested:
Regression testing may be requested with either of these test types. It asks the QA Engineer to go back and make sure that nothing pre-existing was broken by the newly released item.
Running QA Tests and Reporting Issues
- Use the Project Documentation to guide testing in the appropriate browsers
- Shortcode for Markup Template used in Git repositories (GitLab, etc.):
- Shortcode for Template used within any repository that does not need/support markup:
Important: This template is meant to be a resource, not a rule. Fill out the sections that are pertinent to a report and delete the ones that are not needed.
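For illustration only, a minimal markdown report built from this kind of template might look like the sketch below. The section names here are assumptions for the example, not the official Cantilever template:

```markdown
## Summary
One or two sentences describing the bug or action item.

## Environment
- URL / page:
- Browser & version:
- Device / viewport:

## Steps to Reproduce
1. Go to ...
2. Click ...

## Expected Result

## Actual Result

## Screenshots / Recordings
```

In a repository that does not support markup, the same sections can simply be written as plain text.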
Naming/Labeling Issues
Names should strike a balance between being overly generic and overly specific. It is important that issues have names that are quick to read (without being too generic) and that convey the specifics of the issue (without being too wordy). A good rule of thumb is as follows:
- Use the Project Documentation for noting the general area that is affected by the bug. As much as possible, refer to the Project Documentation for cues on what this area is called. List this area in brackets.
- In as few words as possible, list the specific item that has a bug or action that needs to happen.
- If the bug is browser-, device-, or layout-specific, add a dash and then list the specific thing.
- Universal bug (or action item): [Area] Issue/Bug/Action
- Type-specific bug
- [Area - Browser(s)] Issue
- [Area - Device] Issue
- [Area - Layout] Issue
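As a purely hypothetical illustration (the areas and issues below are made up, not taken from any real project), names following this pattern might look like:
- [Homepage] Hero image fails to load
- [Checkout - Safari] Pay button does nothing on click
- [Blog - Mobile] Post titles overlap the publish date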
- From Project Documentation & Issues Reported:
A Word on Re-Testing (Fixed) Issues
It is always your judgment call whether to re-test bug fixes as a functional test or as a full browser test. In most cases, a functional test is fine.
Most bugs are global and affect all browsers, so a fix verified in one browser implies that it's fixed in all. Issues that are aesthetic rather than functional, however, should generally be re-tested in a handful of browsers.
- If a link is broken, that only needs to be re-tested in one browser.
- If an icon is the wrong size, that’s worth looking at in a few browsers.
Browser-specific bugs should obviously be re-tested in the original browser.
Accessibility Audits
Accessibility audits are different from, but related to, QA tests. The purpose of an audit is to find areas where a site or feature is not in compliance with WCAG standards (WCAG AA is typically considered the threshold for "ADA Compliant"). The QA Engineer may make recommendations to improve or fix these issues.
There are two different kinds of audits:
The purpose of a design audit is to find and address areas of non-compliance before they even make it to the development stage.
The purpose of a development audit is to find and address areas of non-compliance in how the site or feature functions.
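As a hedged illustration only (automated tooling is not part of the documented audit process, and this is just one possible approach), a development audit can be supplemented with an automated checker such as axe-core. The sketch below assumes the axe-core script is already loaded on the page being audited:

```typescript
// Assumes axe-core (https://github.com/dequelabs/axe-core) is already loaded on the
// page being audited, e.g. via a <script> tag, exposing the global `axe` object.
declare const axe: any;

// Run only the WCAG A and AA rule sets, which map to the "ADA Compliant" threshold above.
axe.run(document, { runOnly: { type: 'tag', values: ['wcag2a', 'wcag2aa'] } })
  .then((results: any) => {
    console.log(`${results.violations.length} potential WCAG A/AA violations found`);
    for (const violation of results.violations) {
      // Each violation reports a rule id, a severity, and a short help description.
      console.log(violation.id, violation.impact, violation.help);
    }
  });
```

Automated checks like this only catch a subset of WCAG issues, so they supplement rather than replace a manual audit.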
Prep for the Project Manager
- A walk-through on how to create an assignment for a QA Engineer, including a list of all materials needed
- A walk-through of what to expect during the QA phase of a project
- Information on which QA Engineer works on a particular project, and what to do during emergencies when the QA Engineer assigned to your team is unable to test a project when needed