Positive and negative testing are like yin and yang: Sometimes the coding and systems behind a customer experience work the way they should—and sometimes they don’t. By running positive and negative tests, companies can prepare for both eventualities, ensuring high-quality offerings.
In positive testing, developers check whether the system behind a web, mobile, or app experience properly handles valid data. A positive test fails if the system produces an error even though users have correctly entered all necessary data or taken all the right steps. In negative testing, developers analyze how a system reacts when users enter invalid data or take invalid actions. A negative test fails when the system doesn't respond as developers intended in this (inevitable) situation.
Popular Use Case: Logging In
One example I love using to explain positive and negative testing is log-in functionality.
If a visitor inputs her username and password correctly and gains access to the membership features of a site or app, the system passes a positive test. If she can't log in even though she entered her credentials correctly, the system fails that positive test. If she enters her username or password incorrectly, negative testing comes into play: the system passes the negative test if it loads the content designed to appear when this specific user error occurs.
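The log-in scenario above can be sketched as a pair of tests. This is a minimal illustration, not a real authentication system: the `login` function and the `VALID_USERS` credential store are hypothetical stand-ins.

```python
# Hypothetical credential store and login function for illustration only.
VALID_USERS = {"ada": "s3cret"}

def login(username, password):
    """Return a result describing success or a specific, helpful error."""
    stored = VALID_USERS.get(username)
    if stored is None:
        return {"ok": False, "error": "Unknown username."}
    if password != stored:
        return {"ok": False, "error": "Incorrect password."}
    return {"ok": True, "error": None}

# Positive test: valid credentials must grant access.
assert login("ada", "s3cret")["ok"] is True

# Negative test: invalid credentials must fail *gracefully*, with the
# intended error content -- not a crash and not a silent success.
result = login("ada", "wrong-password")
assert result["ok"] is False
assert result["error"] == "Incorrect password."
```

Note that the negative test doesn't just check that access is denied; it checks that the system responds exactly the way it was designed to respond to that specific mistake.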
Again, negative testing is about accounting for possible user errors. Companies often obsess over finding development faults on their own side, but users are human, and humans make mistakes! Users click the wrong links, hit the wrong keys, leave form fields unfilled, and use symbols that aren't allowed. Companies have to build systems that do two things: let users know they've entered invalid data or taken an invalid action, and tell them why that data or action is invalid. Red marks and short but detailed error notifications work well.
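Here's what that two-part requirement (flag the invalid input, and explain why) might look like for a sign-up form. The `validate_signup` function and its rules are hypothetical examples, not a prescribed implementation.

```python
import re

def validate_signup(form):
    """Check each field and return per-field messages explaining
    what is invalid and why -- not just that something failed."""
    errors = {}
    email = form.get("email", "")
    if not email:
        errors["email"] = "Email is required."
    elif not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors["email"] = "Email must look like name@example.com."
    if re.search(r"[<>]", form.get("username", "")):
        errors["username"] = "Usernames can't contain < or >."
    return errors

# An unfilled field and a disallowed symbol each get their own reason:
errs = validate_signup({"email": "", "username": "a<b"})
assert errs["email"] == "Email is required."
assert errs["username"] == "Usernames can't contain < or >."

# Valid input produces no errors at all:
assert validate_signup({"email": "ada@example.com", "username": "ada"}) == {}
```

Each message names the field and the specific rule that was broken, which is exactly the kind of response a negative test should verify.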
Popular Use Case: Tracking KPIs
Negative testing isn’t just for log-ins and form fills! It also applies to KPI analysis.
Pretend you’re on the marketing team for an online travel agency that has launched a campaign to drive users to buy trip add-ons. One of your main metrics is completed purchases that include at least one add-on, e.g., better seating or luggage insurance. If a user makes a purchase with an add-on and this data is successfully collected, your company’s system passes a positive test. If the user does not purchase a trip add-on but the system records that she did, your company has failed a positive test. Running a positive test in this situation ensures the customer experience works well when users do everything they’re supposed to do.
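A positive test for that KPI might look like the sketch below. The `record_purchase` function and the in-memory `analytics` list are simplified, hypothetical stand-ins for a real analytics pipeline.

```python
def record_purchase(analytics, purchase):
    """Log a completed purchase, flagging whether it included any add-ons."""
    analytics.append({
        "order_id": purchase["order_id"],
        "has_addon": len(purchase.get("addons", [])) > 0,
    })

analytics = []
record_purchase(analytics, {"order_id": 1, "addons": ["luggage insurance"]})
record_purchase(analytics, {"order_id": 2, "addons": []})

# Positive test: a purchase with an add-on is recorded as such...
assert analytics[0]["has_addon"] is True
# ...and a purchase *without* add-ons is never counted as one.
assert analytics[1]["has_addon"] is False
```

The second assertion guards against exactly the failure described above: the system claiming an add-on was purchased when it wasn't.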
But what if users take invalid actions? (And again, this is more a matter of ‘when’ than ‘if.’) A negative test helps make sure a user’s experience isn’t ruined when she does. Say she attempts to buy an add-on that doesn’t apply to her trip, such as a seat upgrade when she’s already flying first class. If a poor-quality customer experience ensues (the page crashes mid-checkout, or her entire order is erased), your company fails the negative test. But if the site loads a page telling her she’s attempted an invalid action, and even better, explaining why it was invalid and suggesting other valid trip add-ons, it passes.
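The seat-upgrade scenario can be expressed as a negative test. Everything here is a hypothetical sketch: the `ADDON_RULES` table, the `add_addon` function, and the trip structure are invented for illustration.

```python
# Hypothetical eligibility rules: each add-on maps to a check on the trip.
ADDON_RULES = {
    "seat upgrade": lambda trip: trip["cabin"] != "first",
}

def add_addon(trip, addon):
    """Reject inapplicable add-ons with an explanation instead of crashing
    or corrupting the order."""
    rule = ADDON_RULES.get(addon)
    if rule is None or not rule(trip):
        return {"ok": False,
                "message": f"'{addon}' isn't available for this trip."}
    trip.setdefault("addons", []).append(addon)
    return {"ok": True, "message": None}

# Negative test: a first-class passenger requesting a seat upgrade
# should get a clear explanation, and her order must stay intact.
trip = {"cabin": "first", "addons": []}
result = add_addon(trip, "seat upgrade")
assert result["ok"] is False
assert "isn't available" in result["message"]
assert trip["addons"] == []  # nothing erased, nothing crashed
```

The final assertion is the heart of the negative test: the invalid action is refused politely, and the rest of the user's order survives untouched.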
It’s important to use both positive and negative testing to make sure users get high-quality customer experiences. With positive tests checking valid data processes and negative tests covering invalid data, the two in tandem cover all your bases when it comes to web, mobile, and app development.