Pipelines and Smoketests

Where there’s smoke, there’s fire. We smoketest a new build for quick reassurance that the new fixes basically worked and did not introduce new side-effect errors. A failed smoketest tells us the code still has a fire to put out: the fix either didn’t put out the original fire, or it started a new one. But where SQA efforts sometimes fall short is the inverse scenario: where there’s no smoke, there can still be a fire.
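Smoketests like this can stay tiny. Here is a minimal sketch in pytest, using a hypothetical `billing` module and `create_invoice` call purely for illustration: it only confirms that the build imports at all and that one representative happy path still runs.

```python
# Minimal smoke-test sketch (pytest). The "billing" module and its API are
# hypothetical stand-ins for whatever your build actually ships.
import pytest

billing = pytest.importorskip("billing")  # skip the whole file if the module isn't there


def test_build_imports_and_reports_version():
    # If this fails, the build itself is broken; no point running deeper suites.
    assert billing.__version__


def test_happy_path_invoice():
    # One representative happy-path check, not an exhaustive regression pass.
    invoice = billing.create_invoice(customer_id=42, amount=100.00)
    assert invoice.total == 100.00
    assert invoice.status == "open"
```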

They fall short when there’s too much management pressure to move the pipeline along: on to other units, other modules, and whatever else stands between the team and that magical moment known as “Launch.” So you retest the known errors the fix addressed, smoketest basic functionality for obvious side-effect bugs, and move on. This is the CYF (cross-your-fingers) test approach: you hope there are no deeper or hidden side-effect bugs spinning off from the fix.

How deep you take regression testing will always be “your call.” It varies from organization to organization, even from day to day within one project, and pressure on the SDLC pipeline will affect that call every time it comes around. This blog entry is just a friendly reminder: a code glitch can cost ten or twenty times more to fix after release than when it’s found early. It is a wise practice (dare I say “best practice”) to invest enough time and resources in regression testing to dig methodically deeper into the “untouched” areas surrounding a fix. How much is enough? That’s your call.
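One way to keep “how much is enough” a dial rather than an all-or-nothing decision is to tier the suite. A sketch with pytest markers, where the marker names and placeholder tests are illustrative only:

```python
# Tiering tests with pytest markers so regression depth can be dialed per build.
# Marker names ("smoke", "regression") are an assumption, not a standard.
import pytest


@pytest.mark.smoke
def test_fixed_area_still_works():
    # Quick re-check of the area the fix addressed (placeholder assertion).
    assert True


@pytest.mark.regression
def test_neighbors_of_the_fix():
    # Deeper, slower checks on the "untouched" code surrounding the fix,
    # where hidden side-effect bugs tend to live (placeholder assertion).
    assert True
```

Register the markers in pytest.ini, then pick a tier per build: `pytest -m smoke` when the pipeline is under pressure, `pytest -m "smoke or regression"` when you can afford to dig deeper.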
