Tuesday, May 7, 2013

Lower Your Quality Debt Ceiling


Recently Chris Sterling from Rally delivered a great talk "Managing Quality Debt in Practice" to the Seattle Software Test Group Meetup hosted by Dynacron Group in Kirkland. Though he made many points that resonated with me, one that I felt the strongest affinity with was the "No Defect" Mindset which basically means that you don't kick the "quality" can down the road and expect to fix it in the future.

All of us at one time (at least) in our careers have experienced the phenomenon I call "creative defect management". You know, where you are reviewing the defects with the Business Owner, Project Manager and the developers, and yesterday's Severity 1 defect magically becomes a Severity 3 because "... we are three days to the release date and we cannot fix this in time, we will patch it after the release..."

For every $1 of competitive advantage gained by cutting quality, it costs $4 to restore it; and software is an organizational asset and decisions to cut quality must be made by executive management and reflected in the financial statements. http://www.infoq.com/presentations/agile_quality_canary_coalmine
Ken Schwaber

How many of us have worked on applications that have tens or even hundreds of unresolved known issues logged over several previous releases? What I thought was a radical idea a few years back, abandoning bug tracking tools, has become a major theme for me. In my view, a better approach is to write a "test" story when you find a bug, review it during backlog grooming, and if it doesn't reach the level of being assigned to the next sprint, archive it because it is likely never to be fixed. Perhaps you let it hang in the backlog for a couple of sprints, but if it is not a high enough priority to be assigned in a short period of time, then it likely never will be.

Do You Fund Your DoD?


Establishing and funding your "Definition of Done" is critical to lowering your Quality Debt. On a recent client engagement we used the following DoD:
  • Accepted by Product Owner 
    • We do not work on stories that are not validated by the client
  • The code has been peer reviewed
  • Tests pass on all CI targets
    • OS
    • Browser
    • Locales & Languages
  • PMD, Checkstyle, & FindBugs successfully executed and issues resolved
  • Project Management tool (e.g., Rally) has been updated
  • Both positive & negative tests pass 
  • Every commit has a relevant and meaningful comment
    • Don't overload checkins 
  • All broken build issues are fixed, no new feature checkins until build is GREEN
  • All new classes, methods, and attributes have at least one usage
    • No "unnecessary / zombie code"
  • No "known" defects that have not been "resolved" either with remediation or a story in the backlog
This is not trivial; it requires effort and tenacity. But by establishing, evolving, and adhering to a DoD, you will lower your Quality Debt in short order.
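To make the "positive & negative tests" item concrete, here is a minimal sketch in plain Java (no test framework; the isSupportedLocale helper and its locale set are invented for illustration, not taken from any real project). The positive case verifies that valid input is accepted; the negative cases verify that invalid input is rejected rather than silently passing through.

```java
import java.util.Set;

public class LocaleCheckTest {
    // The "unit under test": accepts only locales the build targets.
    static final Set<String> SUPPORTED = Set.of("en_US", "fr_FR", "ja_JP");

    static boolean isSupportedLocale(String locale) {
        return locale != null && SUPPORTED.contains(locale);
    }

    public static void main(String[] args) {
        // Positive test: a supported locale is accepted.
        if (!isSupportedLocale("en_US"))
            throw new AssertionError("positive case failed");
        // Negative tests: unsupported and null inputs are rejected.
        if (isSupportedLocale("xx_XX"))
            throw new AssertionError("negative case failed");
        if (isSupportedLocale(null))
            throw new AssertionError("null case failed");
        System.out.println("all checks passed");
    }
}
```

The point of the negative cases is that they are cheap to write at the same time as the feature, and they are exactly the checks that get skipped when a team is "three days to the release date."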

Wednesday, January 2, 2013

A Rose By Any Other Name



Recently I was involved in a LinkedIn discussion on the future of the SDET role. One participant stated:
Test automation (like spec) has made it so easy to write tests, that testing is really strong now. 90% of the "real" projects on GitHub have large test suites. Maybe this is just the startup scene in Seattle, but anyone using OSS is likely going to have a robust testing suite... And no dedicated testers or SDET.
 
While I applaud developers who actively practice TDD or who retroactively develop robust unit tests, I do not believe this portends the end of the SDET. Quite the contrary: I believe the more application development teams embrace these new tools and methodologies, the more prominent the SDET’s role becomes.

Apparently I’m not alone: in the book “How Google Tests Software,” they do not use the title SDET; instead they have the Software Engineer in Test (SET).
At Google, we have created roles in which some engineers are responsible for making other engineers more productive and more quality-minded. These engineers often identify themselves as testers, but their actual mission is one of productivity.  Whittaker, James A.; Arbon, Jason; Carollo, Jeff (2012-03-21). How Google Tests Software (Kindle Locations 551-552). Pearson Education (USA). Kindle Edition.

Again from “How Google Tests Software” the role of the SET:
SETs are partners in the SWE codebase, but are more concerned with increasing quality and test coverage than adding new features or increasing performance. SETs write code that allows SWEs to test their features.  Whittaker, James A.; Arbon, Jason; Carollo, Jeff (2012-03-21). How Google Tests Software (Kindle Locations 566-567). Pearson Education (USA). Kindle Edition.

Having been an (American) football player in my youth, I had always thought of QA & Test as the defensive team, preventing our opponents (bugs) from scoring (making it to production). My analogy has morphed as I have become a strong proponent of Agile and BDD/TDD. I believe SDETs are the offensive line on the project team. The SDET’s charter is to “enable” the application team to achieve its shared objectives of quality and velocity.

True SDETs are well positioned because they can directly impact IT’s ROI. As IT departments re-invest in training, infrastructure, tools and processes, they would do well to take the offensive and invest in strong, talented SDETs.