Thank you to everyone who participated in the “Use the Windshield, Not the Mirror: Predictive Metrics that Drive Successful Product Releases” webinar. The recording is now available if you weren’t able to attend or if you would like to watch it again.
Sharon Niemi, Practice Director of SQA, talks about how the right combination of predictive and reactive metrics can help you build a measurement portfolio that improves product quality and release consistency. You’ll learn how to build a measurement system that incorporates both leading and lagging indicators, so your team can more consistently deliver quality products on time and within budget. Near the end, Jeff Amfahr, Director of Product Management at Seapine Software, demonstrates how Seapine’s TestTrack solution for product development processes makes capturing and reporting on these metrics possible.
All risk control measures are not created equal. If you’re looking for expert insight into the most effective risk control measures for your medical device, the following video is a great place to start.
Dr. David Vogel of Intertech Engineering runs down his classification of software risk control measures and provides specific examples of when each type of risk control is best used.
Inherently safe design—global design requirements or constraints that render the potential hazard or harm all but impossible.
Preventative controls—requirements and constraints that help to prevent a hazardous or harmful situation from materializing.
Corrective actions—“detect and correct” risk control measures that take corrective action when a hazardous situation is detected.
Mitigations—measures that reduce the severity of harm resulting from a hazard, but don’t necessarily eliminate it.
Soft controls—labeling, training, and operator instructions.
I’m currently taking a MOOC through Coursera called A Beginner’s Guide to Irrational Behavior, taught by renowned economist Dan Ariely at Duke University. The course falls within the field of behavioral economics, and it dovetails nicely with some of the concepts I’ve been developing about bias in testing. It builds on the biases I discuss in my previous posts, Kahneman and Thinking About Testing, and Why First Impressions Count.
One of the most amazing biases described by both Ariely and Kahneman is the anchoring bias. In an experiment, Kahneman asks subjects to spin a “wheel of fortune” that is designed to stop on one of two different numbers. He then asks these subjects how many countries there are on the African continent. The number they spun on the wheel of fortune greatly influences their resulting guess. It turns out that a random value we’ve just seen can anchor the numeric judgments we make afterward.
How happy would your customers be if you could eliminate 70 percent of all product defects? How happy would your boss be if you could double the number of projects that are delivered successfully? And how much more could your team get done if you eliminated unproductive rework?
(SPOILER: The answers are “very happy,” “very happy,” and “they might actually get to see the world outside of the office again.”)
Seapine Software, a leading provider of quality-centric product development solutions, is partnering with Software Quality Associates (SQA) to show how your team can dramatically improve productivity by adding predictive metrics to your measurement portfolio.
In organizations where rushing software out the door is the standard operating procedure, test managers must develop inventive ways to recruit and retain staff, find time to perform the essentials of testing, and ensure that important defects are addressed.
There was a time when the answer to this question was very different from what I hope it would be today. In the 1970s, systems management theory told project managers that they shouldn’t depend on the unique skills and dedication of individual people, because those people could leave the project. Instead, they should define work processes that depend on a skill class rather than a unique skill, so they could hire anyone within that class rather than look for particular or specialized qualities in individual hires. Management often didn’t understand how to value or manage people’s skills, so people who didn’t fit into the required category were discounted, and even avoided, in working toward team goals.
With all of the hubbub around Yahoo’s announcement to “ban” telecommuting, I thought it might be a good time to highlight some recent customer feedback we’ve gotten on this issue. A new initiative I’ve been leading is engaging with our customers to talk about their corporate strategies and challenges, which we’re feeding into the Seapine roadmap to make sure we’re better aligned with where our customers are heading. Of course, we’re also looking at new and better places to take our customers, places they might not have considered or even known they would benefit from. P.S. If you want to talk with me and the rest of the corporate strategy team, we’d love to chat for 20 minutes; email me!
At what point in the process of developing a new device does your team start formally managing requirements and risk?
In talking with medical device companies, the most common answer we hear is “we don’t start worrying about that until the product is actually under design control.” That kind of time frame works great for the engineering side of the business, but it often leaves marketing and product management in the lurch. The issue is that product management and marketing do the bulk of the front-end work of understanding the market need, defining a concept, and building a business case for the new product, including:
Capturing voice of the customer (VOC) feedback
Determining market/user needs and researching competitive offerings
Thank you to everyone who participated in the “Leveraging Traceability in Your Risk Management Strategy” webinar. The recording is now available if you weren’t able to attend or if you would like to watch it again.
In short, Bolton says that a defect is a demonstrated bug: a direct mismatch with the requirements that can be objectively described and reproduced. An issue, on the other hand, doesn’t rise to that level; it represents an inconsistency, question, or thought that the tester believes is worth writing down and investigating. He gives a number of examples of issues, explaining why they aren’t defects but are still something a tester should follow up on.
All that makes a great deal of sense, and I think it’s a worthwhile distinction. Where I part company with this description is that Bolton says that, while an issue is noted in writing on the session sheet and investigated further, it isn’t entered into the defect tracking system. Only defects, as defined above, go into the automated tracking system.
I understand why he declines to enter issues into defect tracking; it can mean real tracking effort for something that might be resolved with a single question.