Business Geeks: Automated Software Testing as Competitive Advantage

This blog’s audience can be simplistically divided into two types of people: 

1.  technology geeks (folks with a technology background, and more specifically a software development background) who have an interest in the business issues because they’ve founded, or are thinking of kicking off, a startup.

2.  business geeks (folks with a business/sales/strategy background) who have an interest in technology because they’ve founded a software startup.  For more on my thoughts on business geeks, read “Business Geek:  Not An Oxymoron.”

A number of my articles address one group or the other (like my “Presentation Tips for the Technically Gifted”).

This one looks at the value of automated software testing from the perspective of the business side.  The reason for the focus is that most programmers I know and respect already understand the upside of automated testing and know way more about it than I do.  If this is you, feel free to stop reading.  I won’t be offended.

Business Thoughts On Automated Software Testing
 
Automated software testing is a large and relatively complex area that takes a while to understand.  But, let’s work with a simple definition:  It is the process of using computers (instead of humans) to run repeated tests to determine whether the software does what it is supposed to do.  It is important to note that most automated software testing still involves humans in the beginning (to design and develop the tests), but it’s the repeatability that makes it so powerful.  Once the tests are developed, the payback is continuous because the costs of running the tests are near zero.
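
To make the definition concrete, here is a minimal sketch of what one automated test looks like, written with Python’s built-in unittest module.  The function under test and its numbers are purely hypothetical (they are not from Pyramid’s codebase); the point is simply that once the expected behavior is written down this way, a machine can re-check it every night at near-zero cost.

```python
import unittest

# Hypothetical function under test -- the name and behavior are
# illustrative only, not taken from any real product.
def calculate_vested_balance(contributions, vesting_pct):
    """Return the portion of contributions an employee has vested."""
    if not 0 <= vesting_pct <= 1:
        raise ValueError("vesting_pct must be between 0 and 1")
    return round(contributions * vesting_pct, 2)

class TestVestedBalance(unittest.TestCase):
    def test_fully_vested(self):
        self.assertEqual(calculate_vested_balance(1000.00, 1.0), 1000.00)

    def test_partially_vested(self):
        self.assertEqual(calculate_vested_balance(1000.00, 0.25), 250.00)

    def test_invalid_percentage_rejected(self):
        with self.assertRaises(ValueError):
            calculate_vested_balance(1000.00, 1.5)

if __name__ == "__main__":
    unittest.main()
```

Writing the three test cases takes a human a few minutes; running them (along with thousands more like them) costs essentially nothing once they exist.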

To better illustrate my points, I’ll use Pyramid Digital Solutions (the first software company I started).  Pyramid ran successfully for 10+ years and was recently sold.  I like to use it as an example because I actually lived a lot of these lessons, and I find it helpful to have a real-world example to talk about.
  1. Build Better Software:  This one is obvious, but it is at the core of the value, so it needs to be said.  By building a library of automated tests, you are generally going to ship better software that, at a minimum, works when used in certain predictable, preconceived ways (the use cases that have been accounted for in the tests).  This is a good thing.

  2. Test Continuously:  As noted, once you have tests automated, there is very little cost to running them.  As such, once you’ve made the investment in building automated test scripts, there is no good reason not to run them frequently (and lots of good reasons to do so).  In my prior startup, we eventually got to over 20,000 test scripts that took several hours to run.  We ran them every night.  Each night a process would fire off that would retrieve the latest source code the programmers had checked in, build our product (automated builds) and then run our test scripts.  Every morning, the results of the test scripts got emailed to management and the development team.  (See the sketch after this list for what such a nightly job might look like.)

 
  3. Cheaper To Fix Bugs:  Most software has bugs.  From the business perspective, the questions are:  which bugs do you know about, when do you “find” them, and how much does it cost to fix them?  As it turns out, when you find them and how much it costs to fix them are highly correlated.  Let’s take an example from my prior (real-world) company:  say a programmer inadvertently makes a code change and checks it in.  The code has a bug.  In the old way we used to operate, it could often be days, weeks or months before that bug got caught (depending on what part of the product the code was in, and whether it was caught internally or made it out into the “wild” to be found by customers).  The more time that elapsed from when the code actually changed to when the bug was actually found, the more expensive the bug became to find and fix.  We’re talking about a major (orders of magnitude) increase in costs.  Now, in the new world (where we had automated tests running every night), this bug might be caught by the automated test scripts.  If so, the very next morning we would know there was a problem and we could go fix it.  The reason it was so much cheaper to find and fix the bug was that the “surface area” of change was so small.  A limited number of things had changed in the prior 24 hours (since the last test run), so the bug could more easily be discovered.  I cannot emphasize enough how much money you can save by catching bugs within hours (instead of days) of their being introduced.

  4. Freedom To Change:  As software systems get bigger, it becomes harder and harder to make changes without breaking things.  Development teams refactor the ugly bits of code as time allows, but even then, a sufficiently large codebase that has been around for a while will almost always have “corners” that nobody wants to touch (but that are important).  The business risk is that customers may start asking for things, or the market may shift in some way that creates the need for change (this should not come as a surprise).  If the programmers are fearful of changing core parts of the system because they might break something, you’ve got a problem.  If you’ve got a large battery of automated test scripts, it frees the programmers to do lots of cool things.  They can refactor the code (the automated testing is a “safety net”), they can add or change features, etc., with a lot less loss of sleep.  What you will find, by investing in automated testing, is that your organization actually moves faster than it did before.  You can respond to market change quicker, you roll out features quicker, and you have a stronger company.

  5. Clients Are Happier:  At Pyramid, we had quarterly meetings with our clients (and an annual conference where a bunch of clients got together).  At each of these, one of the key metrics we shared was how large our automated test suite was.  This gave clients some comfort.  That comfort translated into a higher likelihood that they would install newer versions of the software (when they became available).  Since we were in the high-end, enterprise software space, this was a big deal.  If we could get 20% more of our customers to move to Version 5 of our software (instead of staying stuck on Version 4), we had an advantage:  lower support costs and higher retention rates.
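
Here is a rough sketch of the kind of nightly job described in point 2.  Everything in it is hypothetical – the paths, the repository command, the build command, and the email addresses are placeholders rather than Pyramid’s actual setup – but the shape of the job (pull the latest source, build, run the tests, email the results) is the important part.

```python
"""Nightly build-and-test job -- a hypothetical sketch.

The repository location, build command, test command, and email
addresses below are placeholders; substitute your own.
"""
import smtplib
import subprocess
from email.message import EmailMessage

REPO_DIR = "/builds/product"            # assumed checkout location
RECIPIENTS = ["dev-team@example.com"]   # assumed distribution list

def run(label, cmd):
    """Run a command in the repo and return (exit code, labeled output)."""
    result = subprocess.run(cmd, cwd=REPO_DIR, capture_output=True, text=True)
    output = f"== {label} (exit {result.returncode}) ==\n{result.stdout}{result.stderr}"
    return result.returncode, output

def main():
    steps = [
        ("Update source", ["git", "pull"]),              # or svn/cvs update
        ("Build product", ["make", "all"]),              # whatever your build is
        ("Run test suite", ["python", "-m", "pytest"]),  # the automated tests
    ]
    report, status = [], "SUCCESS"
    for label, cmd in steps:
        code, output = run(label, cmd)
        report.append(output)
        if code != 0:
            status = "FAILURE"
            break  # no point running tests against a broken build

    # Mail the full log to the team so failures are visible first thing
    # in the morning.
    msg = EmailMessage()
    msg["Subject"] = f"Nightly build and test run: {status}"
    msg["From"] = "build-bot@example.com"
    msg["To"] = ", ".join(RECIPIENTS)
    msg.set_content("\n\n".join(report))
    with smtplib.SMTP("localhost") as smtp:   # assumes a local mail relay
        smtp.send_message(msg)

if __name__ == "__main__":
    main()
```

The same job can be triggered more often than nightly; as one of the comments below points out, running a fast subset of the tests on every check-in shrinks the “surface area” of change even further.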


I like to think of technology strategy in terms of technology debt (take shortcuts now, but payback comes later – with interest).  Read “Short-Cuts Are Not Free” if you’re curious about this.  Like financial debt, technology debt is often necessary, but it has a cost.  The reverse of this is technology investment (in the classic sense).  This too has an interest rate – the rate you get “paid” (i.e. ROI) on that investment.  I think investment in automated testing offers one of the best interest rates you can find.  The payback period is a little longer, but it is worth it.  If you have competition (which you likely will), you will find that a strong investment in automated testing gives you an advantage.  You’ll add features quicker, fix bugs cheaper and ship better software.

Of course, as is always the case, situations vary.  Pyramid was in a different market than my current startup, HubSpot – but I’m still passionate about automated testing.  I’ll continue to share experiences as I progress.

Posted by admin_onstartups.com on Wed, Sep 13, 2006

COMMENTS

I totally agree with you on the merits of automated testing. In the late 1990s I was on a project that used Vector Software's Computer Aided Software Test (CAST) tools and they were very helpful. A good CAST tool is hard to come by, though; I have seen some for Java that are expensive and do such a poor job of testing (only passing in the number "7" for an integer parameter, for example, instead of working the parameter through its ranges). So, caveat emptor when buying a CAST tool to help you build an automated testing framework. Some will help a lot and some are a joke, and you can't tell which is which by the price. You have to invest the time to actually test the tools on your real code.

Good article!

posted on Wednesday, September 13, 2006 at 11:57 AM by Mike


Are you referring to NUnit testing or something else in this article?

posted on Wednesday, September 13, 2006 at 1:15 PM by Stacy


Referring to both unit testing (with things like JUnit, NUnit and others) and "external" testing -- which is much harder. Basically, anything that improves the chances of finding bugs in an automated fashion.

posted on Wednesday, September 13, 2006 at 1:30 PM by


Excellent article. Thank you for writing it.

Can you ballpark what the cost of developing the auto testing systems was on a percentage basis of overall development cost?

I would imagine it was in the low single digits?

Thanks again!

posted on Wednesday, September 13, 2006 at 3:18 PM by Cate


I believe automated testing (particularly unit testing [1]) does deliver a competitive advantage.

The caveat with unit testing is that your product architecture must facilitate the economical development of unit tests (i.e. MVC [2] and perhaps SOA [3] as well?), and that you have the discipline to write tests!

It can be difficult to see the value of writing unit tests early in development, particularly when your product has few features and functional/user acceptance testing [4] can be done in minutes.

However, unit testing pays for itself tenfold as your product matures (and subsequently gets more complex).

Scott
http://www.invoiceplace.com

[1] http://en.wikipedia.org/wiki/Unit_testing
[2] http://en.wikipedia.org/wiki/Model-view-controller
[3] http://en.wikipedia.org/wiki/Service-Oriented_Architecture
[4] http://en.wikipedia.org/wiki/User_acceptance_testing

posted on Wednesday, September 13, 2006 at 9:37 PM by Scott Carpenter


1. Support for the built-in quality that repetitive automated testing provides is refreshing to see in the context of a startup – some things need to be done right even when in a lean, release-early crunch. My own experience has yielded great benefits from automated unit tests. GUI testing has been much less successful.

2. I agree automated user interface functional testing is hard! Here lies a Web 2.0 warning: So far web apps have been simpler to test because the GUIs were simpler and typically followed simpler request/response dialogs. The sophisticated browser-based interfaces now possible present much more complexity in terms of test combinations.

3. Regarding "technology debt (take short cuts now, but payback comes later – with interest)". This is a great way to think about it -- sacrificing automated unit testing then would be like taking a loan from the loan shark in the worst neighborhood!

/andrew

posted on Thursday, September 14, 2006 at 6:40 PM by Andrew Lavers


I posted a follow-up here:

You Can Pay Me Now, Or...
http://www.brianberliner.com/2006/09/14/you-can-pay-me-now-or/

posted on Thursday, September 14, 2006 at 9:17 PM by Brian Berliner


Your unsubscribe is broken, asshole. Can't you get this fixed?

posted on Saturday, September 16, 2006 at 1:19 PM by a b


b: Send an email to unsubscribe [at] onstartups.com and I'll make sure it gets taken care of.

Not sure if you're talking about subscribing to the comment notifications or the email list (am assuming comments, given your tone).

posted on Saturday, September 16, 2006 at 1:26 PM by


I couldn't agree more. In fact, my company builds a tool to assist in the automated scheduling, running and reporting of builds with tests. The only thing I would say is that you can (and should where possible) take point 2 even further: run the tests as frequently as possible. You should aim to get at least a fast subset of the tests to run in minutes rather than hours. Then run these tests on every new check-in to the source repository. The advantage is as stated in point 3, but the feedback is even faster. You can track the exact change that caused the problem, and the developers find out ASAP. This also allows them to fix the problem before it affects other developers on their team. Those tests that can't be run quickly can be run on a separate, slower cycle (perhaps overnight as you did at Pyramid).

(Warning: shameless plug ahead) I wrote an article on the topic a little while back:

The Road to Build Enlightenment:
http://zutubi.com/products/pulse/articles/buildenlightenment/

Interested readers should definitely also refer to Martin Fowler's article:

Continuous Integration:
http://www.martinfowler.com/articles/continuousIntegration.html

posted on Tuesday, September 26, 2006 at 6:14 PM by Jason Sankey


The article is good, but as some have pointed out, it's very obvious. Even then, it's a nice read. Even with all the benefits of test automation, I have seen projects struggling with automation whenever a GUI is involved. Handling the GUI from a vendor-supplied tool, when the GUI is changing every now and then, is very difficult. Automation is certainly inevitable for APIs, web services, or any product with a good command line interface. For GUIs, I guess the 80/20 rule should be followed. For API testing, this page gives good information, but it looks like the site is still under construction.

posted on Monday, October 16, 2006 at 8:06 AM by Anand


First up - I represent a vendor - but not one of the ones that instantly spring to mind when one is dismissing GUI testing tools.
Why? Because our GUI testing tools actually work!
Move the controls around the screen - no problem - we'll map them and then update the test script to match the new version of the application.

Complex scripting language? No way .. far too labour intensive and one of the reasons why other products are shelfware.

What about creating automated tests while carrying out manual testing? Yep - only we do it.

Please contact me off-line at jamie.coles@origsoft.com. I hope I haven't offended or broken any of the rules on this forum, as I did state up front I was from a vendor.

Jamie

posted on Wednesday, December 13, 2006 at 10:59 AM by Jamie Coles


Hi,
Very good article, thank you! I'm trying to prove to our management that tests should move to the automation side, so your article will be a weapon, too. :)
Regarding (so much mentioned here in the comments) GUI testing, there is a very good tool, both easy to learn and powerful, and we use it a lot: Macro Scheduler from http://mjtnet.com. It can check a wide range of things, GUI included. It cannot go as deep inside the components as WinRunner would, for example, but it covers pretty much everything. No less important - it's light, doesn't record the whole PC environment as WinRunner does when you record a new test, and it can also be compiled to a light exe which can then be run on any other PC, at dev, QA, or the customer's - Macro Scheduler doesn't have to be installed there (again, compared to WinRunner, a huge advantage). Besides its own scripting possibilities, it includes VBScript and runs with your DLLs - in short, it can be used very widely.
And it's not a shameless plug, because we're just their very happy customers. I've been working with this tool in several companies now, including my own, and I love it.
Olga.

posted on Tuesday, February 12, 2008 at 6:59 AM by Olga


Comments have been closed for this article.