Tuesday, May 26, 2009

When buying a tool...

...or pretty much anything to bring into your group, there are some basic things to consider.  Most of this can be found in many of the automation books out there, or in the QA Forums if you search for it, but I wanted to put down my own personal checklist.
  1. Consider who is going to use the tool - if it's just you, and only ever you, then whatever fits your skillset is fine.  If other people are going to use it, not everyone will have the same experience; some people like using detail-driven tools, others do not.  Make sure it fits the audience.
  2. Don't go for whizz-bang cool stuff - there is always a hot tool out there but it may not be a fit.  Picking something just because it looks cool, or is going to be resume fodder, is not the way to make a wise decision.  Go for a tool that works, not a tool that sounds neat.
  3. Make sure it fits the project - not every tool is simple and easy to use.  In the midst of a project, bringing in a tool that requires a lot of focus and start-up time, such as training and installation, is not wise.
  4. Plan for future needs - focusing on the here and now is fine to get you through the immediate needs, but at some point all you end up with is a bunch of tools that solve a defined set of problems.  Think ahead: what other platforms will be supported, what technologies are coming in the next project - all of this should be taken into consideration.
  5. Do a demo - take a small subset of what will work and run a prototype, or a demo of the tool.  Open Source projects can be tried at any time without issue, but if it's a commercial product, see if the vendor will let you demo it against your own environment; many will give you 30 days or so to check things out.  If you build your own, do small prototypes that you can expand on, or that gain you the knowledge you need to do full builds.
  6. Check more than one tool - don't decide on the first one that works; do some due diligence.  This is the hard part, because often when you find one that looks like the best thing ever and will do everything you want, it's tempting to just run with it.  Do your research and check every item on your list.  Sometimes you will come up with more than one candidate, and then it's a matter of what will fit the team, environment and needs.  You can often end up with a better choice this way.
  7. Talk to people - while we all have our own ideas of what the needs are, others who may need to use the tool will also have requirements.  Sometimes there is something at the back of their minds that is hard to use or do, or something that would make work easier; get all these items into a requirements list and document them.
A lot of these steps feed into one critical piece: the rationale behind why the tool is needed and why it will work.  You never want to be stopped by someone who asks, "Well, why do we need this new thing if the old one works?"  You should have the information at your fingertips to say, "Oh, that's because we can do A and not have to worry about B and C..."

If you can't talk about why the change will help, then you haven't done your work.
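The checklist above can be boiled down to a simple weighted scoring matrix once you have your requirements list.  Here is a toy sketch of one in Python; the criteria, weights, tool names and scores are all made up for illustration, not a recommendation of any particular weighting.

```python
# Weighted scoring matrix for comparing candidate tools.
# Criteria and weights come from the team's requirements list.

CRITERIA = {                      # weight: how much the team cares (1-5)
    "fits team skillset": 5,
    "runs on our platforms": 5,
    "cost / licensing": 4,
    "setup and training time": 3,
    "future project needs": 3,
}

# Each candidate gets a 0-5 score per criterion from the demo/prototype.
candidates = {
    "Tool A": {"fits team skillset": 4, "runs on our platforms": 5,
               "cost / licensing": 3, "setup and training time": 2,
               "future project needs": 4},
    "Tool B": {"fits team skillset": 3, "runs on our platforms": 4,
               "cost / licensing": 5, "setup and training time": 4,
               "future project needs": 3},
}

def weighted_score(scores):
    """Sum of (weight * score) over all criteria."""
    return sum(CRITERIA[c] * scores[c] for c in CRITERIA)

# Rank candidates best-first.
for name, scores in sorted(candidates.items(),
                           key=lambda kv: weighted_score(kv[1]),
                           reverse=True):
    print(f"{name}: {weighted_score(scores)}")
```

The point isn't the arithmetic - it's that writing the criteria down forces the "talk to people" and "plan for future needs" steps to happen, and the finished matrix is exactly the rationale you need when someone asks why the old tool isn't good enough.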

Tuesday, May 12, 2009

Rethinking the Test Harness

Recently I went through the exercise of updating the test harness I was using, and I was glad to see that I did it very similarly to Elfriede Dustin's recent book.  (I got the book after I was done and read it as I was going through my coding, but the important part that set me up for everything was the planning.)  I already knew the test harness we had was inadequate for the tasks we were adding to it.  This was one of those organic harnesses that grew over time in the company; a lot of people used it and added what they needed to it.  Starting from seeds (scripts), it blossomed over time into a very complex organism with lots of scripts, branches (directories of new tests) that were linked into others, and eventually its own library of functions shared among the various scripts.  At one point I helped add some object orientation to the scripts, which at that point were 90% Perl.  This gave us more reuse - and there was a lot of reuse, that is, calling of scripts that were previously written.

Then there was an adjustment in the projects, and the test harness needed some adjustment too.  There were some options that were needed, such as more remote access to scripts being run, especially if they locked up a machine's resources.  Logging was also something I had wanted for a long time, not just for the scripts that generated output but for any script run; I was looking to have a database maintain results so we could have a more permanent record, and review the times when the scripts would on occasion fail without any real understanding of why.  So I slimmed down the directories, removed scripts that were no longer called, and decided it was no longer a good idea to check in compiled binaries - I never liked that anyway.  Of course, doing this required keeping the current harness working while redoing everything we had, and since we had a project lull, I stepped back and took a look at what we had.
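To give a feel for the "database of results" idea, here is a minimal sketch using sqlite3 from the Python standard library.  The table layout, column names and the `record_run` helper are my own invention for illustration, not the actual schema of the harness described here.

```python
import sqlite3
import time

def open_results_db(path=":memory:"):
    """Open (or create) the results database with one run-history table."""
    conn = sqlite3.connect(path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS test_runs (
            id        INTEGER PRIMARY KEY AUTOINCREMENT,
            script    TEXT NOT NULL,   -- which script ran
            host      TEXT,            -- machine it ran on
            started   REAL,            -- epoch seconds
            duration  REAL,            -- seconds
            status    TEXT,            -- 'pass', 'fail', 'hang', ...
            log       TEXT             -- captured output for later review
        )""")
    return conn

def record_run(conn, script, host, started, duration, status, log=""):
    """Store one script run permanently, whatever its outcome."""
    conn.execute(
        "INSERT INTO test_runs (script, host, started, duration, status, log) "
        "VALUES (?, ?, ?, ?, ?, ?)",
        (script, host, started, duration, status, log))
    conn.commit()

# Usage: log a run, then pull back the failures for review.
conn = open_results_db()
record_run(conn, "smoke_test.pl", "build-box-1", time.time(), 12.5, "fail",
           "timed out waiting for server")
failures = conn.execute(
    "SELECT script, status, log FROM test_runs WHERE status = 'fail'"
).fetchall()
```

Even something this small buys the "permanent record" benefit: those intermittent, unexplained failures stop vanishing with the console output and can be queried by script, host, or date later.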

What we had was a large organism that had grown through acquisition of tests by anyone who wanted to contribute - some tests added just to meet a specific condition a customer encountered long ago, which was now covered by something else.  There were also scripts that were no longer called and basically lived in the directories of people who had not worked at the company for years.  So I stepped back further and asked: if I was going to do this over again, what would I do?

So I started looking at open source options.  I like toolsmithing and the exploration process, so this was a good fit for my needs; it also fit the budget, and there was already plenty of information on the internet from other people who had tried the tools.  Rather than redo everything myself, I was going to use a tool to help us organize everything: with open source we had an option that met the company's tool budget at the time ($0), and there was support in forums and on websites.  Next I had to look at what we needed to run it on.  At the time we were supporting Windows and Unix (Linux, AIX, HPUX and Solaris), with multiple versions of each platform, so any tool needed to run on all of those - a tall order.  (Eventually, in a reorganization, the platforms got pulled down to Windows and Linux, and finally just Windows.)  It is always a good idea to know what platforms you are supporting, and since it was not going to be just Windows forever, I kept all the platforms under consideration.  Once that was done, I looked at the pros and cons of each tool I came across and made myself a matrix for each one.  Then I wrote out what I wanted the test harness to do.  I could have written the harness document first, but I wanted to see first what other tools were capable of, so I was not creating a hard-to-reach design; knowing I wanted logging, a database, a UI for the database, and reuse of existing scripts was enough to do my research.

At the end I had a document specifying what I wanted the harness to do and a matrix of possible tools.  Through meetings with other people in the group, and in Development, I got together everyone who might possibly use the tool and solicited opinions.  In particular I asked the Architect of the product for his thoughts; having a high-level thinker add even more to the document helped me generate a good tool.  I did need to explain what I had designed, but it was good practice to be able to talk out what the harness was and should do.  In the end I think these steps helped me create a very solid foundation to build the harness on, and before the company folded I had the structure in place: a database, a UI to the database, and some agents and clients running to do tests, check results, and store them.  Very nice.  Sadly I won't see the design come to fruition, but I think the experience was time well served - when I have to do this again, I will know all the steps I need to take.