Wednesday, December 3, 2008

What is it You are Testing?

Months ago the Dev and QA teams had a discussion about where our project stood and made some plans going forward.  Strangely, even though we talk regularly and documentation is handed over for review, a comment came up in the discussion: "I don't know what QA is testing."

To me and my Manager it was a WTH (what the hell??) moment.  We had never heard this before; it never came up when we discussed bugs and defect fixes with the Developers, nor when we walked through reproducing bugs.  From where I sat, I was testing and they were seeing results, but maybe someone just wasn't seeing things the way I was.  We got past it, went into a project and did a lot of work over the past few months, adding some tests, discussing a few and verifying many fixes.  Yet the document that came out of those earlier discussions still had that comment in it, so now that we are between versions I am going back to it and starting to think out what someone was missing that we never picked up on.

One of the projects for the next release is to update our test harness, adding some wrappers so the scripts are more integrated, plus a database to store results.  Yes, I know, we still record some results in Excel, and aside from that there is not much historical tracking; well, that and the output documents from my tests, which I save for each release and archive away in case we ever need to go through old data.  I am also taking time to go to the Developers and show them what we are doing, get some input, and put the question to them: "What should we be testing?"
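As a rough idea of the wrapper half of that project, here is a minimal sketch, assuming Perl with DBI and DBD::SQLite for the results database; the results table and its columns are made up for illustration, since our real schema is still being designed:

#!/usr/bin/perl
# Run one test script, capture its output, and record the result
# in a database instead of Excel.
use strict;
use warnings;
use DBI;

my ($test) = @ARGV or die "usage: $0 test_script\n";

my $output = `perl $test 2>&1`;           # run the test, keep its output
my $status = ($? == 0) ? 'PASS' : 'FAIL';

my $dbh = DBI->connect('dbi:SQLite:dbname=results.db', '', '',
                       { RaiseError => 1 });
$dbh->do(q{CREATE TABLE IF NOT EXISTS results
           (test TEXT, status TEXT, output TEXT, run_at TEXT)});
$dbh->do(q{INSERT INTO results VALUES (?, ?, ?, datetime('now'))},
         undef, $test, $status, $output);
$dbh->disconnect;

print "$test: $status\n";

Even something this small beats Excel for historical tracking, because every run lands in one queryable place.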

Getting people involved is a good way to eliminate confusion, since the follow-up question is going to be "What would you like to test with this?" along with an offer of our Test Harness to those who really do not have one.  Some parts of our product don't have many Unit Tests, which is a shame; or they have them and they are kept secret.  Ferreting that out is a future task.  I'm not going to give them a fish, I intend to give them a rod, a boat, an ocean of fish and a pilot, and let them see what they can catch.

Monday, October 20, 2008

STOMPing around with the kids

I started a program today with a 5th Grade Elementary class that uses Lego Mindstorms kits to introduce kids to Engineering concepts; this is through the STOMP Network.  We had a 4-hour training class over the summer and received some kits to build robots with.  I took one home, but my 3-year-old was not as patient with the kit as I had hoped; yeah, I know, not realistic.  Still, it was fun to watch him see the robot go and follow it around as it did a course through the house.

The first day with this group was basically to build a durable structure, in this case a chair that can be dropped with a stuffed animal in it.  I had talked about the class with someone else involved, who said it was good to come in with a good and a bad example.  I made a good chair that holds up when dropped from knee height, and a chair that would break when it hit the floor; as expected, most kids pointed to the one that would break because they were familiar with those Legos.  Still, it didn't derail their design process when the kids were told to make their own chairs in teams of twos and threes.  Funny how many different designs there were, and how many of them wanted to make wheelchairs; whether it was because there were wheels in the kits or because they wanted to see something move as well, I have no idea.

This continues for the next 8-10 weeks and looks like it will be fun.  Next week the kids should finish their chairs, then we move on to something else, eventually building a robotic car that they can program.  I have to get used to the programming interface again; thankfully I have time for that.

Friday, October 10, 2008

Who needs the Customer? We do.

I was recently discussing an upcoming project with the VP of our group.  Part of the discussion went to process (as I am the process wonk in my group), and inevitably it drifted to Scrum and Agile methods, as they are something I have been talking about over the past year.  We have a good methodology that works well for the development and testing we need to do; it's not Agile, and while it potentially could be, that would take a lot of restructuring that upper management is not yet convinced we need, since we still deliver very stable software.  The next project had a PRD coming out, and as the teams were reviewing the document I asked: who was going to play the part of the Customer Advocate?  Who were our domain experts for the product?

The answer - QA.

I was a little dumbfounded.  While I will admit we know the product and have a good idea of what a Customer is going to do with our software, were we really in tune with the day-to-day usage of the product?  Were we too close to see some things?  Was Development equally capable, since they do some of the design?  Even our lab manager questions some of that design, and he is as representative of the Customer as we get, but the response is that the Customer wants it that way.  That makes no sense to some of us, especially the lab manager, and the answer we sometimes get is that if improvements or changes like the ones we have mentioned were really needed, they would come through support.  To date few have.  Still I kept coming back to the real question - is QA really the Customer?  Do we really have a good grasp of the Customer's needs and wants, and do we know how they will use the product day to day?

I was not so sure.

Individually the QA Team has an understanding of how the product works and what it should do; one or two of us have a wide grasp of the product, while the rest know our own areas really well and have pockets of knowledge in others.  It's a complex product, but would a Customer really use all its features, or just those that help them resolve their problems?  What were the problems we were trying to solve?  Those problems were stated and we knew them, and we knew the areas of the product that solved them; the new PRD was more on topic, not only stating the problems but giving more detailed User Scenarios than we've had before.  Did that close the knowledge gap?  Not really.  We have a good grasp of the high-level issues and stated problems, we know the solutions for them and how to test them - but something still nagged at me, and it was historical.

Last year we had some discussions on how to improve things; we split into teams that discussed how to improve our process and product, and one item showed up on multiple lists: more meetings and discussions with Customers.  We often don't know who our Customers are, and in the two years I have been asking I have yet to get anything that shows what platforms our Customers run our software on; the only time I know is when I get an issue escalated from a Customer.  Since "more meetings with Customers" kept showing up, were we really meeting the Customer's stated needs, or were we meeting the needs as filtered through the Product Managers who talked to the Customers?  It's sort of like playing Operator: Need A gets said to Product Manager A, who then tells Need A to Product Manager B, who is also hearing Need B from Product Manager C, and it all gets filtered up into a document that states Need C, an amalgamation of Need A and Need B.  But does that make it a real example of the Customer Need, or is there still a disconnect?  I'm of the opinion that regardless of how much a Product Manager discusses needs with the Customer, there is a gap in experience, knowledge or use that is not quite bridged.

When the same question came up later with a different group of people and there were discussions about the PRD and what we need, some said it was a good document, while I and others said it's nice but we need Customers to look at it.  Regardless of how much we know about the product and its use, we often can't bridge the How.  This is why people use the mantra of "eat your own dog food": if you cannot get the product to work in your environment for its intended use, then no one else will.  Customer Advocates are good, and that is what I am trying to bring in with Agile because I think it will improve our product; we can have Customer Advocates at all levels and test or develop with that in mind, but sometimes you need someone outside all of that who will use the product and give you honest feedback.

In other words, a Customer.

Tuesday, October 7, 2008

Those Three Little Words (or is it four?)

It's something I used to dread.  After all the hype and the information on this that is out there, I feared that one day I would come into work and hear the words "we're going Agile".  If you wish to dispute my counting, feel free; it's my blog and I count "we're" as one word, so get over it.

While I expected this to come out at some point, my fear was recently fired up again when I got a notice about a company reorganization (I think we have one planned yearly), and in it were the words "we're going to be using Agile processes".  Yup, we're going AGILE!  There was no date on it, or really any reason why; I have my suspicions, but I will keep them to myself.  Last summer I gave a short presentation to my group (I do one a month and let others take a turn when they want or have something good) in which I went over my experiences with Agile and Scrum.  Agile had been done badly, Scrum had been done well, though after a recent test conference I can see how it could be improved upon.

We discussed the upcoming change at our group meeting soon after the notice came out, and generated two important questions we thought would be good to answer before proceeding:
  1. What is the problem we are trying to solve?
  2. How will Agile processes solve the problem?
Without knowing the answers to those two, going Agile means little.  You need a plan and you need a purpose; there is no way that layering a new process over what exists will immediately solve every known and unknown problem.  To me there are two main reasons that companies consider going Agile:
  1. They have no process, and saying they are going Agile lets them claim they are just in a phase of "adaptation"
  2. The process that is in place has so many exceptions that it's not followed; this is the second path to "adaptation"
I am not saying everyone is in that state, but some companies are, and the ones that want to make things look good treat Agile as a blanket they can pull over everything to hide what should probably have more light shined upon it.  Take care when your company goes Agile; more than likely it's not for the right reasons, and it won't be done in a way that takes advantage of Agile's strengths either.

Friday, September 19, 2008

Ahoy Matey!

Since Talk Like A Pirate Day (TLAPD) seems to be recognized by a lot of tech companies I figured I would add some of my own thoughts on the matter.

Pirates have that certain nautical attitude that basically says "don't mess with me".  Or it may even be that certain aptitude for making things happen, or just doing what they want, damn the consequences - Piratitude, if you go by the original creators of TLAPD.  That attitude can help in testing as well; as others have noted, specifically Phil Kirkham, there is such a thing as Pirate Testing: take what you want and steal the rest.  That is a common trait I have seen in testers, and I think it's a good one.

First the attitude.  When your job revolves around testing and its basic premise is to find faults in others' work (mind you, this is not my total thinking on the matter, just a realistic view of part of it), then you need to have a sense of humor about it.  Who has a better sense of humor about life than pirates?  When your time is spent wenching, drinking and fighting, you had better be able to find enjoyment in anything, because when your time comes, that's it; if you haven't enjoyed the ride on the way there, you are missing out.  Being able to come into work with a smile on your face and a skip in your walk every day, even when facing down a Developer over a specific bug, a devil-may-care attitude is sometimes your best friend.  Giving difficult news is tough enough, but when you can come in with an "Arrrrr Matey" and a cutlass at your side, it's amazing how the levity and the sharp blade can turn things around to your advantage.  Pirates have it all, and they never let you forget it.

Testers are like pirates in another way as well; as the Beastie Boys once said, "What do you call a Pirate's Treasure?  BOOTY!!"  What's Test Booty?  Well, it's not that hot chick/guy who just got hired out of college.  No.  It's the treasure and self-satisfaction you can get from a job well done, because after all, treasure is just a pirate's way of saying "we took that ship unawares with its cannon locked up and its sails limp".  Test Booty can be anything from writing a script that catches bugs to that program you need to write which, once you get it working and see it doing its job, lets you sit back and say "my job just got 10% easier".  You can shoot for more, but let's keep those goals realistic; we are pirates after all, men in the real world, not philosophers looking toward some Eternal Truth.  Unless you can find it at the bottom of a cup of grog.

Yes, Testers are like Pirates, because we go our own way, earn our booty and still at the end of the day can enjoy a nice cup of grog.

Wednesday, September 3, 2008

Windows IPv6 Socket Clients

Just got through building my very first Windows IPv6 socket client and server; I am using them to validate that the application I am testing can detect IPv6 sockets across machines.  Lots of fun stuff.

In trying to get the code to build on Windows 2003 Server using Visual Studio, I had to add the following include directives to get things fixed.  Since it took me some searching to get this done, let me go over some of it.  My code is as follows (and yes, it's probably not good form, but like I said, this is my first Windows app; I've taken one C++ course, read a couple of books and looked at some example code...I'm a total newbie at this).

First the ifdef: I needed to define WIN32 in the project properties under C/C++/Preprocessor Definitions

#ifndef WIN32
   /* non-Windows builds would pull in the Unix socket headers here */
#else

This pragma was to get rid of a warning about a size_t-to-int conversion, where I was holding the result of the bind operation in an int so I could tell what the status was later on.
#pragma warning(disable:4267)
There were linker errors due to ws2_32.lib and iphlpapi.lib not being found on the path.  I needed to point the project at the Visual Studio Platform SDK libraries, and I even installed the latest Microsoft SDK on my local box, then in the project options under Linker/General/Additional Library Directories added the paths to the two SDKs.  I'm still not sure whether I need both of them, but they are there just in case.
#pragma comment(lib, "IPHlpApi")
#pragma comment(lib,"ws2_32")


Because I am trying to use IPv6 operations in these apps, I need the updated Winsock headers, not the older ones that only support IPv4.
#include <winsock2.h>
#include <ws2tcpip.h>   // sockaddr_in6, getaddrinfo and friends live here
End the if directive.
#endif
A couple of other items I had to add:
  • When trying to build this for IPv6 I kept getting warnings about sprintf being insecure and deprecated; these went away after adding _CRT_SECURE_NO_WARNINGS to the preprocessor definitions.
  • I needed the pragma comment lines because the linker was having problems with the socket operations and would give me LNK2019: unresolved external symbol errors.
That's it for now...I'm sure I'll be getting back to this.  I still need to do work on the code, and I definitely need more practice.

Wednesday, August 27, 2008

Let the crowd in...

I was reading James Whittaker's posts on Crowdsourcing at http://blogs.msdn.com/james_whittaker/archive/2008/08/20/the-future-of-software-testing-part-1.aspx, where he sees the next big leap as one where everyone who wants to can be part of the Testing Solution.  While I'm not quite sure I agree with it, and some of the quotes reflect my own opinion, I can see where it can be of benefit as an additional avenue for giving people time and a chance to test software and submit their issues to be fixed.  This is a good thing; most people, as he says, do find problems not always found in a test cycle, either because of constraints or because of some quirk of the User Environment that is not mimicked in the lab, and getting those issues in early is a good thing.  However, I am a realist, and while it's nice to get many of these bugs in, I personally don't see where the value is going to be if those bugs are not resolved but end up in the bug bucket waiting to be looked at.  But I am getting ahead of myself; there were a couple of points in this I thought were interesting.

The Cloud.  I'm still unclear on what this is, or whether it's just an anticipation of what Cloud Computing is going to be, but I'm not expecting people to share configurations and environments across the Net.  In a QA lab you can have all kinds of virtual environments, but they are not personalized; they tend to be representative of what Customers have, either by being proactive and knowing what Customers use, or by being reactive and adding in software or configurations that became known trouble spots.  I'll share a specific browser and plug-in configuration with someone, but I'll be damned if I am going to sit there and share someone's Hello Kitty theme with kitty cat icons made up of the heads of someone's little babies.  I'm sure that works fine for some people, including some of my co-workers (not the Hello Kitty theme, but the cat picture backgrounds), but it's not something I'd expect to see in my lab, nor would I expect it to have issues with commercial software.

Bug Reporting.  As I said, just because Jim from South Carolina found a particular GUI issue with the tool bar, and it was confirmed by Vijay from Pune as well as Klaus from Dresden, doesn't mean that bug is going to be fixed.  I'm all for finding as many bugs as possible before I release, but honestly, do I need 100+ minor defects or 250 Enhancement Requests sitting in my bug queue because the Crowd found them, when the serious issues they had were already reported and entered?  I don't see what this gains me, other than more defects for triage that may or may not ever get fixed; though from a business standpoint I can say that we found them with the Crowd!  Serious defects should hopefully be caught prior to the release to the Crowd, or maybe specific configurations can be found, and I am all for getting as many of those as possible, but when I look at the numbers of Critical and high-priority defects being added in, there are very few found later on.  While I'd like those fixed, is there any guarantee they will be?  I'm still waiting to see the results on that.

Isn't this really Beta?  I can see this point, and I partially agree with it.  If you are working on an open source project you expect updates that may or may not be very well tested; with a Beta you are taking the software knowing it may crash (and badly) in your environment, because that is what a Beta is.  So what is this?  Beta?  Well, I'd say not really, since it seems like the releases are still very early in the cycle, unless the company has a long timeframe between a release build and eventually getting it out.  So where does the testing stop?  Does the release mean everything found is put on hold and we now wait for the next release, especially since Customers now have a copy we can be getting reports from?  I'm waiting to see more on this, but I am trying to keep an open mind.

Talent Pool.  Who are the people who are joining?  Are these people new to the field who want more experience?  Are they bored?  What is their experience?  As someone who has trained people off and on, I can tell you that a bug report from someone who is just learning is very different from one by someone who has been doing this for a long time.  I'm not sure who is signing up here; the money doesn't seem like much, and maybe I am not their talent pool, but when I get out of work the last thing I want to do is test some other kind of software - heck, I don't even like to practice programming much outside of work.  I'm not the target audience for the community, but I'm curious as to who is.

Unlike Mr. Whittaker, I don't know if this is the next logical step, but I like to take a long view and a higher view of testing as a whole; some things get tried, some change, and some just end when it's seen they don't lead anywhere.  I don't know where uTest and Crowdsourcing are going to go, but I am curious to watch and see.

Friday, August 22, 2008

IO Stress, Part 2!

After the Development team came back from the latest Microsoft PlugFest there was a new version of the IOStress tool, which has a better structure than the last one, without all the useless command scripts in the root directory.  It seems to run better and picked up the configuration I had set for the previous one, as I was able to just put the directory down and run it with fewer errors than I had seen in the last version.  Running with the Verifier and our driver still causes a crash, but that is expected given the way the Assertions are used on Windows 2008; supposedly Windows 2003 was more forgiving about this, but 2008 doesn't like it.  I've crashed the 2008 box 3 times since installing the new version, go me!

I need to get a listing of the tests and what they do, but for now I have a nice basic structure that runs on our driver and doesn't crash the box - always a good thing.  The one thing I don't like is the reboots the IOStress program does when setting up certain tests and when completing them; since I run this in a remote desktop window while doing other things, the test session goes away when I don't notice that the remote desktop is gone.  Another part of the reboot problem is that the test results window disappears quickly after the reboot.  I may have to look at getting the mail piece working so I can get the results in email, where they may be formatted better.

Undocumented and unofficial tools are so much fun to work with....right.

Friday, August 15, 2008

Setting up IO Server Stress Tests for Windows

In my current position I test a filter driver: basically a driver placed into the stack on multiple platforms that gathers events on the system and passes them up to a Java agent, which sends them to a central server for review.  Some events are filtered out and some are not, by default; this is done to keep the messaging down to a level where the driver is not stressed, but sometimes it's good to know how the driver operates under stress.  There are a few scripts we use to load the system with events of all kinds, but Microsoft provides a suite of tests in something it calls IOStress, which is given out at Plugfest every year.  I have spent some time setting it up for testing in my environment, and this is what I have had to do so far to get it to work:
  • The IO Stress program needs a net share set up for ntiosrv that points to the iostress folder and its supporting files
  • Copy the iostress folder from \Software\Microsoft\iostress to a local drive (say D:\) and rename the io.stress60 (plug-iosrv) folder to iostress
  • Create the share for ntiosrv
net share ntiosrv=d:\iostress /GRANT:Administrator,FULL /REMARK:"IOStress Share"
  • This will set up a share to the D:\iostress directory; change to that location and run: iostress.cmd /ignoredebugger
  • The I/O Stress program's main window may be minimized; select it
  • In the Registration and Verifier screen enter the driver to be tested
    • Select Low Resources Simulation
    • Select I/O Verification Level 1
  • In the Registration and Verifier screen, if run with No Debug, there will be a note regarding No Debugger Selected; this is due to the Email and Contact information being empty.  Enter values, any will do, into the three fields.
  • Select the drives to be tested; all available drives on the machine will appear in the Run Information tab.  The Test Time can also be entered on this page, in the Specify Number of hours to run field.
  • The Stress Tests tab allows selection of specific tests; a slight overview of the tests, included with the zip file that contains them, gives some information on the basic ones.
  • If the system reboots you must log back in to the machine; the IO Stress kit likes the Administrator account to have a blank password, but that is insecure
Still, even with all this I have encountered a few problems when running the tests:
  • There is a complaint about a registry value not being found; I have yet to find documentation on what that value is, but the suite seems to run anyway
  • Do not run with the Verifier; this seems to cause crashes, possibly because the IOStress program runs with the driver in the Verifier and having it in twice causes conflicts
  • When setting up the environment it claims some network drives are not found.  I am not sure if it's the ntiosrv share or not; I can never seem to find it in the net view list, but without the share available the entire test suite will not even start (one quick check for the share is shown below).
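The quick check for that last item is to ask the local machine directly what it is exporting, instead of relying on net view:

net share
net share ntiosrv

The first command lists every share the local machine is offering; the second shows the path and permissions for ntiosrv alone.  At least then I know whether to blame the share or the test kit.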
I don't mind Windows tests, and undocumented Microsoft tools are such a joy, but this one does some real checking and has allowed me to verify some issues and find new ones, especially on Windows 2008, which is a new platform for my group and one we don't have a complete set of tools for yet.  If I get these issues resolved, I'll note the fixes.

Tuesday, August 5, 2008

Command Windows Where You Want Them

For most Windows platforms I have used a registry hack to add a Command Window option to the right-click context menu within Windows Explorer.  This is on the Microsoft site, as well as available through the PowerToys add-on.

I just discovered that Windows 2008 has this built in.  To get a Command Window in any directory, right-click the folder name while holding down the Shift key; this adds an option to Open a Command Window within that directory.  This should work for Vista and Windows 2008, so if you need DOS windows like I do, this is how to get them in W2K8.
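For the curious, the old hack amounts to a .reg file along these lines.  This is a sketch from memory - cmdhere is a key name I picked myself, and the official Microsoft/PowerToys version differs in its details - so treat it as illustrative:

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\Directory\shell\cmdhere]
@="Command Window Here"

[HKEY_CLASSES_ROOT\Directory\shell\cmdhere\command]
@="cmd.exe /k cd /d \"%1\""

Save it with a .reg extension and double-click to merge it; %1 is the folder that was right-clicked.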

Wednesday, July 16, 2008

Perl Packagers

A few months ago I started work on a Perl version of a protocol test app.  The original was written in C, and while we have the source code, my network C was never very good; I can handle Perl and protocols, so I rewrote it.  Basically there are two scripts: one that sets up a dummyServer (listener) and another that is a dummyClient (sender).  The sender connects to the listener, sends a few lines of data and disconnects.  Short and simple, because all we are doing is checking that the connections are detected, and where they come from, since that is what the product does.  This worked fine for a while, but now we are detecting connections over IPv6, and that necessitated something new; since I couldn't work in C (yet) I did this in Perl, and it was a good experience.  I got to learn a lot more about networks and protocols, and about how DNS works on our lab network, which I had no idea was limited.
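To give a sense of how small these scripts are, here is a minimal sketch of the client side, assuming the IO::Socket::INET6 module (not core Perl) for the IPv6 support; the default host, port and messages here are made up:

#!/usr/bin/perl
# dummyClient sketch: connect, send a few lines, disconnect
use strict;
use warnings;
use IO::Socket::INET6;

my $host = shift || '::1';    # IPv6 loopback by default
my $port = shift || 5000;

my $sock = IO::Socket::INET6->new(
    PeerAddr => $host,
    PeerPort => $port,
    Proto    => 'tcp',
) or die "cannot connect to [$host]:$port - $!";

print $sock "hello from dummyClient\n";
print $sock "goodbye\n";
close $sock;

The dummyServer is the mirror image: a listening socket, an accept loop, and a read of whatever shows up, since all the product cares about is that the connection happened and where it came from.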

Now that the scripts are done, I discover our product likes to ignore a lot of Perl scripts - good for testing, but not good in this case - so rather than try to hack the product to make it recognize these scripts, I am looking at turning the scripts into executables.  This has been more fun.  Since my initial work is done on Windows, where I can use SlickEdit, I tried building the various Perl packaging utilities on my Windows 2003 machine; what I encountered is noted here.

Perl2EXE, while nice, does not give me what I want, and sadly it also does not install properly on my machine, giving me all kinds of weird Mac::InternetConfiguration errors that seemingly a lot of people get, but somehow get past by installing older versions of certain Perl libraries or by ignoring them.  I can't seem to ignore them, and honestly, if I need to jump through these weird hoops to get this working on a Windows machine, I'd rather do it on Unix.

PAR is a good alternative, but I want these to be executables, not PAR files that I need to run individually.  While that was fine many versions ago, with the recent split that put PAR and the Perl Packager (pp) in different distributions, I have to install PAR plus the PAR::Packer distribution that gets me the Perl Packager I need.  On Windows I get loader errors with it, which many people have gotten but somehow got past a while ago; there are some sites that list instructions on how to install the Packager, but some are woefully out of date or have steps which did not work for me.

pp - the Perl Packager - is the one I am in the midst of using, but as I am now in the middle of adding it to one of my Unix machines (I have AIX, HP-UX, Linux and Solaris available), once I get it working I will know how things are going.
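For what it's worth, once pp is installed the invocation itself is the easy part; assuming the dummyClient script from earlier, it is a one-liner:

pp -o dummyClient.exe dummyClient.pl

pp bundles the script, its module dependencies and a Perl interpreter into a single standalone executable, which is exactly what I need to get around the product ignoring plain Perl scripts.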

Perl is always fun, and I am always getting into something new.  Never a dull moment.

Friday, June 13, 2008

What do I want to be?

After working in various positions and companies, I can pretty much say I have followed all the career pathways I ever considered at one time or another: starting as a single-person QA department and becoming a Manager of QA, building up groups over time, or working as an individual contributor setting up processes, expanding existing testing, or doing automation.  Not that these are all of the options, but in many posts on the SQA Forums, people ask about progressing toward Automation or Management, or staying where they are.  Well, here is what I have found.

As a Manager you will get away from the day-to-day testing, unless you are able to finagle some testing work into your schedule; I always tried to do so, for various reasons, and it's a good thing to do.  Most important is keeping your skills up.  Sure, management skills matter, but within QA it's good to also be technically proficient, because you deal with Developers at times, and it's good to have the technical backing to either shut down a snowjob or to back up what you are saying when you just know that something you are being told will work, won't.  It's also good professionally, because you may end up leaving your job at some point, and if, like me, you enjoy the environment of start-ups, you will need to begin again and test, so don't let the skills lapse!  I know someone who worked his way up to IT Manager, but then spent so much time managing that when he ended up looking for another job, he was so far behind the current environment technically that he felt he needed to take a lower-level job to catch up.  I say avoid that at all costs!

If you want to do Automation, then make sure you enjoy programming, know how to do it, and want to make it your life.  When every day is spent debugging a test program so that it will work correctly, because the interfaces you want to test have changed in subtle ways and are not well documented, you have to enjoy that sort of lifestyle.  Knowing how to build good frameworks that are well documented, easy for others to use and, most importantly, give clear and concise results is important.  When you are the toolsmith, your goal should not be just to make any tool, but to make one that is easy for others to use and apply in various ways, so you can move on to making the next and greatest tool; don't just assume you can make something and then let it go.  See that it's doing what you thought it was.  And in case you leave one day, don't leave behind a bunch of tools that not many people know how to use and can't figure out, because the code was not commented well because "it was only a test tool".

My preferred area is a nice middle ground, at least for now.  I have done managing and some automation work (which made me realize I don't really like getting too deep into coding; I'll never be a hacker, I know it, but I can get by), but what I like is being the person who gets a project and runs with it.  With my skills as a manager I can lead a project from the QA side, knowing how to work with people diplomatically, because I learned a lot from some good HR people in the past, and I know enough about programming to be dangerous and to take the time to work out the issues I find.  Some developers love that, especially when you are eager to learn, and I find that is the most important thing for me: I like to keep learning and working on new stuff, as well as learning more about what I am working on.  There is so much out there, it's almost inexhaustible, but it's all good.

Monday, May 19, 2008

Nightly Builds - Just Good Sense

Occasionally in discussions on Agile the topic of Nightly Builds comes up, and while I agree it's something that should be done during a sprint, and it's mentioned in a couple of newer books on the methodology, I just see it as something that needs to be done, period.  You don't need to be on a Scrum Team or in an Agile Sprint to see the value in a nightly build; if your code is building every night, it means people are doing their work and verifying that things work properly, nothing more special than that.  Methodologies are great to build a framework around, and often give high-level management some way to say they are doing something hot and on topic, or let them deploy yet another buzzword.  Outside of all that, nightly builds are good for everyone.

The gains are immense: not only knowing that on any day someone can check out the source tree and get the code to build in their environment, but also that once you have a build happening you can go on to the next step, automated testing.  Whether it's a suite of Unit Tests or an automated Smoke Test, unless the code is building you are not taking that next step; adding those steps gives you a good deal of confidence in the code being checked in.  Having tests run every night lets you know the code in the source tree is good, not only because it builds but because it passes a set of tests deemed necessary for the code to always pass.  Confidence assured.
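To put a rough shape on it, the nightly driver does not need to be fancy.  Here is a minimal sketch in Perl of the kind of script you could hang off cron; the checkout, build and smoke-test commands and paths are placeholders, not a real tree:

#!/usr/bin/perl
# nightly.pl - minimal nightly build driver sketch
use strict;
use warnings;

my @steps = (
    [ 'checkout' => 'svn update /build/src' ],
    [ 'build'    => 'make -C /build/src all' ],
    [ 'smoke'    => 'perl /build/tests/smoke_test.pl' ],
);

for my $step (@steps) {
    my ($name, $cmd) = @$step;
    print "nightly: running $name\n";
    system($cmd) == 0
        or die "nightly: FAILED at $name ($cmd), exit status $?\n";
}
print "nightly: build and smoke test passed\n";

Schedule it for the small hours, mail yourself the output, and the first person in each morning knows whether the tree is good.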

Of course, if you don't have the coding activity to back this up - say check-ins are only occurring every couple of days - then this won't work for you.  In larger, or mature, products you have code being checked in all the time, and that's where the gain is.  You need to determine what works best for your team.

So before worrying about whether or not you are following a specific methodology, get some basics in place first; Nightly Builds are among the most important.

Monday, May 5, 2008

Software is NOT Manufacturing

I've seen it a lot whenever discussions on methodologies come up, especially the ISO and CMMI ones: it usually comes down to the question of why software is different from making widgets.  Thinking about this off and on, I came to my own conclusion that there are some major differences here, and it all comes down to the fact that widgets ARE different from software.  Beyond the physical, there are major differences between producing a physical product that people use or adapt to their lives and producing software, even though software can have the same implications.  I came up with what I consider 3 major points on which they are very different - that makes it easy, because no one really has a "top 3 and a half reasons" list - and those are design, implementation and release.  Yes, the two are similar in places, but that's about where it all ends for me.  So let's take something amorphous called "software" or "the programme" and compare it to "Plastic Shovel Red No.2".

Mind you, this is all my opinion; if you feel differently, go ahead and comment, or stop reading now.

Design.  When you create software there are lots of meetings and specifications and documents that tell you how it is going to be and what it is going to be, unless it's a personal or open source project (or even Agile!) where this may change.  Input is gathered, perhaps some market research is done; after all, you want to know what it is you are building before you go ahead and do it.  Someone has to know, before even starting to think about it, that the particular software is going to be useful, or will be, in some niche that creates an entry point for its functionality - if it's not going to be useful, no one will want it, and probably no one will want to work on it either.  So that need is part of the design and helps shape what the software will be; there is an idea and a goal to reach, and the work on the design and any associated research is all about helping the software meet that need.  Now the Plastic Shovel is fairly simple: it has some uses (more if you consider the imagination of the typical 4-year-old) that are known ahead of time, and a shape that needs to be made.  It's considered from multiple angles and with multiple measurements (how long should it be?  how wide?  how thick?  when we make the handle, should it have an O shape at the top or another shape?), all at the beginning.  Once it is run through a couple of times, perhaps with a meeting or two just as there might be for the programme, a decision is made that this is the way we want it.  A prototype is made, for either one, and it is presented and demoed to someone or a group, and input is solicited.  Now this is the important part.  Once that input is given, software can go through more rounds of feedback and use, while the Plastic Shovel is decided upon, and then you know what....decision made, go make it!

This is what I call the Feedback Divergence.  For software you can continually elicit feedback, because it's complex and evolves, almost as if it's a living thing made of many living things, and what you end up with is often not what you started with.  A plastic shovel, on the other hand, once decided upon has its specs sent to the factory floor with orders to make X number of them; software basically has one thing made - the source code.  Sure, someone can come back and say that the Plastic Shovel was horrible and broke when too much sand was put on it, but that just means we start the process again and make a completely different version of it for the factory to press out later.  That feedback does not go back into the design and change the current one, because it's already been made!  Software, because it can be built at any point of the process, can take input at any time and be made different at that time.

Implementation.  Part of putting the design down on the factory floor for the Plastic Shovel is that a mold is made, and machines are set up to create thousands of cheap Plastic Shovel Red No.2s.  Once that is done the Shovel is committed (I know that's an Agile term, but it's also true here): the Shovel will now be made by the thousands.  It's now being implemented by the workers on the floor, whose only job at this point is to make thousands of nice, bright plastic shovels in Red No.2.  Not Red No.3 because little Suzie wants that color to go with her polka-dotted suit, or Green No.5 because Jimmy thinks it's cool; it's all Red No.2.  If you want another color you are buying another product, because once that shovel is stamped out on the factory floor it's done, implemented as designed, and it is not going to be personalized at the factory at all.  Not in the plan.  Software can do that, because it can be adjusted, because it's not a physical object.

This is what I call the Hand Divergence.  If I can hold it in my hand, there are a limited number of mechanisms by which the object can be adjusted, personalized and/or changed, with a significantly smaller number of those changes actually provided by the manufacturer.  Once it's done, it's done.  Software in some sense is never done, because you are making a new version off an old set of source that together comprises the whole, and because it's malleable you can change it infinitely.  You can't do that with a plastic shovel; at some point you will come to the end of the possibilities for changing it.  Software does not work that way.

Release.  When software is "done" in the project sense, it's given to the Customer, perhaps on a CD or as a download, and there is probably a release party somewhere with beer, food and hula girls.  Well, not everyone gets the hula girls; heck, some people don't even get the beer.  The point is, you can take your checklist, hold it up against your software, and say you have X% of the functionality done, which will be acceptable to the Customer at this time, with other improvements in the queue, or soon to be in the queue once they install the programme and use it.  But is it really done?  Not at all.  You've hit a milestone where you can say that the functionality originally designed for has been satisfied to a degree; the closer to 100% you get, the better your project was managed.  Now that plastic shovel, once it's off the factory floor, is either on its way to some discount retailer near you to get a price tag, or it's going to be bundled into the "Super Special Summer Sand Spectacular", where it takes its place in a bucket with a bunch of other cheaply made plastic items that are all COMPLETE!

I have no divergence for this one, unless you want to call it the Profit Divergence.  Part of what makes Plastic Shovel Red No.2 special is that once it was sent to the factory floor, we knew exactly how many needed to be made and sold to generate a profit for the company.  For software, we are either paid for the work up front (where we probably went over budget because of circumstances beyond our control), or it will be sold and people will go out and drive desire for the programme so we can continually make money off it and get people in line for the improvements that come later.  At this point the story of Plastic Shovel Red No.2 is done - it may go back for a redesign, but maybe not - while the software is just beginning and still has a long road ahead.

So there you go, my rationale as to why software is not manufacturing.

Wednesday, April 23, 2008

A Developer went Ka-Choo

(My apologies to Dr. Seuss)
You may not believe it, but here's how it happened.
One fine spring day....a Developer sneezed and went KA-CHOO!!
Because of that sneeze, a task dropped.
Because that task dropped, a Manager got assigned.
Because that Manager got assigned, he had to delegate.
Because he needed to delegate, an Architect got work.
Because that Architect got work, some brainstorming happened.
Because of the brainstorming, some fingers got pointed.
Because of the fingerpointing, around that table
     Someone got more work than they were able.
Because of that work more talking was done.
Because there was talking, a document got made.
Because a document got made people had to read it.
Because of that reading comments were made.
Because comments were made, they were sent around.
Because they went around people got to read them,
     And when they read them more comments were made.
Because of the comments more people got involved.
Because of more people more comments were made.
     And so comments were sent around one more time.
Because there were opinions no review was made.
Because there was no review more comments got added.
Because of more comments and no review the document got stale.
Because it got stale it was ignored by QA.
Because it was ignored more tempers were flared.
Because tempers got flared more comments were made.
Because more comments were made someone asked why.
Because someone asked why they looked at the task.
And because they looked at the task, it’s true I’m afraid – they
     Ran into a spectacle
And that started something they'll never forget, and as far as I know it's going on yet.

Wednesday, March 12, 2008

Who is Testing the Tests?

When Developers write code, who looks at it?
Usually another Developer, at least for a review, and more importantly someone who is testing it.
When Testers write a test, who looks at it?
Uhm....I sometimes find that hard to answer.
Sure, we occasionally send out test plans and cases for review, but does it happen all the time?  No.  Are you sure everyone looks at them?  No.  Will comments come back?  Not always.  I do try to look at the test plans my group generates; I review each one even if I think I will have no comments, since I can at least spot a grammar error or two if I look hard.  But that's not really a technical review, is it?  That's kind of the point: where is the technical review?
I've seen this come up on occasion in blogs or posts: someone writes a test tool, uses it, fixes it and makes sure it does what it's supposed to do - then what?  Maybe they suggest to others that they use it when checking certain code or functionality, but has it been as rigorously reviewed as code for Production?  I can honestly say that for myself, that has not always been the case.  But as a way to catch up and practice what I preach, I am spending more time getting reviews and following up with people, having the tools we use in house reviewed by other people, including Developers.  Not only does this make sure things work right and that I can explain what is going on, but I may find a better method of doing what I want from someone who spends more time coding than I do.
When was the last time you wrote out a spec for a tool you built?  One complex enough to need it, that is...I can honestly say I have written one, for all the tools I have made.
Let's face it, we sometimes take shortcuts: in the test environment, rigorous reviews of what we use are not perceived as necessary, because what we are using is not going into Production.  What we are testing is!  So why not have the same standards for the tools we use and write?  Be just as critical as if you were going to buy such a tool from someone, and make sure you know what it's doing and how.  The more you understand and can talk about the tool you are using, the more confident you can be that the tool is doing what it should.
And yes, I say reviews are for QA as well as Development, because in the end, it's all code.

Wednesday, February 13, 2008

Clearing the Junk

I recently got a book from my sister-in-law.  My wife's family comes from Taiwan and they are Buddhist, so I occasionally end up going with them to the Buddhist temple and meeting some of the monks there.  One I have had the pleasure of meeting, and discussing things with, is the Venerable Yifa, a very smart, funny person and prolific writer.  More can be found out about her here: Yifa.  I'm also pretty happy to know someone with her own Wikipedia entry, pretty cool.

While reading Authenticity: Clearing the Junk, which focuses on the junk cluttering our lives, I inevitably had to start working on some test plans, and started thinking - do I have junk here?  Is there stuff cluttering up my testing and making my work harder than it needs to be?  While I can be prolific when I want to be, I don't want to waste time on a point or discussion I feel can be handled in a relatively short amount of time, so my test plans are not padded in any way.  I don't write more than I have to, probably out of advanced laziness - I don't want to have to update any more than necessary if things change - but also because I know things may change, so why get set on something that may not be the same in 6 months?

So if my test plans don't have much junk in them, what about the test cases?  I took a look, and one of the benefits of using Excel is that you only have a limited amount of space to write in, so I am actually boxed in to being junk-free there.  Not a bad place to leave myself.  Then I decided to look at the Test Harness we use, and yes, lots of junk there.  Not all mine, but I certainly contributed at some point.  I need to clean that up.

But why stop there?  If I look at my calendar I have a lot of meetings, and some of them go far longer than they should; do I contribute junk there?  Probably.  I had better start cleaning that up too.  Of course, sooner or later I am going to get back to looking at my life and seeing how much junk I have there as well.  There have been cuts in many things over the past year, just to provide a healthy home for my son.  No soda in the house anymore (I do miss that), not as many cookies and snacks, I eat way more fruit than I used to, and baking our own bread has been nice and gives the place a rustic, homey feel.

I guess I've been keeping an eye on this stuff for a while now, so I suppose I am getting to a good place in life, and in work.

Those Buddhists, pretty smart.

Thursday, January 31, 2008

Do you gather?

There is that old saying, "a rolling stone gathers no moss."  Well, yeah, because it's like...uhm...moving and stuff.  If you sat around and let your bones and skills ossify, you might find there is a bit of moss somewhere, or something else; of course, it all depends on your location.  Because it's nice to get people together and discuss something about your work, I started a monthly talk here about 9 months ago, and it has worked out pretty well.

Why did I do it?  To give everyone in QA, and in Development, a place to get together, hear a short presentation on a topic, and then have a roundtable to see if there were ways to use the ideas in our own environment.  Development used to have a bi-weekly meeting to go over some new programming concept, cool widget or what have you, and while those were great to sit in on, they did not have much bearing on my day-to-day work.  Now I can actually take a subject revolving around some facet of the testing I do, make a presentation about it, instruct others who might derive some benefit, and maybe talk about ways to improve it.  This has been nice and we've had some good discussions; while I like to keep the presentations down to 20 minutes max, with the rest of the time for discussion, the one on Agile and Scrum, which the VP of our division attended, lasted about an hour and a half as questions continually came up.  That one was great because it let us look at ways to improve the process on a recently started major project.  I am glad to say that we also adopted the information radiator concept, and each of the three major sites gets a weekly radiation dose of information on the state of the project.

Topics are open to all, and we've had presentations from Documentation, and on test tools like QuickTest, Virtualization on Linux, and Debugging methods for our products, with more planned for this year on Quality and API Testing.  It's great; it lets people brush up on presentation skills in front of a friendly audience, and gives some great opportunities for information dissemination.

Of course if you are ambitious and can do them offsite at your favorite restaurant or brewpub, more power to you!