I've actually ended up doing this a couple of different ways over the years, and I can't guarantee any one of them will work for anyone else; I'm not even sure they always worked for me. There was always more analysis we needed to do.
- Jump in there and do it. Oh yes, we jumped headfirst into making some nightly tests; anything was good so long as it tested SOMETHING. This was the worst way, and it was only done because the Engineering VP wanted something in place, mostly, I think, so he could tell the Board that we had automation or nightly testing. While it was a good exercise in getting QA involved with the Nightly Tests (we previously had none) and it allowed us some time to do some scripting, we actually got more out of the scripts used for our regular Test Phase. This is the worst way to do it, since you are basically doing anything for the sake of doing something. I think we ended up calling this the Exploratory Nightlies: no plan, no review of technology, just seat-of-the-pants Nightly Testing.
- Use what Dev had. Developers had already written some tests, in Perl and a few other unit-test frameworks, so we had a good starting setup. The tests were established and covered the areas of code that needed it, since they were always written to cover bug fixes. There were Unit Tests as well, but those were quick to run; the Nightly Tests ran longer, so they became the place to put all the longer-running tests. This worked fine, since the reporting mechanism was also automated and plugged into much of the Dev and IT infrastructure already in place. The problem was that if new tests were added that did not quite fit the Perl model the Nightly Framework was written in, reporting would cover an entire suite of tests as a single success or failure. When we added JUnit, it caused issues with both the reporting and the environments, and those issues were not always traceable to the individual test, which was sometimes Ant and JUnit started by Perl. It was a framework that worked for a time, but it was not scalable as we added new applications and languages to the product.
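The granularity problem described above can be sketched in a few lines. This is a hypothetical illustration, not the original Perl framework: `report_suite_level` mimics a harness that records one result per suite (so a single JUnit failure makes the whole suite read FAIL with no attribution), while `report_test_level` keeps one entry per test. All names and the sample results are invented for the example.

```python
def report_suite_level(suite_name, test_results):
    """Collapse a suite's individual results into one pass/fail entry,
    the way the old nightly harness reported a wrapped JUnit suite."""
    passed = all(ok for _, ok in test_results)
    return {suite_name: "PASS" if passed else "FAIL"}

def report_test_level(suite_name, test_results):
    """Keep one entry per test so a single failure is attributable."""
    return {f"{suite_name}.{name}": "PASS" if ok else "FAIL"
            for name, ok in test_results}

# Hypothetical results as a list of (test name, passed?) pairs.
junit_results = [("testLogin", True), ("testLogout", False), ("testTimeout", True)]

print(report_suite_level("AuthSuite", junit_results))
# The whole suite reads FAIL; you can't tell which test broke.
print(report_test_level("AuthSuite", junit_results))
# Per-test reporting shows exactly one failure: AuthSuite.testLogout.
```

The design point is simply that whatever launches the tests (Perl, Ant, a shell script) must pass individual results through to the reporting layer, or every failure costs a manual dig through the logs.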
- Where was that Matrix again? Ah yes, the Test Matrix. Each item of functionality had a test point and a test case, so we wrote something to cover each one. This was a way to get the coverage we wanted: if a single test failed, we knew what work was being done, and if a whole section of tests failed, we knew a check-in was bad, so we would take note of it and spend more time testing that area later. The technology here was less important than the fit, though we did try to make sure everything we wanted to put in the Matrix could be done with the technology we chose. We still ended up with a blend of technologies, but that seems to be the norm. It became easier to manage, though, in that we could match tests with cases; how this will go in the long run I can't yet say.
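The matrix idea above boils down to a simple mapping. Here is a minimal sketch, with invented area and test names, of how a nightly report could use that mapping to tell "one flaky test" apart from "a whole area broken by a check-in":

```python
# Hypothetical Test Matrix: each functional area maps to its test cases.
TEST_MATRIX = {
    "login":    ["login_basic", "login_bad_password", "login_timeout"],
    "checkout": ["checkout_cart", "checkout_coupon"],
}

def areas_needing_review(failed_tests, matrix=TEST_MATRIX):
    """Return areas where every test failed -- the pattern that
    suggests a bad check-in rather than a single broken test."""
    failed = set(failed_tests)
    return [area for area, cases in matrix.items()
            if cases and all(case in failed for case in cases)]

# One login test failing points at that test; all of checkout failing
# points at whatever check-in touched checkout.
print(areas_needing_review(["checkout_cart", "checkout_coupon", "login_timeout"]))
# → ['checkout']
```

Even this toy version gives the two properties the post describes: every test traces back to a functionality item, and a section-wide failure is visible at a glance.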