Wednesday, June 28, 2006
In my current position, part of my work involves using a Test Harness to perform some distributed testing. It's a Harness written in Perl, which is one of my favorite languages; it covers multiple operating systems (Windows, Linux, Solaris, HP-UX and AIX), and while parts of it are the usual Perl hack, most of it is in Object-Perl. The developer of this and I are working to make it even more distributed and objectified (if that's a word), and also spreading the word on how to use this tool so we can allow others to use it for Development testing.
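The real harness is the developer's work and far bigger than anything I can show here, but in spirit the Object-Perl side of it boils down to something like this minimal sketch; the package name, host and command below are invented for illustration, not taken from the actual tool:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch only: one object per lab machine, each test pushed
# out over ssh and recorded as pass/fail.
package Harness::Node;

sub new {
    my ($class, %args) = @_;
    my $self = { host => $args{host}, os => $args{os}, results => {} };
    return bless $self, $class;
}

sub run_test {
    my ($self, $name, $command) = @_;
    # system() returns 0 when the remote command exits cleanly.
    my $status = system('ssh', $self->{host}, $command);
    $self->{results}{$name} = ($status == 0) ? 'PASS' : 'FAIL';
    return $self->{results}{$name};
}

sub report {
    my ($self) = @_;
    for my $name (sort keys %{ $self->{results} }) {
        printf "%-8s %-20s %s\n", $self->{os}, $name, $self->{results}{$name};
    }
}

package main;

# Made-up host and test command, purely for illustration.
my $node = Harness::Node->new(host => 'solaris-lab-01', os => 'Solaris');
$node->run_test('smoke_install', '/opt/product/bin/install_check.sh');
$node->report();
```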
We ran it on one of the latest builds, and sure enough there were lots of failures, so we talked, and the initial response was that this happens and we should run it again. So we did, and of course there were fewer failures. Over the weekend some network upgrades happened, never a good time in a test cycle, so I started running them again after the lab was back up. I got different errors, and in some cases fewer. So I started thinking: what was it we were trying to prove here? Did we simply want to run the harness over and over again until the errors were gone? Did we want to investigate the initial errors and see where the problems came from? Did we have enough confidence in the Harness to know the results we received were accurate?
After some discussion, it came down to what I considered the usual response: until we know it's not the Test Harness creating the issues, there is no reason to report them as bugs, unless they're bugs against the Harness itself. We could run some of the tests manually, but the reason for having the Harness is to cut that manual time down...so we are in that state where we debug the Test Harness until we are sure it works, and in parallel we use it on the product and try to get some confidence through repeatability in the Test Results. I don't think that will happen for a while, but at least we will spend the time seeing how the tool reacts, and be able to judge better how it will work out in the future.
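For what it's worth, the repeatability check doesn't have to be fancy. Something along these lines (the file names are invented for the example) is enough to split the failures into "worth filing" and "suspect the harness":

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch: compare the failing-test lists from two runs of the
# harness. Repeat failures are candidates for product bugs; one-off
# failures point back at the harness or the lab environment.
sub read_failures {
    my ($file) = @_;
    open my $fh, '<', $file or die "Cannot open $file: $!";
    my %failed;
    while (my $line = <$fh>) {
        chomp $line;
        $failed{$line} = 1 if length $line;
    }
    close $fh;
    return \%failed;
}

my $run1 = read_failures('run1_failures.txt');   # invented file names
my $run2 = read_failures('run2_failures.txt');

my (%all, @repeat, @once);
$all{$_} = 1 for keys %$run1, keys %$run2;
for my $test (sort keys %all) {
    if ($run1->{$test} && $run2->{$test}) { push @repeat, $test }
    else                                  { push @once,   $test }
}

print "Failed in both runs (investigate as product bugs):\n";
print "  $_\n" for @repeat;
print "Failed in only one run (suspect harness or environment):\n";
print "  $_\n" for @once;
```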
I'm old. I'm cranky and think the Luddites were on to something.
Wednesday, June 21, 2006
Those pesky nightly tests...
It's come up a few times over the years: how do I make sure my coverage is good, and that we are testing everything we should, all the time? Or at least in enough time? Is there enough time to test all we want to? The questions just keep coming, over and over.
I've actually ended up doing this a couple of different ways over the years. I don't guarantee any one of them will work for anyone else, and I'm not exactly sure they worked for me all the time either; there was always more analysis we needed to do.
- Jump in there and do it. Oh yes, we jumped headfirst into making some nightly tests; anything was good so long as it tested SOMETHING. This was only done because the Engineering VP wanted something in place, and mostly I think it was a way to tell the Board that we had automation or nightly testing in place. Has this topic ever been covered in an in-flight magazine? While it was a good exercise in getting QA involved with the Nightly Tests (we previously had none) and it gave us some time to do some scripting, we actually got more out of the scripts used for our regular Test Phase. This is the worst way to do it, since you are basically doing anything for the sake of doing something. I think we ended up calling this the Exploratory Nightlies: no plan, no review of technology, just seat-of-your-pants Nightly Testing.
- Use what Dev had. Since the Developers had already written some tests, in both Perl and a few other Unit Test frameworks, we had a good setup. Tests were written, established, and covered the needed areas of code, as they were always written to cover bug fixes; there were Unit Tests as well, but those were shorter to run, and since the Nightly Tests ran longer they became the place to put all the longer-running tests. This worked fine, as the reporting mechanism was also automated and plugged into much of the Dev and IT infrastructure in place. The problem was that if new Unit Tests were added that did not quite fit the Perl model the Nightly Framework was written in, then reporting would cover an entire suite of tests as one success or failure. When we added JUnit it also caused issues with the reporting, and some issues with environments that were not always translated down to the individual test, which was sometimes Ant and JUnit being started by Perl (there is a small sketch of the per-test reporting we were missing after this list). It was a framework that worked for a time, but it was not really scalable with new technology as we added new applications and languages to the product.
- Where was that Matrix again? Ah yes, the Test Matrix. Each item of functionality had a test point and a test case, so we wrote up something to cover it. This was a way to get the coverage we wanted, and we could say that if a test failed we knew what work was being done, and if a whole section of tests failed we knew a checkin was bad. So we would take note of it and spend more time later testing that area. Technology here was not so important, though we tried to make sure everything we wanted to put in the Matrix could be done with the technology we chose; the fit was more important. Still, we ended up with a blend of technologies, but that seems to be the norm. It became easier to manage, though, in the sense that we could match tests with cases, though how this will go in the long run I can't tell yet.
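As a concrete example of the reporting problem in the second approach: if the Perl framework only checks the exit code of the Ant run, the whole JUnit suite collapses into one pass/fail. A small wrapper along these lines gets back to one result per test. This is a sketch, not our framework; the Ant target name and report directory are invented, and it assumes Ant's plain-text JUnit formatter is in use:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch: run the JUnit suites through Ant, then walk the
# TEST-*.txt files the plain formatter writes so each Testcase: line gets
# its own PASS/FAIL, instead of one result for the whole suite.
# The Ant target name and report directory below are invented.
my $ant_status = system('ant', 'nightly-junit');
warn "ant exited non-zero; reporting whatever results exist\n" if $ant_status != 0;

my %results;
for my $report (glob 'build/test-reports/TEST-*.txt') {
    open my $fh, '<', $report or die "Cannot open $report: $!";
    my $current;
    while (my $line = <$fh>) {
        if ($line =~ /^Testcase:\s+(\S+)/) {
            $current = $1;
            $results{$current} = 'PASS';   # assume pass until told otherwise
        }
        elsif (defined $current && $line =~ /^\s*(FAILED|Caused an ERROR)/) {
            $results{$current} = 'FAIL';
        }
    }
    close $fh;
}

# One line per test, which the nightly report can roll up however it likes.
printf "%-50s %s\n", $_, $results{$_} for sort keys %results;
```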
Thursday, June 15, 2006
Flavor Of The Month - Part 2 (Agility with Agile)
At the same company that never got off the ground with Six Sigma, though we had a pretty efficient cappuccino process, at some point someone got tuned in to Agile. This time we had a meeting between the VP of Engineering, the President of the company, the Dev Director, the Architect (who became the other Dev Director...long story there) and me (the QA Manager). We sat and discussed it and its benefits, then with the Cockburn Agile book in hand went off to read about it and see what we could do about implementing the process. The system was nice: QA and Dev working together, with PM, to get the projects completed on time. This was all done because we were consistently late with projects; requirements changed during the development and test cycles, often at the last minute, while we were also trying to do these ambitious projects in short cycles because "we were the best!". You can probably guess what happened.
After a couple of days of discussion we broke it to the teams: they would be split into 3 groups, each one focused on a different product the company provided, and they would all sit in the open areas designated for the groups. QA moved out of the QA lab (which I will admit was pretty noisy with the 20 or so computers and video hardware) and into the Agile Groups, which I will call Groups 1, 2 and 3. Furniture was arranged, the spaces were broken up with help from everyone, and everything was set up over the weekend so that on Monday morning it was ready to go. Here's how it all broke down...which goes to show that even with good intentions, if the entire business process doesn't change, nothing will really get fixed.
Group 1: This one I called the "Puppy Pound", mostly due to the half walls ringing the space outside my office; there were about 16 desks around the edge where everyone sat, which reminded me of a puppy pen in a pet store. This space was also on the way to the kitchen from the rest of the company, so the opening that let people walk through was closed off at one point to keep the noise down; it didn't do anything about the noise in the kitchen, though. The group worked together fine: QA had two members alongside about 12 developers, one doc person (who was never hired) and a PM delegate (who surprisingly worked on maybe half the projects until she left). The Team Lead was constantly pulled into fixing legacy issues on the core product, items that were always pushed off to be fixed in the next release because our DB was overtaxed and constantly being updated without enough testing of all the components, plus we had excessive amounts of data; we tried to manage the changes in testing but sometimes failed, though we did know about all the changes happening. Another member of the team worked with the Architect on special projects supporting the core product, but he was distracted. So the team, while sitting together, did what they could, but did not do a lot of new work; they ended up being a sort of triage and special projects team. My personal view.
Group 2: This group was a bit smaller but had more PM and Doc people, including a UI person! They had really high walls around them on most sides, they worked on the next generation project, and they were overseen by the Architect, who sat in the next room. They did OK, but they also had a lot of turnover; the project they were on was behind for months and had extremely tight schedules, which I disagreed with but attempted to work with anyway. This was the unfriendliest group: they never really talked as a team (you'll soon see why), and even sitting next to each other, the QA members sometimes had no idea what was going on. The Architect never managed them effectively, and the dates slipped because what oversight there was ended up being minimal; it turned out that when one of the DB guys left, he hadn't done any work for quite a while and no one seemed to have noticed. Apparently the Architect had also told the Developers on the team not to really mix with anyone else, and they didn't; a few of them talked to the other members of the group, but it was nothing constant. I kept saying the dates were in danger, but the Architect claimed it was all under control, and as he was a coffee buddy of the VP I was ignored (the two of them beat anyone into submission who disagreed with them); eventually I left because it was not worth my time to deal with someone who was not going to listen. The product went out a little late and much scaled back, and took months to work out the kinks. From everyone I talked to about the project afterwards, it was a shambles. Did I mention the schedule for this was decided by the company officers at a meeting in the Spring, for a Winter release, without knowing the technology or talking to anyone who might work on it? Did I also mention that the main database and software we were basing this work on came from a release that the company which created it would no longer be developing, or supporting, after our initial release date? Oh yeah, those are kind of important.
Group 3: This one personifies my personal belief in geography killing teamwork. This group had an area that snaked along the wall, with one of the main doors in the middle of it; the QA and Dev members gravitated to one side, PM and Doc to the other, and never the twain really met. They pushed hard and got a lot of things out on tight deadlines, and the Dev Director was totally involved in this and was often seen coding with the rest of them. Dev and PM rarely seemed to talk. There was some change as things went on, and a reliance on technology from a company in another country that constantly claimed to do things it did not. When they were on site they would tweak and fix things for us, but they never trained us on everything they were doing, though we tried and did get some training by watching them over time, even getting friendly with one or two who helped us out by showing us some tricks. The weakness here was a reliance on technology we had no control over, which was being pushed to the edges of what it was claimed it could do, on a schedule that was totally unrealistic. Items Agile was not going to solve.
The company is in the process of being liquidated; right after I left there was a mass exodus, which was not my intent (I actually did leave for personal reasons). Still, much of this is my view and comes from discussions I had with people who were still there, so if you worked at this place (and I am sure you will know it) then take this all with a grain of salt. We were doomed from the beginning.
Friday, June 9, 2006
Documentation
When projects have come in previously, there has usually not been much in the way of documentation; with minor updates, or when I was dealing with a service provider on the web, we had the occasional page update. Occasionally we had a Release Note, or a help file, that was sent with the software or linked into a web page somewhere, but then the question would come up...how much do we test the documentation?
Granted, the Documentation Groups know how to spell and use the grammar check in whatever software they use, and sometimes they even use the software itself to check the steps; that has to be the best group I ever worked with! Still, when we get a package of code into QA we like to look at everything, and I always add a little time somewhere for documentation. Even though there is someone reviewing the pages, I have encountered the odd missing comma, double period, mistyped word or homonym. So I at least try to look over the page before it goes out, and my keen editorial third eye often finds something wrong on the page before I read a word.
This was something I was reminded of recently when helping out a friend. She is writing documentation in Chinese that needs to be translated to German. To handle the intermediate step (I have never seen a Chinese to German translation, but I bet it's interesting) they were going to go Chinese to English to German, but the English was awkward. I was asked to review it, and sure enough some edits were needed. I made a few, tweaked a word or two, then sent it back to her; I have not heard back, but I hope it made for a better document to be turned into German. Not that I will ever understand it.
In major projects I push to have an entry on the project schedule to get the docs in, and in with enough time for revision, as there is usually something that needs a tweak or two. Never a lot of time, but enough to make sure we cover it completely. It's not really Testing the Documentation, more like a review, but it is important to make sure that what the Customer gets conveys, in a language they can understand, what the software should do. To me it's an added dimension to Testing, more of a tangent, but still part of the whole. As a generalist I find that QA is often more suited to this than, say, Developers: we are technical enough to get down into the guts of the code and test appropriately, but we can still stand back, think like a Customer, and use it like they do. This adds a good measure of completeness for me, and I think it improves the Customer Experience a bit while also making sure what they are delivered is good, understandable, and won't make them think that something was rushed.
Wednesday, June 7, 2006
Transition Plans
Since I am changing jobs this week, from a pretty large startup to a multi-national, it's been on my mind that the "knowledge transfer" needs to be done. I've done it a few times before, and I am well known to be a document hound; if something has to be done, it's documented. No questions asked. We'll probably end up doing it again in 6 months or more, so I like to review how we did it previously.
Back when I worked at a company I will call NaviPath (which was its real name, and since it no longer exists I figure it's safe), we had an initiative to get ISO certified. My job at NaviPath was in Release Engineering, but I also did Smoke testing of the builds to make sure everything worked properly; I've done both Release and QA in my many years. I have no idea why ISO, but we needed to do it. There was a documentation reviewer who made sure everything going into source control, in the appropriate document form, was correct and numbered properly. The pyramid went around to test people on the procedures (you can be sure the pyramid got lots of movement by others as well) and to make sure they understood what we were doing. True, the company folded before the initiative finished, but from it all I took the document format with me, because it seemed a good fit for QA.
So what does this have to do with Knowledge Transfer? All will become clear.
When I started at my next company, I again wrote down procedures and processes. Using the ISO format, and my own numbering, I kept track of the tasks I was doing. Generating a Test Plan: a short and simple document on creating Test Cases. Installing parts of the software: a document with small screenshots that showed all the steps, the things to watch for, the gotchas that happen, and everything else we could think of; these grew over time. All in all, though not all of them contained excessive detail, we had about 30 documents by the end of my employment as the QA and Release Manager. As I brought new people in and they wanted to know how to do something, like rebuild the QA servers in the lab, I pointed them to the documents. If Dev wanted to know how something was done in QA, there was a doc that explained it. Or I talked it out with them, then wrote something up to be used the next time. These documents became our official departmental records, and even after I left I heard from people who stayed behind that they appreciated them, because they had forgotten to ask about one of the tasks we only did once or twice a year, but the document was there with all the steps.
So in looking at transitioning now, I can think of all the work I do and have done over time, but there might be something I miss. Or maybe I am not thinking of it in these last few days, because I am transitioning the work I am doing now and anything on my mind for planning in the next month, not the two-hour task I did 6 months ago that won't need to be done until next year; if it's written down and documented properly, then I don't need to. Or I can review all the documented items and remind myself of anything that is needed. I've also found that going over all the documents with someone brings up other tasks that were slight, or may need to be added somewhere. Sure, there is a maintenance cost, but after 3 years I found that 10 minutes a week to review the documents was enough to keep them updated; ones that needed drastic changes were either deprecated in favor of new ones, or we added a new section or document for the new way if we had legacy items that still required the old way. So even though I spent 4 days going over everything I knew about the systems, how we tested, and the best ways to test the next generation, I also spent time reviewing all the departmental documents that I had written over time.
The Knowledge you leave behind as an artifact is just as important as handing off the project you are on now, and often the work you leave documented somewhere will help everyone out more in the long run. That is how I would rather be remembered.
Friday, June 2, 2006
Adding QA to Scrum (one Company's view)
My current company has been running Scrum as its Development process, but as this tends to be a wrapper for Development, at least in my view, it's been tough to fit QA into the process. We do have release iterations where there is a focus on making sure the code is stable, and we concentrate specifically on testing and bug fixing, though we tend to freeze late in the iteration so the final testing gets cut. What's new about that?
So in trying to decide how we wanted to set up QA, the thinking became that QA needs to be more in the Analysis camp and not so much in the Process camp. This is something of a switch, at least for those of us who have been squarely in the Process camp, as we now have less documentation with which to develop test cases and plan out mini-release cycles every Iteration; but is this really much of a change? The way we are changing, most of the Release Testing will get backloaded into the Release Iterations, with the preparation of Test Cases and Unit Testing being done ahead of time and increased during the iterations.
With each Iteration QA can analyze the new functionality being developed, discover the Test Cases needed as the code is written, and generate the Unit Tests necessary to fulfill the Test Cases; QA can pair up with the Developers to check their Unit Tests as well. We tend to have two levels of Unit Tests: what we actually call Unit Tests, which the Developers create and use to test their code, and Black Box tests, which QA creates and which sit one step up from the Unit Tests, testing the classes as a whole and starting on Path testing. Did I mention we are doing a lot of Java and JUnit? This allows both the Unit and Black Box tests to be run in our Nightly Test Suite, so we have continual coverage once we add a new test. So we cover method and function testing at the low level, and Functionality and Use/Test Cases at the high level, providing more coverage as the work progresses.
So if this is followed, we generate new cases each Iteration, adding to our Test Plans, which may or may not be run during the pre-Release Iteration, and we increase our Automation. Not much different, really, from being in the Process camp, except that we stick with the Developers to find out what we need to do and test, rather than looking at documentation and understanding the Use and Business Cases. If that Case information is needed we can get it from the Product Owner, or Product Director depending on your terminology, so the only change here is the information source: rather than written materials, it's going to be dynamic, with the data we need gathered from the flows between the Developers and the rest of the Scrum Team.
Originally we thought this might be a difficult, tough change, but when we spelled it out it's actually not much different from what we do now; it's not the What that has changed but the How. Of course, none of this has been in practice for very long, so it will depend on how it's handled over the coming Iterations. It would be interesting to see, if it were not for the fact that I am leaving the company this month.