Friday, September 19, 2014

The Power of PowerShell pipelining

Every now and again I come back to PowerShell. It's a useful tool, simple and clean, and while I can sometimes say it's easy to use, many times I smack my head on the desk.  Working in a Windows environment, PowerShell scripts make doing many things easier, though it usually takes me some time to work them out.

For example, I am trying to output some results from failures when running SSIS tasks, but I only really need to know certain statuses, or in most cases just the failures.  Focusing on the failures, there can be a lot of them: if something fails at the beginning, EVERYTHING runs and can easily fill up the log.  So while a command like:

    $Results = &'C:\Program Files\Microsoft SQL Server\110\DTS\Binn\DTExec.exe' /Project $Path /Package "$File.dtsx"

can give me a huge result set, do I really need to display a couple of hundred lines of errors to the console, or to the test driver?  No.

I wanted to pull out only the error details.  I am skipping over the step where I check that there are errors here (just to keep it easy); what I want are the details to notify someone that "Hey, something went wrong here!  Here are some examples; want to know more?  Log in to the box and do some research!!"

So at first I stepped through everything (that's how I understand where things are working) and came up with this:

foreach ($_ in $Results)
{
    if ($_ -match "Description")
    {
        $exec_errors += $_
    }
}

Now that is way too long.  A lot of white space, plus do I really need to step through that much?  No, I don't.  Thankfully PowerShell makes this easy: since $Results is an array, I can pipeline it and match only what I want:

    $Results|?{ $_ -match "Description" }

This gets me only the data I want to display; the rest of the error output is meaningless for conveying the details of what went wrong.  I pipeline the $Results array into ?, which is shorthand for Where-Object, and within the curly braces put what I was looking for before in my if statement, under my foreach.  So now instead of a statement of three or so lines (not quite counting the curly braces), I have one line!  Yes, there can be only one.
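Here is a self-contained version of that filter run against a few fabricated DTExec-style lines (the sample text below is made up, but the pipeline is the same):

```powershell
# Fabricated stand-in for the DTExec output normally captured in $Results
$Results = @(
    "Started:  3:41:12 PM",
    "Error: 0xC0202009 Source: Data Load Description: Login failed.",
    "Warning: truncation may occur",
    "Error: 0xC004700C Source: Data Load Description: Component failed validation."
)

# ? is shorthand for Where-Object; only the lines with the error details survive
$errorLines = @($Results | ? { $_ -match "Description" })
$errorLines
```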

Still I need someplace to put those results:

    $exec_errors = $Results|?{ $_ -match "Description" }

Now I have an array of just the details I want, and using $exec_errors I can output only what I need to communicate:

Write-Output "Errors in execution: "
if ($exec_errors.Length -ge 2) {
    # Array indexing and properties need $() to expand inside a double-quoted string
    Write-Output "$($exec_errors[0]) `n $($exec_errors.Length - 1)"
} else {
    Write-Output $exec_errors
}

I will have to see about making a one-liner of the rest later on.
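For what it's worth, here is one possible shape for that one-liner, again against fabricated lines (the "Errors in execution" label and the choice to show just the first match are only one idea):

```powershell
# Fabricated stand-in for $Results
$Results = @(
    "Error: 0xC0202009 Description: Login failed for user.",
    "Error: 0xC004700C Description: Component failed validation."
)

# Filter, keep the first match, and label it, all in one pipeline
$summary = $Results | ? { $_ -match "Description" } |
    select -First 1 | % { "Errors in execution: $_" }
$summary
```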

Tuesday, September 9, 2014

SSIS Task Variables

In some current Testing I am doing work with SSIS.  One task imports data into a database that we baseline for future Test Cases, but to get the data in I need to modify some of the dates.  SSIS has an option to adjust data using Derived Columns, and with that it's possible to adjust data using Task-scoped variables; I'll have to find a reference, but using project-scoped variables did not work initially.

Basically the variables were created this way:

  1. Name, gave it a descriptive one (need to have those so you know what to look for later!)
  2. Scope, selected to be the Task being worked on, which was easy since the project only had one task for data loading.
  3. Data Type, since it was for dates I made this DateTime.
  4. Value, just hit enter/return here and it took the current date and time.
  5. Expression, this is the real work here
    1. For Now this was just left as the default - GETDATE()
    2. For a Year I made an expression - DATEADD("Year", 1, GETDATE())
    3. The same adjustment works for adding a Minute or Month, just swap out "Year"
When using these, in the Derived Columns task select the Expression column; in the upper right panel there is a tree for Variables and Parameters.  Select the User:: variable and this will update the value for that column when the Derived Columns task runs.
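Spelled out in SSIS expression syntax, the date-shifting variants look like this, one expression per variable ("Year", "Month", and "Minute" are among the datepart strings DATEADD accepts):

```
DATEADD("Year", 1, GETDATE())
DATEADD("Month", 1, GETDATE())
DATEADD("Minute", 1, GETDATE())
```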

Learning about DATEADD came from the Microsoft site.

Monday, August 11, 2014

Why History really relates to Testing

I am a History buff.  Of course I also went back to school and did a major in it, getting my Bachelor's from Boston University while still working in software.  I was going to do a CS degree but honestly, much as I like it, programming during the day and for homework didn't really suit me.  After some discussion with my wife I ended up changing to History since it was something I loved, and I think it was a great decision since I got exposed to subjects and topics I never would have thought to take on my own.  From those I learned a lot about how to analyze situations and topics, do research, and apply that to the question or topic at hand, something I do a lot of while Testing.

Testing IS History.  Regression Tests are a way of compounding a Historical Record of Fact that says these things happened, and if we want to prevent them we can do THIS or THAT.  The "What If" format, something we often talk about in History, is not only a plot line from fiction novels but sometimes a way of viewing an event from a certain perspective.  Of course, as our understanding of events increases, so does the Historical Record and our understanding of it; I find that Regression Tests are the same way.  The more I understand a certain issue, or feature, the more I can improve my test case and get the confidence level of the product higher.

When scheduling I rely on my knowledge of prior events, but being the person I am, I document everything as I go along.  This gives me a record to use and base my future estimates on, rather than giving my "gut feel" for a specific project or set of features.  When I did releases we often put times on specific steps; this way, in long releases, we were able to say when certain people were needed to help, or even to frame the maintenance window needed.

These are just two examples, but very deep and broad ones.

How do you use and document your prior knowledge on events and tests?  Do you document your test code, like a historical record, so others who come later can figure out what you were doing?


Sunday, April 27, 2014

The Person Month Revisited

I was walking down the street and noticed a few people picking up trash in the road, and thought to myself, "hey, I bet with one or two more they'd get done real quick!"

Of course, then my thoughts turned to software, and in some weird connection I jumped to the Mythical Man-Month, renamed it the Person Month to be more gender neutral, and it occurred to me that yes, it still happens, and yes, in some circumstances it may work.  Though those are slight, and slim.  It depends, of course.

Let's look at some examples:

Street Cleaners: the more the merrier!  It doesn't take much domain knowledge to pick up trash on a street, or rake leaves from a yard.  All you need is some coordination and a slight understanding of tools.  Of course, coordination with those tools will make the job faster; a group of 4 or 5 teens could do well, if those are the only resources we have available.  It's like after a party at a house, or a bbq: the more people taking little tasks and doing them, the quicker the whole job gets done.  I don't want to be demeaning, but at its core there is not a lot of skill involved in cleaning up; using a broom is fairly simple, as is carrying a bag to put trash in.  At this point the main skill is knowing what is trash.  So the skill level is not high, and the tool usage ability is low.

Too many cooks spoil the soup: yes, that adage does have some relevance.  The reason you want one cook is that he understands how the dish should taste; everyone is slightly different, and the more cooks you have, the more additions you get until the soup is a mess.  Also, hardly tasty.  Not everyone has the same taste or background to know what a particular dish should be; the best restaurants do well because every dish they serve has consistency, but this is due to long hours of training and knowledge transfer.  So we have a medium skill level and a tool usage ability that is getting higher, but now we have an added knowledge transfer quotient, as there is a lot to know about making good soup.

Software: it depends on the domain knowledge and skill set.  The higher up the domain tree you go, the more it requires a specific fit.  Sure, you could add one or two more Engineers to your project, BUT those Engineers had better be really familiar with the project, specs, and domain, or you do more harm than good.  The specs have been discussed, meetings have been held on code and flow and design, and there may be a lot of documents, depending on your environment, but there is also a lot of knowledge transfer.  There is a high skill level, a high usage requirement for tools, and a lot of knowledge transfer.

So, can you add more people and get more done?  Yes, but only in specific situations.  If people have the skills, tools, and domain knowledge, then adding them in is not an impediment, but this is rarely the case.  If you have an environment with lots of switching between teams, then some of this may be in place, but you have to know.  If you don't analyze the situation first, you will just make a mess of it.  Like the soup.

Friday, April 18, 2014

Sauce Connect Script

I am currently working with Sauce Labs, using their cloud-based testing for some automation.  The first thing I wanted was an easy way to handle starting up the tunnel to Sauce Labs.  This is a jar file, and I tried a few other methods to set the environment up the way I want, but considering that most environments I work with are Windows, I figured I may as well use the old standby of PowerShell.

Here is my current version of the script.  It checks that the environment is set up so I can output the log files in the places I want, and if a job is already running it just continues on and uses the existing one - for now.  I heavily comment code when I first write it, so that should be enough to answer most questions; if not, comment here and let me know!

----

# Sauce Labs Connect script
# Description:
# This script checks and sees if there is an existing Job that has an open tunnel
# to Sauce Labs in the current session; because of the way PowerShell works you
# cannot easily detect what is running in another session.  Basically this script
# was intended to generate the necessary Tunnel and run the test scripts with
# fewer keystrokes
# Author: Michael Furmaniuk
# Last Updated: April 16, 2014

# Command line values, so there is no reliance on the Username and Access Key
# being set in the environment or in the configuration files (although if they already are...)
param([string]$u, [string]$k)
Set-StrictMode -Version 2

#######################################
# Check and make sure some of the specific needs are met
# Sauce Connect
[string]$jobName = "sauceConn"
[string]$rootPath = ""
[string]$sauceLog = "sauceLog.log"
[string]$sauceReady = "sauceReady.log"
[string]$sauceJar = "C:\Sauce-Connect-jar\Sauce-Connect.jar"
# If necessary the following could be completed to make this easier to run, otherwise these are
# command line arguments given as $u $k
# [string]$u = ""
# [string] $k = ""

#######################################
# Start the Sauce Jar
function start-SauceConnectJar()
{
<#
.Synopsis
This function starts the Sauce Connect Jar utilizing some known and command line parameters
.Description
This function starts the Sauce Connect Jar utilizing some known and command line parameters
#>
    [CmdLetBinding()]
    Param(
        [Parameter(Mandatory=$true)][string]$sauceLog,
        [Parameter(Mandatory=$true)][string]$sauceReady,
        [Parameter(Mandatory=$true)][string]$sauceJar,
        [Parameter(Mandatory=$true)][string]$u,
        [Parameter(Mandatory=$true)][string]$k
    ) #end param
    Process
    {
        # Let's check and see what drives and temp files we have
        if (Test-Path "C:\TEMP") {
            $rootPath = "C:\TEMP"
        } elseif (Test-Path "D:\TEMP") {
            $rootPath = "D:\TEMP"
        } else {
            Write-Host "I can't seem to detect any disk drives here with a TEMP directory, that is a problem.`n"
            exit
        }
        # Build up the file paths
        $sauceLog = $rootPath + "\" + $sauceLog
        $sauceReady = $rootPath + "\" + $sauceReady
        # If these already exist we want to remove them
        if (Test-Path $sauceLog) {
            Remove-Item $sauceLog
        }
        # Note: if this call fails then there is an existing job in another session;
        # need to somehow handle this and fail gracefully: this is a TODO
        if (Test-Path $sauceReady) {
            Remove-Item $sauceReady
        }
        # Start Sauce Connect and see what is returned for the process
        # Build up the latter part of the script block
        $commandLineOption = "-l $sauceLog -f $sauceReady -d"
        # Now for the full command line
        $scriptblock = [scriptblock]::Create("java -jar $sauceJar $u $k $commandLineOption")

        # Now let's start this as a Job, next line is for debugging
        # Write-Host "Starting Sauce Connect with: $($scriptblock).`n"
        # Should now be suppressing the start text
        $output = Invoke-Expression "start-job -name $jobName -ScriptBlock { $scriptblock } 2>&1"

        # Check for the Ready File
        while (!(Test-Path $sauceReady)) {
            Write-Host "Sauce Connect is still starting...`n"
            sleep(10)
        }
        # Since you only get a Tunnel ID when the connection is complete, the ready
        # file is the defining point to know the tunnel is up; next statement was for debugging
        # Write-Host "Sauce Connect should be up...getting the Tunnel ID.`n"
        # Now actually getting the Endpoint/Tunnel ID as a verification step
        get-SauceEndPoint $sauceLog
    }
}

#######################################
# Get the endpoint ID from the log
function get-SauceEndPoint()
{
<#
.Synopsis
This function scans the initial Sauce Connect log file for the EndPoint ID to pass back
.Description
This function scans the initial Sauce Connect log file for the EndPoint ID to pass back
to the User through the console, so it can be used by the User to access the active Tunnel.
At some point this may be used for further automation
#>
    [CmdLetBinding()]
    Param(
        [Parameter(Mandatory=$true)][string]$sauceLog
    ) #end param
    Process {
        # Now that the Connection is up, and the Ready File is in place,
        # read the log file to get the Tunnel ID, or endpoint in the case of Java
        $runningLog = Get-Content $sauceLog
        # Run a regex to get the endpoint line, only one in the file at a time
        # when this script runs, then print it out so it can be used (or saved for something else later)
        $myMatch = $runningLog -match "endpoint ID:\s.*$" | %{ $_ -split "ID:\s" }
        # Next statement is a way to message that the Tunnel is up; if not needed, comment out
        Write-Host "Endpoint ID this time is: $($myMatch[1])"
    }
}

#######################################
# The Sauce branch depending on what is running,
# I can either start a new Job or shut it down as/if necessary
$jobs = Get-Job 2>&1
if ($jobs) {
    if ($jobs.State -eq "Running") {
        # There is an existing Job, either we want to stop it and restart or drop out
        # Right now going with using the existing Tunnel
        Write-Host "An existing job is running, quitting for now."
        Write-Host "...but getting the EndPoint ID to be useful."
        get-SauceEndPoint $sauceLog
        exit
    }
    elseif ($jobs.Name -eq $jobName -and $jobs.State -eq "Completed") {
        Write-Host "Found $($jobs.Name) that was $($jobs.State), getting rid of it...`n"
        Remove-Job $jobName
        # Now that the previous one is gone, let's move on
        Write-Host "Removed the old job, so starting a new Tunnel.`n"
        start-SauceConnectJar $sauceLog $sauceReady $sauceJar $u $k
    } else {
        Write-Host "Weird, found a job but it's not running, it's $($jobs.State).`n"
    }
} else {
    # Shouldn't be any other states to worry about at this point,
    # so just start the Jar file and move along
    Write-Host "Nothing running, so starting the Tunnel.`n"
    start-SauceConnectJar $sauceLog $sauceReady $sauceJar $u $k
}
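Assuming the script is saved as something like Start-SauceConnect.ps1 (the filename is my choice here, and the credentials below are placeholders), kicking it off looks like:

```powershell
# -u is the Sauce Labs username and -k the access key, per the param() block above
.\Start-SauceConnect.ps1 -u "my-sauce-user" -k "0123-access-key"
```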

Tuesday, January 21, 2014

My 2013 Year in Review

Since January seems to be the time for reflection, and looking forward, I'll oblige and review the major milestones I had for 2013 and then see if I can learn anything from it all.

The year began with a compressed timeline to set up a new hosting site; the SharePoint environment we had in one hosting center needed to be recreated in another, as we had to move data centers to a new provider.  Basically I spent the two months prior to the end of the year, and most of January 2013, getting the environment ready.  This provided some much needed updates to the existing scripts and documentation we use to create these environments.  There were some mistakes in setting up the environment; I now know the best way NOT to update IIS redirects, and have much more experience with those under my belt.  In February we brought the site up and live, with some delay, as the method I was using to copy over files needed for various apps was not completing properly, and with some work with the IT guys we got that worked out.

Of course, that hosting site lasted about two months until we started getting issues with uptime, even a 36-hour site outage due to the center trying to update their disk software for the virtual machines.  I am not a hardware guy, but apparently the disk update did not go well and brought down everything on that disk for a day and a half.  We got reports for the first 6 hours or so, then nothing until the site came up.  We knew it was up before they called us.

So another site move happened!  This one was much smoother, and with the documentation updated it went much better.  Still, two in one year, quite a feat!

All through the year I was also updating the automation framework that runs the Regression Tests, Feature Tests, and Acceptance Tests I have in place.  This is a Selenium setup with WebDriver, SpecFlow to utilize some BDD, driven with PowerShell scripts and written within Visual Studio, so I get some experience coding in C#.  It's a nice framework and has had some awesome improvements and abstraction added; I'd like to do some more but have not had a chance yet.

I completed my O'Reilly online course in JavaScript programming.  I learned a lot in it, and am still amazed that I passed considering that I think my own programming skills are subpar.  I have learned how to better structure my code, and have improved my own test framework with abstraction and reading from external data files, so it's easier to update information when our site changes.

With the PowerShell scripts that we have for our build and deploy network, I have added some improvements and am also planning a PowerShell 3 update for 2014.  I've already worked out some serious issues with the changes and how we will deal with them; I just need to test for them and then begin implementation.

There is a site redesign coming, so I need to adjust my Test Framework, determine how to deal with changing existing redirects to new locations and also add new ones for people looking for information from the outside.  Should be fun!

All in all a good year and I have some interesting work coming up in 2014 with log scanning for people trying to access some information on our site that is ONLY tracked in our IIS files, since the data is on our secondary site that is not under Google Analytics tracking.  Can't wait!

Wednesday, October 23, 2013

No Software is Perfect, but the ACA was not the best

"If you believe the doctors, nothing is wholesome; if you believe the
theologians, nothing is innocent; if you believe the military, nothing is safe."
Lord Salisbury

To paraphrase, "if you think a software deployment never has problems, nothing is shipped".

Originally I started this post thinking about deployments, but with the release of the ACA web site (portal, whatever it really is) I saw a very public airing of something I deal with on a consistent basis.  Anyone, and I mean ANYONE, who has worked in software has been involved with or seen a botched deployment, either on code being shipped to customers or, as is more the case these days, on code released to a live site.  Things generally go well, and the scale of problems varies, but it is always there.  To mitigate, we test, check, and use the code in an environment that simulates production.  There are too many test types to mention that can be used, but in any of them, actually USING the code or application will turn up issues of some kind, at the very least.  I will say something that anyone working in tech knows: No Software is Perfect.

Listening to On Point on NPR (I support Public Radio and typically I am in agreement with most of what Tom Ashbrook says), this time I sadly think he was out of his depth.  Sure it looks bad, but this is a political hot potato, and anything that could have gone bad would be a candidate for hyperbole.  Was the rollout of the government web site bad?  Oh yes; just from the comments I wonder what sort of process they ran it through, but doing this for a living I know how hard it is and what it takes.  My older relatives who barely use computers would probably have a different perspective.  Yet I also know people who have worked on government projects, and looking back at many of the government's weapons procurement and development programs, none of this should be a surprise.  The US Government still does a large amount of work on a waterfall process where they just start down a road and keep driving, filling up the tank as it gets empty so the work continues, and many times no one is navigating, so the project just keeps driving along.

Still, after hashing through many of the reports over the past two weeks, here is what I see as the major issues.  I see these on many projects, some even today, and no one has died yet.

  • No Testing, or whatever kind there was seems minimal.  If the site was crashing as people said, I have to ask what sort of Load Testing there was.  I don't even think I want to bring up Security testing.
  • Coding done by government contract: seems like a big waterfall of a project with a process to match, and it seems little Acceptance testing was done until the end, when it was deployed.
  • Outsourcing: not a bad thing, but was the company that actually did this the best choice?  From some of what I have seen, few traditional software companies wanted in on this, and the company that did do it seems to have a spotty track record at best.
  • No Communication or Transparency: this was a big project (I use that sarcastically, since most web applications and sites don't have millions of lines of code, but it's big in the scale of visibility), and given the size, much of the status seems to have been held close by the leaders.  Sure, it's a political liability, but when you aren't transparent, people make up their own rationales.
  • Tech Surge: oh yeah, we all know throwing more people and money at something gets it done faster!  Right?
  • Were the people really the "best and brightest"?  Or if they are only coming in now, why would they want to refactor code?  Seems a waste of a skill set to me.  And if they are only coming in now, then who was working on this?
  • Bureaucracy.  'nuff said.

Rocky rollouts are the norm, but maybe this time people who actually run projects, or those on them that they feel are going wrong, can point to the ACA web site rollout and say, "let's not be like them."

Maybe, just maybe, then, someone will listen.