Friday, June 19, 2015

Automating 100% of the Test Cases is Impossible

I guess I've been in the software testing profession long enough to hear a new generation of tech managers voicing some of the same discredited notions I thought had been put to rest years ago.  Apparently there is a never-ending need to educate people about our profession.

Some background: I work for a big company in Seattle that divides its testers into two groups--SDETs (software development engineers in test) and manual testers.

A manager mused during a team meeting that the company was perhaps moving in the direction of employing only SDETs and automating most of the testing of the website and mobile apps.  After the meeting one of the manual test leads confessed that he had been seriously worrying about job security because of this sentiment.

But wait, I told him.  Was our test automation capable of catching the error landing page my wife spotted this week, showing in a sidebar iframe on the company's website?  I don't think so.

I told him to relax.  It's impossible to replace manual testing.  100% test coverage of user interfaces via automation is still and will continue to be impossible.  Prove me wrong.

Thursday, October 25, 2012

InvalidOperationException - The type Database cannot be constructed. You must configure the container to supply this value.

I spent two hours resolving this issue today.  I was trying to resurrect some tests from an old test project that I hadn't created.

The failing line of code was:
 static Database profilesDB = DatabaseFactory.CreateDatabase("profiles");

This project was using version 5.0 of the Microsoft Enterprise Library Data Access block. I'm posting this solution because it was the only one that worked, after trying a dozen other things. Add the following to your App.config file:

<runtime>
   <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
     <qualifyAssembly partialName="Microsoft.Practices.EnterpriseLibrary.Data" fullName="Microsoft.Practices.EnterpriseLibrary.Data, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
   </assemblyBinding>
</runtime>
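
Once the assembly binding is sorted out, the factory call resolves normally. For context, here's a minimal sketch (my own, not from the original project) of how a Database instance created this way is typically used with the Data Access block; the class name and query are placeholders.

using System;
using System.Data;
using Microsoft.Practices.EnterpriseLibrary.Data;

class ProfilesRepository
{
    // Same pattern as the failing line above: "profiles" is the connection string name.
    static Database profilesDB = DatabaseFactory.CreateDatabase("profiles");

    public int CountProfiles()
    {
        // ExecuteScalar runs the command and returns the first column of the first row.
        object result = profilesDB.ExecuteScalar(CommandType.Text, "SELECT COUNT(*) FROM Profiles");
        return Convert.ToInt32(result);
    }
}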

Joy!

Saturday, September 29, 2012

Querying TFS Test Management Service from C#

Query Test Plans from a C# app


Our shop uses Visual Studio 2010 and Microsoft Test Manager (MTM) for entering and running manual test cases. The team wondered how to export a flat list of test cases to Excel that captured all the test case steps and the result of the last test pass. Interestingly, we couldn't find any way to do it from MTM or Visual Studio.

I did some searching and cobbled together a little Winform app in C# that has worked great for our purposes. I removed error handling for brevity but left in the core stuff.

Set References to the TFS APIs

using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.TestManagement.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

First you can get a List of all your Team Projects containing Test Plans

(You need to know the URI of your TFS server)
public List<string> GetTeamProjects()
{
    var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
        new Uri(TestPlanQueryTool.Properties.Settings.Default.TfsUri));
    var service = tfs.GetService<VersionControlServer>();
    return service.GetAllTeamProjects(true).Select(i => i.Name).ToList();
}

I used the List of projects to populate a DropDown ComboBox.
On selecting a team project, I would....

Get a List of Test Plans for the Selected Team Project

public List<string> GetTestPlans(string teamProjectName)
{
    var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
        new Uri(TestPlanQueryTool.Properties.Settings.Default.TfsUri));
    var service = tfs.GetService<ITestManagementService>();
    ITestManagementTeamProject teamProject = service.GetTeamProject(teamProjectName);
    return teamProject.TestPlans.Query("SELECT * FROM TestPlan").Select(i => i.Name).ToList();
}

I populated a listview control with the list of Test Plans. Now I needed to query all the test suites contained in the test plan.
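
Here's a rough sketch of how that suite query might look; the method names and the recursive walk are mine, not the original tool's code. It assumes you hold the ITestPlan objects returned by the TestPlans.Query call above (rather than just their names) and reuses the using directives listed earlier. Dynamic (query-based) suites are skipped for simplicity.

public List<string> GetTestSuites(ITestPlan plan)
{
    var suiteNames = new List<string>();
    CollectSuites(plan.RootSuite, suiteNames);
    return suiteNames;
}

private void CollectSuites(IStaticTestSuite suite, List<string> suiteNames)
{
    suiteNames.Add(suite.Title);
    foreach (ITestSuiteBase child in suite.SubSuites)
    {
        var staticChild = child as IStaticTestSuite;
        if (staticChild != null)
            CollectSuites(staticChild, suiteNames);   // static suites can nest, so recurse
        else
            suiteNames.Add(child.Title);              // non-static suite: record the name only
    }
}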



Wednesday, September 19, 2012

Automating a WCF Service Test using VSTS


I needed to veer off from testing our website UI using webtests to testing the service layer of our website. My client company has standardized on Visual Studio 2010/2012 for our test harness.

I had a couple of options: use the unit test framework or the webtest format, a declarative XML style. I’ll outline my thought process here, with credit to other sources including the VSTS Forum where this decision is discussed: Issue with VSTS webtest on WCF Service.

I already had a bunch of test cases coded in the webtest format that I run using a custom test launcher, so I preferred to stick with the webtest style if possible.

The developer of the service expressed some concern about whether the webtest would be a good method of testing his new service, as he was most familiar with creating a service proxy and testing the WCF service by exercising properties and methods from that. He wanted me to evaluate calling the web methods from a VSTS webtest vs. a unit test by watching the http traffic using the Fiddler tool to make sure the service would be called in the same manner.

Fiddler Web Debugging Tool

Sure enough, my test harness proved that it didn’t matter whether I called the web service from a web test or a unit test: the http traffic recorded by Fiddler was identical. So for me it was a no-brainer; I’d create the tests using the webtest format. If I had to summarize the pros and cons, it would go something like this:

Automating a WCF Service test

Unit Test Format
Pros
  • Easier to call the service after creating a web service proxy class
  • Neater, cleaner code when setting properties and calling methods of the proxy object
Cons
  • The service endpoint is written into a config file and is more work to parameterize.
  • The test project is compiled to a DLL unlike the declarative XML format of the webtest.
Webtest Format (pretty much the opposite of the above)
Pros
  • The test format is a readable declarative XML file format.
  • You can exercise the WCF service using the wcftestclient.exe tool from VSTS, capture the call in the Fiddler tool, and save the session in .webtest format very quickly.
  • It is very easy to parameterize any part of the service request URI, string body, and header using the concept of VSTS context name-value parameters.
Cons
  • Creating validation code may require you to create a custom validation rule class to perform detailed validation of the service response packet (a minimal sketch appears below).
It was the final point in favor of the webtest format that won me over: we can easily point our service to a dev, test, stage, or prod environment, or change the input parameters on the method, by changing some context parameter values from our custom test framework.
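
For completeness, here's a minimal sketch of such a custom validation rule (the class name and the ExpectedFragment property are my own inventions, not from our framework). Once the assembly containing the rule is referenced by the test project, the rule can be attached to a request from the webtest editor.

using Microsoft.VisualStudio.TestTools.WebTesting;

public class ResponseContainsRule : ValidationRule
{
    // Value to look for in the service response; set when the rule is attached to a request.
    public string ExpectedFragment { get; set; }

    public override void Validate(object sender, ValidationEventArgs e)
    {
        // BodyString holds the raw response payload returned by the WCF service.
        bool found = e.Response.BodyString != null
                     && e.Response.BodyString.Contains(ExpectedFragment);

        e.IsValid = found;
        e.Message = found
            ? "Expected fragment found in service response."
            : "Expected fragment '" + ExpectedFragment + "' not found in service response.";
    }
}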

Monday, September 3, 2012

Running VSTS Webtests using MSTest.exe's /testcontainer option

I created a little Windows Forms app that allows selecting one or more of our webtests and running the MSTest executable using .NET's ProcessStartInfo class on a separate thread created with the BackgroundWorker class.  I like using the BackgroundWorker class as it keeps multi-threading simple.

It took some trial and error, plus some careful reading of MSDN's MSTest.exe Command-Line Options, to figure out that I couldn't use the /testmetadata option. The reason was that our team had created multiple test lists with the same webtest appearing in several of the lists.  When I used this syntax:

     MSTest.exe /testmetadata:mytests.vsmdi /test:webtest1.webtest

my tests got executed one time per test list!

It took some head scratching to figure out the workaround: use the /testcontainer option.  This syntax worked:

        MSTest.exe /testsettings:mysettings.testsettings /testcontainer:webtest1.webtest /testcontainer:webtest2.webtest /testcontainer:webtest3.webtest

I've used this syntax successfully even when starting 30 or more webtests in one command.
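
To tie this together, here's a rough sketch of the launcher (the MSTest path, class name, and control flow are my own placeholders, not the exact code of my app): it builds one /testcontainer argument per selected webtest and runs MSTest on a BackgroundWorker so the UI stays responsive.

using System.ComponentModel;
using System.Diagnostics;
using System.Linq;

public class MsTestLauncher
{
    public void RunSelectedTests(string[] webtestFiles)
    {
        var worker = new BackgroundWorker();
        worker.DoWork += (sender, e) =>
        {
            // Build one /testcontainer argument per selected .webtest file.
            string containers = string.Join(" ",
                webtestFiles.Select(f => "/testcontainer:" + f));

            var psi = new ProcessStartInfo
            {
                // Path is a placeholder; adjust to your Visual Studio installation.
                FileName = @"C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe",
                Arguments = "/testsettings:mysettings.testsettings " + containers,
                UseShellExecute = false,
                RedirectStandardOutput = true
            };

            using (var process = Process.Start(psi))
            {
                // Capture MSTest's console output so the UI can display it when the worker completes.
                e.Result = process.StandardOutput.ReadToEnd();
                process.WaitForExit();
            }
        };
        worker.RunWorkerAsync();
    }
}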

Sunday, August 26, 2012

Using Fiddler to create VSTS Web Performance Tests


My shop uses Visual Studio 2010 Ultimate Edition for our test framework. Frankly, I spent a few months automating test cases using the coded-UI style of browser automation before I decided that the “Web Performance Test” was better suited to our testing needs.  (I’ll refer to the web performance test as webtest in the rest of this post.)

Our website makes frequent AJAX calls for server-side user entry validation, and the recorded coded-UI tests would hang in a non-deterministic way. Even though I inserted custom retry logic when entering user values, the framework would hang and cause our team no end of frustration.

VSTS Web Performance Test (*.webtest)

On a happier note, I’ve had a lot of success using the webtest, a type of test built into VSTS that works at the http layer rather than through interaction with the web browser. In our case we use the webtest as a functional test to test the server-side code, though we also use sets of them in load tests as well.

That’s a little background on why I use webtests, but I want to comment more about using Fiddler. If you’re not familiar with Fiddler, it’s a free web debugging tool that logs all http and https traffic between your computer and the internet.

Export Fiddler Sessions when Visual Studio Webtest Recorder Can't

We soon discovered, to our dismay, that when recording our shopping cart wizard using a web performance test inside VSTS, not all http requests (read: AJAX) were captured. But happily Fiddler DOES capture all this traffic, and the bonus is that Fiddler allows you to export recorded sessions in VSTS web performance test (.webtest) format!

Screens in Fiddler after Choosing to Export Sessions

Sometimes we’ll create a hybrid recording: we record a webtest in VSTS, and if some of the requests were not captured, we capture the same workflow in Fiddler and export it to .webtest format. After adding the Fiddler-exported webtest to our project, we cut and paste the requests we want into our main webtest.

We’ve had much success using Fiddler to help us build out a complete functional test suite covering some fairly complex workflows, and we would have been stymied without this great tool.

Saturday, August 18, 2012

Which tests should be automated?

I don't know how many times at work I've had this topic come up. Typically a program manager (the one controlling the staffing) muses that if we could just automate 100% of our testing, everything would be grand.  Think of the savings!

Of course this is an unreachable dream. Why can't 100% of tests be automated? Here are a few reasons that come to mind:

Ten Reasons Why 100% Test Automation is only a Dream

  1. There are an infinite number of possible test cases.
  2. We don't have enough trained staff to create the automated tests.
  3. There isn't enough time in the schedule to create the automation.
  4. The more tests created, the more time is needed for test updating and maintenance.
  5. The greater the number of tests in our test run, the more time is needed to analyze the test run results.
  6. Some test cases are too hard to automate, such as image or layout validation.
  7. The test automation team doesn't have comprehensive product detail knowledge.
  8. We don't have a comprehensive test plan.
  9. There is no detailed functional specification.
  10. Our test framework doesn't support manipulation of custom controls.

Well that is a bit of a discouraging list.  So at this point I need to decide how to prioritize our test automation effort. Here's how I usually approach the question of which tests to automate first.

Automate pri-1 "happy path" test cases first

We have a manual test team that knows the product better than the test automation engineers. If they have spent time preparing a test plan for manual test execution, great!  We steal some of that work and go through the documentation making a list of Pri-1 test cases. These are the most important "happy path" functions that the software or website supports, so we plan to automate these test cases first.

I talk to the manual testers and try to understand which test cases are the best candidates for automation. I key in on cases that are simple to execute but involve a lot of repetitive effort, have large forms to fill out, or are generally mind-numbing to execute repeatedly.  I have to use my experience to gauge how much effort would be required to automate the test scenario. The best automation candidates are stable features that aren't frequently changed, since frequent changes mean updating the automation.

These pri-1 automated test cases also become excellent candidates for a regression test suite. I've been using Visual Studio recently and like to use both test lists and test categories for organizing our tests into sets that can be used for different purposes. I create a test list for "smoke tests" that includes only reliable deterministic tests that run fast and verify core functionality.

I create other test lists that include every test in our arsenal.  These we script to run during off-hours using Windows Task Scheduler because they will run for several hours.

Next automate pri-2 tests including negative test cases

If we have automated all the pri-1 cases, we scan for secondary scenarios or important negative test cases. Another good source of automation cases is the bug reports. For example, we found that the developers had inserted a debug code into our website wherever a translated string was not found, and the test team kept finding these codes popping up all over the website. We automated a test that crawled the entire website looking for this code and saved our test team a ton of effort.
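
Here's a rough sketch of that kind of crawler (my own reconstruction, not the original test; the marker text and link handling are placeholders). It walks root-relative links breadth-first and records any page whose HTML contains the debug marker.

using System;
using System.Collections.Generic;
using System.Net;
using System.Text.RegularExpressions;

public class DebugCodeCrawler
{
    // Placeholder marker text; the real debug code was whatever the developers emitted
    // when a translated string could not be found.
    const string DebugMarker = "##RESOURCE_NOT_FOUND##";

    public List<string> FindPagesWithMarker(string startUrl)
    {
        var offenders = new List<string>();
        var visited = new HashSet<string>();
        var queue = new Queue<string>();
        queue.Enqueue(startUrl);

        using (var client = new WebClient())
        {
            while (queue.Count > 0)
            {
                string url = queue.Dequeue();
                if (!visited.Add(url)) continue;   // skip pages we've already fetched

                string html;
                try { html = client.DownloadString(url); }
                catch (WebException) { continue; } // skip pages that fail to load

                if (html.Contains(DebugMarker))
                    offenders.Add(url);

                // Queue up root-relative links so the crawl stays on our own site.
                foreach (Match m in Regex.Matches(html, "href=\"(/[^\"]*)\""))
                    queue.Enqueue(new Uri(new Uri(startUrl), m.Groups[1].Value).AbsoluteUri);
            }
        }
        return offenders;
    }
}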

Our team rarely gets past automating a few pri-2 cases before the schedule moves us on to the next project.  What's your experience?