Archive for the ‘UnitTest’ Category.

UnitTest MsSql database using Slacker, SlackerRunner

Our open source MsSql UnitTest frameworks, Slacker and SlackerRunner, were featured on Microsoft Channel 9. Now it's easier than ever to UnitTest your database and add it to your CI/CD build pipeline.

Watch Eric Kang from Microsoft explain in detail what you need and how it works.

TFS 2015, Xunit transform to Trx

It turns out I was wrong in my last post about transforming nUnit tests to Trx ( MS Test ) format for display in TFS 2015. It falls short because the nUnit format that xUnit spits out does not contain any console or standard-out traces, so when you want to view more information about your tests, or error traces on failures, none are available. I went down the road of using the nUnit format because TFS was supposed to accept it and display it without transforming to Trx, but that does not work. So the solution is to use the xUnit XML v2 format by passing the -xml switch to xUnit, then convert that xml to Trx using an Xslt transform. Mine is here on GitHub if you want a copy.
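
Applying the stylesheet itself is just a couple of lines of .Net. Here is a minimal sketch using System.Xml.Xsl; the file names are placeholders, point them at your own stylesheet and at the xml file the xUnit runner produced with -xml.

using System.Xml.Xsl;

class XunitXmlToTrx
{
    static void Main()
    {
        // Load the xUnit-v2-xml-to-Trx stylesheet and apply it to the result
        // file written by the xUnit console runner (-xml switch).
        var transform = new XslCompiledTransform();
        transform.Load("XUnitToMSTest.xslt");     // placeholder stylesheet name
        transform.Transform("TestResults.xml",    // xUnit v2 xml output
                            "TestResults.trx");   // Trx file for TFS to pick up
    }
}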

DNX .Net WebApi Integration testing

I needed to compare some gRPC round-trip numbers to plain .Net WebApi. So the question became how to test WebApi round trips from a UnitTest harness.

It’s actually pretty simple if you are using the newer DNX libraries. I’m using the TestHost library to get things done.

From project.json

"System.Net.Http": "4.0.1-beta-23516",
"Microsoft.AspNet.TestHost": "1.0.0-rc1-final",
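
The test below spins the site up in-memory with UseStartup<Startup>(). The Startup class and controller are not shown here; a minimal sketch that would satisfy the /api/values route ( assuming the RC1-era Microsoft.AspNet.Mvc package is referenced, the controller name is illustrative ) looks roughly like this.

using Microsoft.AspNet.Builder;
using Microsoft.AspNet.Mvc;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Register MVC so the api/values routes can be resolved
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseMvc();
    }
}

[Route("api/[controller]")]
public class ValuesController : Controller
{
    // GET api/values/5
    [HttpGet("{id}")]
    public string Get(int id)
    {
        return "value" + id;
    }
}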

The UnitTest itself

[Fact]
    public async Task PingTheServerRepeat()
    {
      // Setup client & server
      TestServer server = new TestServer(TestServer.CreateBuilder().UseStartup<Startup>());
      HttpClient client = server.CreateClient();

      // Make a request and check response
      var getResponse = await client.GetAsync("/api/values/5");
      var response = await getResponse.Content.ReadAsStringAsync();
      Logger.Log("web api, response=" + response);

      // Hit the webapi, repeatedly
      Stopwatch watch = Stopwatch.StartNew();
      for (int i = 0; i < Util.Repeat; i++)
      {
        getResponse = await client.GetAsync("/api/values/" + i);
        response = await getResponse.Content.ReadAsStringAsync();
      }
      watch.Stop();
      Logger.Log($"WebApi, round trips={Util.Repeat}, Execution time, millisec={watch.ElapsedMilliseconds}, average roundtrip=" + ((double)watch.ElapsedMilliseconds / Util.Repeat));
      Logger.Log("web api, response=" + response);

      // Done, release resources
      client.Dispose();
      server.Dispose();
    }

TFS 2015, Xunit, Nunit, transform to Trx

As TFS vNext currently doesn't have a NoShadowCopy option ( should be coming with the new DNX / 2016 ), I had to modify my new TFS build definition to use the xUnit runner to run my tests and have it output the result file in the optional nUnit xml format ( the -nunit option ). However, true to form, TFS 2015 doesn't understand the nUnit xml format even though they have it as one of the options on the vNext UnitTest results upload task. The next part was figuring out how to transform the nUnit xml to Trx so TFS can show the results on the dashboard. Using nxslt3 to transform with the NUnitToMSTest.xslt transformation seems to be the way to go. However, NUnitToMSTest.xslt doesn't transform it correctly, at least not to the liking of VS2012, VS2015 or TFS 2015, so I had to modify the xslt slightly; after that it loads in VS and on the TFS dashboard. Below is the new version of NUnitToMSTest.xslt.

New version
NUnitToMSTest.xslt

msTest Initialization UnitTest Framework error – No types found implementing

I started getting the following error when running specific UnitTests on the server, and only on the server. Running the tests from Visual Studio, both with msTest and TestDriven, was fine, but obviously something was missing. The good part about using different runners is that you often find niche cases you would otherwise have missed. I was able to duplicate the behavior using msTest from the command line on the development machine. Then I noticed that when msTest runs, it creates a whole new directory for each run ( based on a timestamp ) in order to run the tests in isolation. That is where my problem was: the files needed by the tests have to be copied into that new directory, and since I was loading some of the dlls through reflection, msTest had no instructions telling it those dlls needed to be copied over as well.

Initialization method Framework.Conventions.UnitTests.ConventionAdapterExtensionsTests.Initialize threw exception. Framework.FrameworkUsageException: Framework.FrameworkUsageException: No types found implementing IConventionAdapterProvider.
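
The discovery that fails boils down to something like the sketch below. This is my reconstruction for illustration, not the framework's actual code: scan the dlls sitting in the run directory for a type implementing IConventionAdapterProvider and throw when none are found. If msTest never copied Framework.Conventions.Compiler.dll into its per-run directory, the scan comes back empty.

using System;
using System.IO;
using System.Linq;
using System.Reflection;

static class ConventionAdapterDiscovery
{
    // Illustrative only: load every dll next to the test assembly and look for
    // implementations of IConventionAdapterProvider by interface name.
    public static Type FindProvider(string runDirectory)
    {
        var providerType = Directory.GetFiles(runDirectory, "*.dll")
            .Select(Assembly.LoadFrom)
            .SelectMany(assembly => assembly.GetTypes())
            .FirstOrDefault(type => !type.IsAbstract &&
                type.GetInterfaces().Any(i => i.Name == "IConventionAdapterProvider"));

        if (providerType == null)
            throw new InvalidOperationException("No types found implementing IConventionAdapterProvider.");

        return providerType;
    }
}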

The trick is to declare the needed dlls as deployment items on your UnitTest class and you are in business. In my case I needed these three dlls.

[DeploymentItem("Framework.Compiler.dll")]
[DeploymentItem("Framework.Conventions.Compiler.dll")]
[DeploymentItem("Framework.Logging.dll")]
[TestClass]
public class ConventionAdapterExtensionsTests
{

Selenium browser UnitTesting from TeamCity

I just set up a browser testing framework utilizing Selenium from a TeamCity project. The usual suspects are involved: Gallio, MbUnit, Nant and even C# UnitTests.

The usual scenario when it comes to automating browser testing is that the QA / testers will create some scripts to run a browser against your website. Somehow those tests are usually not maintained very well and often are run by hand. There is not much value in browser testing scripts if you have to run them by hand.

As I needed browser testing on one of the projects I'm on, I decided to look into a more automated setup for running the browser tests. There seem to be two big players, WatiN and Selenium. Selenium lends itself to a broader range of testing, so naturally we will go with Selenium.

Here is the scenario I want: the tester installs a recorder on his computer, in this case a FireFox plugin. The tester records the tests and runs them in the browser using the plugin tool. Once happy with the tests, the tester checks them into the repository and lets a developer know that there are new or changed tests. The developer takes the script and turns it into a C# UnitTest, simply by having Selenium convert it to UnitTest code. The developer then updates or adds the tests that resulted from the scripts and checks them into the repository. The conversion step could be automated in the future once Selenium supports that. The next step is to run it all from TeamCity, and after the run you get an email with the results.

So let's take a closer look at what is needed. We need the UnitTests to be able to run against different servers using different browsers, so we will pass values from TeamCity to the Nant script that is responsible for compiling and running the tests. This is how your test C# configuration file might look.

<!-- Selenium RC properties -->
    <add key="SeleniumAddress" value="localhost" />
    <add key="SeleniumPort" value="4444" />
    <add key="SeleniumSpeed" value="0" />

    <!-- Browser targets -->
    <add key="BrowserType" value="*firefox" />
    <add key="BrowserUrl" value="http://10.9.169.198/" />
    <add key="BaseUrlPath" value="IPCA.Dev/" />

Then the base test class will look something like this.

[FixtureSetUp]
        public virtual void TestFixtureSetup()
        {
            // Read from config
            msBrowserType = getConfigSetting("BrowserType", msBrowserType);
            msBrowserUrl = getConfigSetting("BrowserUrl", msBrowserUrl);
            msBasePath = getConfigSetting("BaseUrlPath", msBasePath);
            //
            msSeleniumAddress = getConfigSetting("SeleniumAddress", msSeleniumAddress);
            miSeleniumPort = int.Parse(getConfigSetting("SeleniumPort", miSeleniumPort.ToString()));
            msSeleniumSpeed = getConfigSetting("SeleniumSpeed", msSeleniumSpeed);

            // Start up the selenium session, using config values
            selenium = new DefaultSelenium(msSeleniumAddress, miSeleniumPort, msBrowserType, msBrowserUrl);
            selenium.Start();
            // Clean errors
            verificationErrors = new StringBuilder();

            // sets the speed of execution of GUI commands
            selenium.SetSpeed(msSeleniumSpeed);
        }

        [TearDown]
        public void TeardownTest()
        {
            try
            {
                selenium.Stop();
            }
            catch (Exception)
            {
                // Ignore errors if unable to close the browser
            }
            Assert.AreEqual("", verificationErrors.ToString());
        }
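
The getConfigSetting helper used above is not shown in the post; a minimal version ( my assumption, it simply reads appSettings and falls back to the compiled-in default ) could look like this.

// Requires a reference to System.Configuration ( ConfigurationManager )
protected string getConfigSetting(string key, string defaultValue)
{
    // Pull the value from the test assembly's config file; fall back to the
    // default when the key is missing or empty.
    string value = System.Configuration.ConfigurationManager.AppSettings[key];
    return string.IsNullOrEmpty(value) ? defaultValue : value;
}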

And a sample Selenium C# UnitTest

//
using System;
using System.Text;
using System.Text.RegularExpressions;
using System.Threading;
//
using Gallio.Framework.Assertions;
using MbUnit.Framework;
//
using Selenium;

namespace SeleniumTests
{
    [TestFixture]
    public class LoginPage : WebTestBase
    {

        [Test]
        public void TheLoginPageTest()
        {
            selenium.Open(this.msBasePath + "TestLogin.aspx");
            selenium.Click("lbAdmin");
            selenium.WaitForPageToLoad("50000");
            selenium.Click("loginLink");
            selenium.WaitForPageToLoad("50000");
            try
            {
                Assert.IsTrue(selenium.IsTextPresent("my responsibilities regarding permissible access"));
            }
            catch (AssertionException e)
            {
                verificationErrors.Append(e.Message);
            }
            selenium.Click("ctl00_pageContent_btnSubmit");
            selenium.WaitForPageToLoad("50000");
            try
            {
                Assert.IsTrue(selenium.IsTextPresent("Total Unassigned Web"));
            }
            }
            catch (AssertionException e)
            {
                verificationErrors.Append(e.Message);
            }
        }
    }
}

After the Nant script compiles the tests, and before it runs the UnitTests, it needs to start up the Selenium engine. Make sure to spawn it so the Selenium engine keeps running in its own process instead of blocking your tests.

<property name="SeleniumExec" value="java" />
  <property name="SeleniumPath" value="C:\apps\selenium\selenium-server-1.0.3\" />
  <property name="SeleniumParams" value="-jar ${SeleniumPath}selenium-server.jar" />

    <!-- Start selenium -->
    <exec program="${SeleniumExec}"
      commandline="${SeleniumParams}"
      workingdir="${path.base}${WebTest}"
      spawn="true"
      failonerror="true"
      verbose="true" />

    <!-- Give it a sec to load -->
    <sleep milliseconds="3000" />

In order to run the tests using different browsers, change the configuration file of the tests before each run.

<!-- Run tests in Firefox browser -->
    <xmlpoke
        file="${path.base.test}${assembly.test.config}"
        xpath="/configuration/appSettings/add[@key='BrowserType']/@value"
        value="*firefox"
        verbose="true"/>

    <call target="runTests" />

<target name="runTests"
    description="runs tests using Gallio." >

    <echo message="*** Start runTests: "/>

      <gallio
        result-property="exitCode"
        failonerror="false"
        report-types="Html;Xml"
        report-directory="${artifacts}"
        report-name-format="gallioresults"
        show-reports="false"
        application-base-directory="${path.base.test}"
            >

          <!-- Specify the tests assemblies -->
          <files>
            <include name="${path.base.test}${assembly.test}"/>
          </files>
        </gallio>

        <!--
            Set error for email injector to pick it up and GlobalFailBuildMessage for
            the end target to fail the build after cleanup
          -->

        <if test="${int::parse(exitCode)!=0}">
          <property name="GlobalFailBuildMessage" value="*** One or more tests failed. Please check the log for more details" dynamic="true" />
          <echo message="EmailInjectMsg=${GlobalFailBuildMessage}" />
        </if>

    <echo message="*** End runTests: "/>
  </target>

And after the UnitTests run, Selenium needs to be shut down.

<!-- Stop Selenium server -->
      <get src="http://localhost:4444/selenium-server/driver/?cmd=shutDownSeleniumServer"
        dest="shutdown.txt" failonerror="false"
      />

As I needed different TeamCity configurations to run against different locations on the webserver, I used a couple of Nant variables that are passed on the command line from TeamCity, like you normally would when running a Nant script, -D:BaseUrlPath=/Test/ etc.

<echo message="*** Location variables passed from TeamCity"/>
    <echo message="*** BrowserUrl=${BrowserUrl} "/>
    <echo message="*** BaseUrlPath=${BaseUrlPath} "/>

Of course you get the Gallio UnitTest report as well

gallio_report

With this setup, once we deploy to a server, we can run all the browser tests against it using different browsers with one click of a button from TeamCity.