Archive for the ‘UnitTest’ Category.

Code coverage from a NAnt script using Gallio and NCover

I decided to go with MbUnit for a new project; the newest version, 3.x, comes bundled with Gallio. Gallio is a test runner that can run loads of different flavors of tests: NUnit, MSTest, MbUnit, etc. Of course, once you have your tests running you wonder how much of your code gets covered. To figure that out I added NCover; here is a sample of how you can have NCover cover your Gallio unit test runs.

For best results, install Gallio on the build machine and point to that directory when you load the task:
loadtasks assembly=
Also make sure your list of assemblies to be covered contains just the assembly names, not the file names:
assembly.list=myAssembly1;myAssembly2;etc

<!-- Gallio -->
<target name="galliounittest"
        description="Runs MbUnit UnitTests using Gallio." >

<echo message="*** Start Gallio unittest: "/>

<!-- Run tests -->
<loadtasks assembly="${path.gallio.task}Gallio.NAntTasks.dll" />

<gallio
  result-property="exitCode"
  failonerror="false"
  runner-type="NCover"
  report-types="Html;Xml"
  report-directory="${artifacts}"
  report-name-format="gallioresults"
  show-reports="false"
  application-base-directory="${path.base.test}"
  >

  <runner-property value="NCoverArguments='//w ${path.base.test} //a ${assembly.list}'" />
  <runner-property value="NCoverCoverageFile='${path.ncover.dir}${coverage.xml.file}'" />
  <!-- Specify the test assemblies -->
  <files>
    <include name="${path.base.test}${assembly.test}"/>
  </files>
</gallio>
<fail if="${exitCode != '0'}" >One or more tests failed. Please check the log for more details.</fail>

<echo message="*** End Gallio unittest: "/>
</target>

<!-- NCover -->
<target name="nCoverReport"
        description="Creates UnitTest Coverage report." >

<echo message="*** Start nCoverReport: "/>

    <ncoverexplorer
      program="${path.ncover.explorer.exe}"
      projectName="${ProjectName}"
      reportType="ModuleClassFunctionSummary"
      outputDir="${path.ncover.dir}"
      xmlReportName="${coverage.xml.file}"
      htmlReportName="${coverage.html.file}"
      showExcluded="false"
      verbose="true"
      satisfactoryCoverage="1"
      failCombinedMinimum="true"
      minimumCoverage="0.0">

      <fileset>
        <include name="${path.ncover.dir}${coverage.xml.file}" />
      </fileset>
      <exclusions>
        <exclusion type="Assembly" pattern="*.Tests" />
        <exclusion type="Namespace" pattern="*.Tests*" />
      </exclusions>
    </ncoverexplorer>

<echo message="*** End nCoverReport: "/>
</target>

NCover error

I ran into a problem when adding NCover to our unit tests on a new project. The error I got when using NCover with Gallio was the following: "Profiled process terminated. Profiler connection not established." The FAQ in the NCover directory bundled with Gallio suggested: "If using the command-line, did you COM register CoverLib.dll?"

Sure enough, just run
>regsvr32 CoverLib.dll
from a command prompt; that fixed the problem on both our build machines.

UnitTesting a door

Some time back I was explaining to somebody from the non-development world what unit testing is about. At the time the best thing I could come up with was something along these lines: if you're building a house and you install a door, you have to make sure the door works. In the case of the door, we write a unit test that makes sure the doorknob turns and actually opens the door. We also make sure that the door latches when it's shut; further, we make sure that the door fully opens and closes without any trouble. We make sure the key fits and can lock and unlock the door. Since we are at it, we might as well shake the door and make sure nothing is rattling and that all the screws were tightened properly. Further, if the door gets replaced, it has to be verified that the new one works as well and that the same key can be used as before.

The good part is that once you have written the unit test, it will work for the replacement door as well. This is where you get the most bang for your buck; unless you never rewrite your software, but that's another story. So let's say your interior designer came up with a new door, a sliding door. Now your tests fail, of course, and they should: the function of the door has changed and it's time to update the unit tests. The base will stay the same; the same key might still fit, the door should still shut properly, etc. One of the new functions might be that the door slides open when a person walks into the sensor that triggers it.

So how does it look when you need to unit test objects in your new webserver project? It's similar, but a little different. The problem is that when you are running on a webserver you are running in a container; the container is the webserver. So logically you can't test the door, as the door can only be accessed when inside the container (installed in the house). The problem with that is that you do not want to unit test against a live webserver, as you cannot get down to the object level to address the door directly. Therefore the trick is to set up a unit test environment that can fake the container/house. Once you do that, your door thinks it's installed in a house and you can verify that the door functions as expected.
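Sketched in code, the idea looks something like this; a minimal sketch in Python, with all the names (Door, FakeFrame, the key) made up purely for illustration:

```python
class FakeFrame:
    """Stands in for the house/container the door is normally mounted in."""
    def __init__(self):
        self.latched = False

    def latch(self):
        self.latched = True

    def release(self):
        self.latched = False


class Door:
    """The object under test; it only needs the small surface of the house
    it touches, so a fake frame lets us test it outside the real container."""
    def __init__(self, frame, key):
        self.frame = frame
        self.key = key
        self.is_open = False
        self.locked = False

    def open(self):
        if self.locked:
            raise RuntimeError("door is locked")
        self.frame.release()
        self.is_open = True

    def shut(self):
        self.is_open = False
        self.frame.latch()

    def unlock(self, key):
        if key != self.key:
            raise ValueError("wrong key")
        self.locked = False


# The unit test exercises the door against the fake container:
frame = FakeFrame()
door = Door(frame, key="front-door-key")
door.shut()
assert frame.latched                      # the door latches when it's shut
door.open()
assert door.is_open and not frame.latched # the knob actually opens the door
```

The point is that the door never notices it is mounted in a fake frame; the same tests keep working when the real door (or a replacement) is dropped in.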

There is only one thing left to say, Jawoll Mein Agile Führer

TeamCity update to version 4.5.5

I just updated our TeamCity environment to version 4.5.5 today; we were running 3.1 before. I had resisted an update, as I always do, because things just break when you update software. Since we were having some network issues, it was a perfect time to take the plunge. That is, after we discovered that TC 4.5+ can use a Team Foundation Server repository. As MSBuild just is not mature enough yet and there are weird quirks all around, we are going to use a combo TC and TFS setup for a new project we just started, mainly running the build using NAnt and NUnit of course.
Back to the update: I took a backup of the TC server directory just in case, then fired up the install. The installer recognized the old version and offered to get rid of it for me. The install ran fairly quickly and picked up all the configuration from the last install. After the server was up, the build agents got pushed to the new version and came online in a matter of a couple of minutes; now that is sweet. No manual installs on the build agent machines, it's automatic! Then I just kicked off a build and everything was business as usual; it can't get any better than this.

Saved by a mock

We had these unit tests, dependent on a server that the code interacts with over TCP/IP, that just didn't work. These tests would seldom run on the build machine, and sometimes not at all. It came to the point that I had disabled all tests that had anything to do with the send/receive functionality. The problem was that you had to make sure the simulation server was running first. If it wasn't running it had to be activated; if it hung it had to be shut down and re-activated; sometimes when it hung the machine needed a reboot, and so on. Some of those tricks you had to learn; with others you just had to reboot and hope things were OK. After a reboot, sometimes they were, sometimes not.

I wrote a mock object to abstract the whole TCP/IP layer and socket interaction. The mock pretends to be the COM component and to do the TCP/IP connection; in reality it's just a debug-only class that gets invoked by the unit tests. The developer who wrote the initial version of the interaction with the COM component that does the TCP/IP calls wrapped all the calls to the COM component. The wrapper also keeps state of what the component is up to; in other words, if the program calls the wrapper to send a request, the state is set to "Sent", etc. The benefit of the wrapper is that in test mode I can just activate the mock instead of the COM component.

The mock object is just a simple class pretending to do the TCP/IP transport. After a send is called it comes back in a bit with an answer, a little faster than the server would have responded to a request. That's fine, as the client is already waiting asynchronously for the answer. It was simple to code and hook up, a good return on investment. After the mock was in place and playing nice with one of the tests, I activated all the others. We haven't had a problem since; no more reboots here and there because the tests fail with send/receive problems.
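The shape of the wrapper-plus-mock arrangement can be sketched like this; a hedged illustration in Python, not the actual code, and every name (MockTransport, Wrapper, the "Sent"/"Received" states) is invented for the example:

```python
class MockTransport:
    """Debug-only stand-in for the COM component and its TCP/IP connection.
    No sockets; it just comes back with an answer right away."""
    def send(self, request, on_reply):
        # Answer faster than the real server would -- fine, because the
        # client is already waiting asynchronously via the callback.
        on_reply("ACK:" + request)


class Wrapper:
    """Wraps all calls to the transport and keeps state of what the
    component is up to, e.g. "Sent" once a request has gone out."""
    def __init__(self, transport):
        self.transport = transport
        self.state = "Idle"
        self.last_reply = None

    def send(self, request):
        self.state = "Sent"
        self.transport.send(request, self._on_reply)

    def _on_reply(self, reply):
        self.last_reply = reply
        self.state = "Received"


# In test mode the mock is activated instead of the COM component:
wrapper = Wrapper(MockTransport())
wrapper.send("PING")
assert wrapper.state == "Received"
assert wrapper.last_reply == "ACK:PING"
```

Because the tests talk only to the wrapper, swapping the real transport for the mock requires no changes to the code under test, which is what makes the disabled tests safe to switch back on.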